
API Deployment News

These are the news items I've curated in my monitoring of the API space that are related to deploying APIs, and that I thought were worth including in my research. I'm using all of these links to better understand how APIs are being deployed across a diverse range of implementations.

Containerized Microservices Monitoring Driving API Infrastructure Visualizations

While I track on what is going on with visualizations generated from data, I haven't seen much that is new and interesting when it comes to API driven visualizations, or specifically visualizations of API infrastructure. This week I came across an interesting example in a post from Netsil about mapping microservices so that you can monitor them. It is a pretty basic visualization of each database, API, and DNS element in your stack, but it does provide a solid example of visualizing not just the deployment of database and API resources, but also DNS, and other protocols in your stack.

The Netsil microservices visualization is focused on monitoring, but I can see this type of visualization also being applied to design, deployment, management, logging, testing, and any other stop along the API lifecycle. I can see API lifecycle visualization tooling like this becoming more commonplace, and playing more of a role in making API infrastructure more observable. Visualizations are an important part of the storytelling around API operations, moving things beyond just IT and dev team monitoring, and making operations more observable by all stakeholders.

I'm glad to see service providers moving the needle when it comes to helping visualize API infrastructure. I'd like to see more embeddable solutions that can be deployed to Github emerge as part of API life cycle monitoring. I'd also like to see what full life cycle solutions are possible when it comes to my partners, like deployment visualizations for Tyk and Dreamfactory APIs, management visualizations with 3Scale APIs, and monitoring and testing visualizations using Runscope. I'll play around with pulling data from these providers, and publishing it to Github as YAML, which I can then easily make available as JSON or CSV for use in some basic visualizations.

If you think about it, there really should be a wealth of open source dashboard visualizations that could be embedded on any public or private Github repository, for every API service provider out there. API providers should be able to easily map out their API infrastructure, using any of the API service providers they are already using to operate their APIs. Think of some of the embeddable API status pages we see out there already, and what Netsil is offering for mapping out infrastructure, but something for every stop along the API life cycle, helping deliver visualizations of API infrastructure no matter which stop you find yourself at.


Internet Connectivity As A Poster Child For How Markets Work Things Out

I have a number of friends who worship markets, and love to tell me that we should be allowing them to just work things out. They truly believe in the magical powers of markets, that they are great equalizers, and that they work out all the world's problems each day. ALL the folks who tell me this are dudes, with 90% being white dudes. From their privileged vantage point, markets are what brings balance and truth to everything–may the best man win, survival of the fittest, may the best product win, and all of that delusion.

From my vantage point, markets work things out for business leaders. Markets do not work things out for people. Markets don't care about people with disabilities. Markets don't see education and healthcare any differently than they see financial products and commodities–they just work to find the most profit they possibly can. Markets work so diligently and blindly towards this goal that they will even do so to their own detriment, while believers think this is just how things should be–the markets decided.

I see Internet connectivity as a great example of markets working things out. We've seen consolidation of network connections into the hands of a few cable and telco giants. These market forces are looking to work things out and squeeze every bit of profit out of their networks that they can, completely ignoring the opportunities that are available when the networks operate at scale, and freely operate to everyone's benefit. Instead of paying attention to the bigger picture, these Internet gatekeepers are all about squeezing every nickel they can for every bit of bandwidth that is currently being transmitted over the network.

The markets that are working the Internet out do not care if the bits on the network are from a school, a hospital, or you playing an online game and watching videos–they just want to meter and throttle them. They may care just enough to understand where they can possibly charge more because it is a matter of life or death, or it is your child's education, so you are willing to pay more, but as far as actually equipping our world with quality Internet–they couldn't care less. Cable providers and telco operators are in the profit making business, using the network that drives the Internet, even at the cost of the future–this is how short sighted markets are.

AT&T, Verizon, and Comcast do not care about the United States remaining competitive in a global environment. They care about profits. AT&T, Verizon, and Comcast do not care about folks in rural areas having quality broadband so they can remain competitive with metropolitan areas. They care about profits. In these games, markets may work things out between big companies, deciding who wins and loses, but markets do not work things out for people who live in rural areas, or who depend on the Internet for education and healthcare. Markets do not work things out for people, they work things out for businesses, and the handful of people who operate these businesses.

So, when you tell me that I should trust that markets will work things out, you are showing me that you do not care about people–except for that handful of business owners whose club you hope to someday join. Markets rarely ever work things out for average people, let alone people of color, people with disabilities, and beyond. When you tell me about the magic of markets, you are demonstrating to me that you don't see these layers of society. Which demonstrates your privilege, your lack of empathy for the humans around you, while also demonstrating how truly sad your life must be, because it is lacking in meaningful interactions with a diverse slice of the life we are living on this amazing planet.


Challenges When Aggregating Data Published Across Many Years

My partner in crime is working on a large data aggregation project regarding ed-tech funding. She is publishing data to Google Sheets, and I'm helping her develop Jekyll templates she can fork and expand using Github when it comes to publishing and telling stories around this data across her network of sites. Like API Evangelist, Hack Education runs as a network of Github repositories, with a common template across them–we call the overlap between API Evangelist and Hack Education, Contrafabulists.

One of the smaller projects she is working on as part of her ed-tech funding research involves pulling the grants made by the Gates Foundation since the 1990s. It is similar to my story a couple weeks ago about my friend David Kernohan, who was looking to pull data from multiple sources and aggregate it into a single, workable project. Audrey is looking to pull data from a single source, but because the data spans almost 20 years–it ends up being a lot like aggregating data from across multiple sources.

A couple of the challenges she is facing in trying to gather the data, and aggregate it as a common dataset, are:

  • PDF - The enemy of any open data advocate is the PDF, and a portion of her research data is only available in PDF format, which translates into a good deal of manual work.
  • Search - Other portions of the data are available via the web, but obfuscated behind search forms, requiring many different searches to occur, with paginated results to navigate.
  • Scraping - The lack of APIs, CSV, XML, and other machine readable results raises the bar when it comes to aggregating and normalizing data across many years, making scraping a consideration, but because of PDFs, and obfuscated HTML pages behind a search, even scraping will have a significant cost.
  • Format - Even once you’ve aggregated data from across the many sources, there is a challenge with it being in different formats. Some years are broken down by topic, while others are geographically based. All of this requires a significant amount of overhead to normalize and bring into focus.
  • Manual - Ultimately Audrey has a lot of work ahead of her, manually pulling PDFs and performing searches, then copying and pasting data locally. Then she’ll have to roll up her sleeves to normalize all the data she has aggregated into a single, coherent vision of where the foundation has put its money.

Data research takes time, and is tedious, mind numbing work. I encounter many projects like hers where I have to make a decision between scraping or manually aggregating and normalizing data–each project will have its own pros and cons. I wish I could help, but it sounds like it will end up being a significant amount of manual labor to establish a coherent set of data in Google Sheets. Once she is done, though, she will have all the tools in place to publish the data as YAML to Github, and get to work telling stories around it across her work using Jekyll and Liquid. I'm also helping her make sure she has a JSON representation of each of her data projects, allowing others to build on top of her hard work.

I wish all companies, organizations, institutions, and agencies would think about how they publish their data publicly. It's easy to assume that data stewards have ill intentions when they publish data in a variety of formats like this, but more likely it is just a change of stewardship when it comes to managing and publishing the data. Different folks will have different visions of what sharing data on the web needs to look like, and different tools available to them, and without a clear strategy you'll end up with a mosaic of published data over the years. Which is why I'm telling her story. I am hoping to influence one or two data stewards, or would-be data stewards, when it comes to the importance of pausing for a moment and thinking through your strategy for standardizing how you store and publish your data online.


Each Airtable Datastore Comes With Complete API and Developer Portal

I see a lot of tools come across my desk each week, and I have to be honest, I don't always fully get what they are and what they do. There are many reasons why I overlook interesting applications, but the most common reason is that I'm too busy and do not have the time to fully play with a solution. One application I've been keeping an eye on as part of my work is Airtable, which, I have to be honest, I didn't fully get–or really, I just didn't notice because I was too busy.

Airtable is part spreadsheet, part database, operating as a simple, easy to use web application, from which, with the push of a button, you can publish an API. You don't just get an API by default with each Airtable, you also get a pretty robust developer portal for your API, complete with good looking API documentation. Allowing you to go from an Airtable (spreadsheet / database) to API and documentation–no coding necessary. Trust me. Try it out, anyone can create an Airtable and publish an API that any developer can visit and quickly understand what is going on.
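
To give a sense of what the generated API looks like from the consumer side, here is a minimal JavaScript sketch of reading records from an Airtable base. The base ID, table name, and API key are placeholders I made up, and the exact response shape may vary depending on the version of the Airtable API you are working with.

```javascript
// Minimal sketch: read rows from an Airtable base via its generated REST API.
// The base ID, table name, and API key below are placeholders, not real values.
const BASE_ID = 'appXXXXXXXXXXXXXX';
const TABLE = 'Products';
const API_KEY = 'keyXXXXXXXXXXXXXX';

async function listRecords() {
  const url = `https://api.airtable.com/v0/${BASE_ID}/${encodeURIComponent(TABLE)}`;
  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${API_KEY}` }
  });
  const data = await response.json();
  // Each record carries an id plus a fields object keyed by column name.
  data.records.forEach((record) => console.log(record.id, record.fields));
}

listRecords();
```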

As a developer, API deployment still feels like it can be a lot of work. Then, once I take off my programmer's hat, and put on my business user hat, I see that there are some very easy to use solutions like Airtable available to me. Knowing how to code is almost slowing me down when it comes to API deployment. Sure, the APIs that Airtable publishes aren't the perfectly designed, artisanally crafted APIs I make with my bare hands, but they work just as well as mine. Most importantly, they get business done. No coding necessary. Something that anyone can do without the burden of programming.

Airtable provides me another solution that I can recommend my readers and clients consider using when managing their data, one that will also allow them to easily deploy an API for developers to build applications against. I also notice that Airtable has a whole API integration part of their platform, which allows you to integrate your Airtables with other APIs–something I will have to write about separately in a future post. I just wanted to make sure and take the time to properly add Airtable to my research, and write a story about them so that they are in my brain, available for recall when people ask me for easy to use solutions that will help them deploy an API.


When You Publish A Google Sheet To The Web It Also Becomes An API

When you take any Google Sheet and choose to publish it to the web, you immediately get an API. Well, you get the HTML representation of the spreadsheet (shared with the web), and if you know the right way to ask, you can also get the JSON representation of the spreadsheet–which gives you an interface you can program against in any application.

The articles I curate, and the companies, institutions, organizations, government agencies, and everything else I track on, live in Google Sheets that are published to the web in this way. When you are viewing any Google Sheet in your browser you are viewing it using a URL like:

https://docs.google.com/spreadsheets/d/[sheet_id]/edit

Of course, [sheet_id] is replaced with the actual id for your sheet, but the URL demonstrates what you will see. Once you publish your Google Sheet to the web you are given a slight variation on that URL:

https://docs.google.com/spreadsheets/d/[sheet_id]/pubhtml

This is the URL you will share with the public, allowing them to view the data you have in your spreadsheet in their browsers. In order to get at a JSON representation of the data you just need to learn the right way to craft the URL using the same sheet id:

https://spreadsheets.google.com/feeds/list/[sheet_id]/default/public/values?alt=json

Ok, one thing I have to come clean on is that the JSON available for each Google Sheet is not the most intuitive JSON you will come across, but once you learn what is going on you can easily consume the data within a spreadsheet using any programming language. Personally, I use a JavaScript library called tabletop.js that quickly helps you make sense of a spreadsheet and get to work using the data in any (JavaScript) application.
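
As a minimal sketch of what programming against this feed can look like without a helper library, the snippet below pulls the public JSON and flattens each row into a plain object. It assumes the feed exposes column values under gsx$-prefixed keys, each with a $t text property–treat that as my reading of the feed format rather than official documentation, and the sheet ID is a placeholder.

```javascript
// Minimal sketch: pull the public JSON feed for a published Google Sheet
// and flatten each row into a plain object. The sheet ID is a placeholder.
const SHEET_ID = '[sheet_id]';
const FEED_URL = `https://spreadsheets.google.com/feeds/list/${SHEET_ID}/default/public/values?alt=json`;

async function getRows() {
  const response = await fetch(FEED_URL);
  const data = await response.json();
  // Column values live under gsx$-prefixed keys, each with a $t text property.
  return data.feed.entry.map((entry) => {
    const row = {};
    Object.keys(entry)
      .filter((key) => key.startsWith('gsx$'))
      .forEach((key) => { row[key.replace('gsx$', '')] = entry[key].$t; });
    return row;
  });
}

getRows().then((rows) => console.log(rows));
```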

The fastest, lowest cost way to deploy an API is to put some data in a Google Sheet, and hit publish to the web. Ok, it's not a full blown API, it's just JSON available at a public URL, but it does provide an interface you can program against when developing an application. I take all the data I have in spreadsheets and publish it to Github as YAML, and then make static APIs available using that YAML in XML, CSV, JSON, Atom, or any other format that I need. Taking the load off Google, and creating a cached version at any point in time that runs on Github, in a versioned repository that anyone can fork, or integrate into any workflow.


Being First With Any Technology Trend Is Hard

I first wrote about Iron.io back in 2012. They are an API-first company, and they were the first serverless platform. I've known the team since they first reached out back in 2011, and I consider them one of my poster children for why there is more to all of this than just the technology. Iron.io gets the technology side of API deployment, and they saw the need for enabling developers to go serverless, running small scalable scripts in the cloud, and offloading the backend worries to someone who knows what they are doing.

Iron.io is what I'd consider to be a pretty balanced startup, growing slowly, and taking the sensible amounts of funding they needed to grow their business. The primary area where I would say Iron.io has fallen short is storytelling about what they are up to, and generally playing the role of the shiny startup everyone should pay attention to. They are great storytellers, but unfortunately the frequency and amplification of their stories has fallen short, allowing other strong players to fill the void–opening the door for Amazon to take the lion's share of the conversation when it comes to serverless. Demonstrating that you can rock the technology side of things, but if you don't also rock the storytelling and more theatrical side of things, there is a good chance you will come in second.

Storytelling is key to all of this. I always love the folks who push back on me, saying that nobody cares about these stories, that the markets only care about successful, strong companies–when in reality, IT IS ALL ABOUT STORYTELLING! Amazon's platform machine is good at storytelling. Not just their serverless group, but the entire platform. They blog, tweet, publish press releases, whisper in reporters' ears, buy entire newspapers, publish science fiction patents, conduct road shows, and run flagship conferences. Each AWS platform team can tap into this, participate, and benefit from the momentum, helping them dominate the conversation around their particular technical niche.

Being first with any technology trend will always be hard, but it will be even harder if you do not consistently tell stories about what you are doing, and what those who are using your platform are doing with it. Iron.io has been rocking it for five years now, and is continuing to define what serverless is all about, they just need to turn up the volume a little bit, and keep doing what they are doing. I'll own a portion of this story, as I probably didn't do my share to tell more stories about what they are up to, which would have helped amplify their work over the years–something I'm working to correct with a little storytelling here on API Evangelist.


Bringing The API Deployment Landscape Into Focus

I am finally getting the time to invest more into the rest of my API industry guides, which involves deep dives into core areas of my research like API definitions, design, and now deployment. The outline for my API deployment research has begun to come into focus and looks like it will rival my API management research in size.

With this release, I am looking to help onboard some of my less technical readers with API deployment. Not the technical details, but the big picture, so I wanted to start with some simple questions, to help prime the discussion around API deployment.

  • Where? - Where are APIs being deployed? On-premise, and in the clouds. Traditional website hosting, and even containerized and serverless API deployment.
  • How? - What technologies are being used to deploy APIs? From using spreadsheets, document and file stores, or the central database, to thinking smaller with microservices, containers, and serverless.
  • Who? - Who will be doing the deployment? Of course, IT and developer groups will be leading the charge, but increasingly business users are leveraging new solutions to play a significant role in how APIs are deployed.

The Role Of API Definitions

While not every deployment will be auto-generated using an API definition like OpenAPI, API definitions are increasingly playing a lead role as the contract that doesn't just deploy an API, but sets the stage for API documentation, testing, monitoring, and a number of other stops along the API lifecycle. I want to make sure to point out in my API deployment research that API definitions aren't just overlapping with deploying APIs, they are essential to connecting API deployments with the rest of the API lifecycle.

Using Open Source Frameworks

Early on in this research guide I am focusing on the most common way for developers to deploy an API, using an open source API framework. This is how I deploy my APIs, and there are an increasing number of open source API frameworks available out there, in a variety of programming languages. In this round I am taking the time to highlight at least six separate frameworks in the top programming languages where I am seeing sustained deployment of APIs using a framework. I don't take a stance on any single API framework, but I do keep an eye on which ones are still active, and enjoying usage by developers.

Deployment In The Cloud

After frameworks, I am making sure to highlight some of the leading approaches to deploying APIs in the cloud, going beyond just a server and framework, and leveraging the next generation of API deployment service providers. I want to make sure that both developers and business users know that there are a growing number of service providers who are willing to assist with deployment, and with some of them, no coding is even necessary. While I still like hand-rolling my APIs using my preferred framework, when it comes to some simpler, more utility APIs, I prefer offloading the heavy lifting to a cloud service, saving me the time spent getting my hands dirty.

Essential Ingredients for Deployment

Whether in the cloud, on-premise, or even on device and at the network level, there are some essential ingredients to deploying APIs. In my API deployment guide I wanted to make sure and spend some time focusing on the essential ingredients every API provider will have to think about.

  • Compute - The base ingredient for any API, providing the compute under the hood. Whether it's bare metal, cloud instances, or serverless, you will need a consistent compute strategy to deploy APIs at any scale.
  • Storage - Next, I want to make sure my readers are thinking about a comprehensive storage strategy that spans all API operations, and hopefully multiple locations and providers.
  • DNS - Then I spend some time focusing on the frontline of API deployment–DNS. In today's online environment DNS is more than just addressing for APIs, it is also security.
  • Encryption - I also make sure encryption is baked into all API deployment by default, both in transit and in storage.

Some Of The Motivations Behind Deploying APIs

In previous API deployment guides I usually just listed the services, tools, and other resources I had been aggregating as part of my monitoring of the API space. Slowly I have begun to organize these into a variety of buckets that help speak to many of the motivations I encounter when it comes to deploying APIs. While not a perfect way to look at API deployment, it helps me think about the many reasons people are deploying APIs, and craft a narrative, and provide a guide for others to follow, that is potentially aligned with their own motivations.

  • Geographic - Thinking about the increasing pressure to deploy APIs in specific geographic regions, leveraging the expansion of the leading cloud providers.
  • Virtualization - Considering the fact that not all APIs are meant for production and there is a lot to be learned when it comes to mocking and virtualizing APIs.
  • Data - Looking at the simplest of Create, Read, Update, and Delete (CRUD) APIs, and how data is being made more accessible by deploying APIs.
  • Database - Also looking at how APIs are being deployed from relational, NoSQL, and other data sources–providing the most common way for APIs to be deployed.
  • Spreadsheet - I wanted to make sure and not overlook the ability to deploy APIs directly from a spreadsheet, putting APIs within reach of business users.
  • Search - Looking at how document and content stores are being indexed and made searchable, browsable, and accessible using APIs.
  • Scraping - Another often overlooked way of deploying an API, from the scraped content of other sites–an approach that is alive and well.
  • Proxy - Evolving beyond early gateways, using a proxy is still a valid way to deploy an API from existing services.
  • Rogue - I also wanted to think more about some of the rogue API deployments I’ve seen out there, where passionate developers reverse engineer mobile apps to deploy a rogue API.
  • Microservices - Microservices has provided an interesting motivation for deploying APIs–one that potentially can provide small, very useful and focused API deployments.
  • Containers - One of the evolutions in compute that has helped drive the microservices conversation is the containerization of everything, something that complements the world of APIs very well.
  • Serverless - Augmenting the microservices and container conversation, serverless is motivating many to think differently about how APIs are being deployed.
  • Real Time - Thinking briefly about real time approaches to APIs, something I will be expanding on in future releases, and thinking more about HTTP/2 and evented approaches to API deployment.
  • Devices - Considering how APIs are being deployed on device, when it comes to Internet of Things, industrial deployments, as well as at the network level.
  • Marketplaces - Thinking about the role API marketplaces like Mashape (now RapidAPI) play in the decision to deploy APIs, and how other cloud providers like AWS, Google, and Azure will play in this discussion.
  • Webhooks - Thinking of API deployment as a two way street. Adding webhooks into the discussion and making sure we are thinking about how webhooks can alleviate the load on APIs, and push data and content to external locations.
  • Orchestration - Considering the impact of continuous integration and deployment on API deployment specifically, and looking at it through the lens of the API lifecycle.

I feel like API deployment is still all over the place. The mandate for API management was much better articulated by API service providers like Mashery, 3Scale, and Apigee, and nobody has taken the same lead when it comes to API deployment. Service providers like DreamFactory and Restlet have kicked ass when it comes to not just API management, but making sure API deployment is also part of the puzzle, and newer API service providers like Tyk are also pushing the envelope, but I still don't have the number of API deployment providers I'd like, when it comes to referring my readers. It isn't a coincidence that DreamFactory, Restlet, and Tyk are API Evangelist partners, it is because they have the services I want to be able to recommend to my readers.

This is the first time I have felt like my API deployment research has been in any sort of focus. I carved this layer of my research off of my API management research some years ago, but I really couldn't articulate it very well beyond just open source frameworks, and the emerging cloud service providers. After I publish this edition of my API deployment guide I'm going to spend some time in the 17 areas of my research listed above. All these areas are heavily focused on API deployment, but I also think they are all worth looking at individually, so that I can better understand where they also intersect with other areas like management, testing, monitoring, security, and other stops along the API lifecycle.


The Growing Importance of Geographic Regions In API Operations

I have been revisiting my earlier work on an API rating system. One area that keeps coming up as I’m working is around the availability of APIs in a variety of regions, and the cloud platforms that are driving them. I have talked about regional availability of APIs for some time now, keeping an eye on how API providers are supporting multiple regions, as well as the expanding world of cloud computing that is powering these regional examples of providing and consuming APIs.

I have been watching Amazon rapidly expand their available regions, as well as Google and Microsoft racing to catch up. But I am starting to see API providers like Digital Ocean providing APIs for getting at geographic region information, and Amazon provides API methods for getting the available regions for Amazon EC2 compute–I will have to check if this is standard across all services. Twilio has regions for their API client, and Runscope has a region API for managing how you run API tests from a variety of regions. The role of geographic regions when it comes to providing APIs, as well as consuming APIs is increasingly part of the conversation when you visit the most mature API platforms, and something that keeps coming up on my radar.
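
As a quick illustration of what this looks like from the consumer side, the EC2 API exposes a DescribeRegions call, which the JavaScript AWS SDK surfaces roughly like the sketch below–the SDK version and credential setup are assumptions on my part, so treat it as illustrative rather than authoritative.

```javascript
// Minimal sketch: list the regions available for EC2 compute.
// Assumes the v2 JavaScript AWS SDK and credentials configured in the environment.
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

ec2.describeRegions({}, (err, data) => {
  if (err) {
    console.error('DescribeRegions failed:', err);
    return;
  }
  // Each entry includes the region name and its service endpoint.
  data.Regions.forEach((region) => console.log(region.RegionName, region.Endpoint));
});
```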

We are still far from the average company being able to easily deploy, deprecate, and migrate APIs seamlessly across cloud providers and geographic regions, but as APIs become smaller and more modular, and cloud providers add more regions, and APIs to support automation around these regions, we will begin to see more decisions being made at deploy and run time regarding where you want to deploy or consume your API resources. To be able to do this we are going to need a lot more data and common schema regarding what geographic regions are available for deployment, what services operate in which regions, and other key considerations about exactly where our resources should operate. This is why I'm revisiting this work, to see what I can do to get API service providers to share more data from either the API provider or consumer side of the equation.

I am considering adding an area of my research dedicated to API regions, aggregating examples of how geographic regions are playing a role in API operations. I'm thinking region availability will play just as significant a role as performance, plans, security, reliability, and other areas of the API lifecycle when it comes to deciding where you deploy or consume your APIs. It feels like another one of the aspects of API operations that will overlap with many stops along the API lifecycle–not just deployment. One of the areas of the API lifecycle I'm increasingly thinking about that will affect geographic API decisions is regulation, and how governments are dictating what is acceptable when it comes to the storage, transmission, and access of digital resources. It feels like early notions of what the World Wide Web has been for the last 25 years are about to be blown out of the water, with the influences of digital nationalism, regulation, or even the Internet moving off planet, and increasingly being driven by satellite infrastructure.


Publishing Your API In The AWS Marketplace

I've been watching the conversation around how APIs are discovered since 2010, and I have been working to understand where things might be going beyond ProgrammableWeb, to the Mashape Marketplace, and even investing in my own API discovery format, APIs.json. It is a layer of the API space that feels very bipolar to me, with highs and lows, and a lot of meh in the middle. I do not claim to have "the solution" when it comes to API discovery and prefer just watching what is happening, and contributing where I can.

A number of interesting signals for API deployment, as well as API discovery, are coming out of the Amazon Marketplace lately. I find myself keeping a closer eye on the almost 350 API related solutions in the marketplace, and today I'm specifically taking notice of the Box API availability in the AWS Marketplace. I find this marketplace approach interesting, not just for API discovery via an API marketplace, but also for API deployment. AWS isn't just a marketplace of APIs, where you find what you need and integrate directly with that provider. It is where you find your API(s) and then spin up an instance within your AWS infrastructure that facilitates that API integration–a significant shift.

I'm interested in the coupling between API providers and AWS. AWS and Box have entered into a partnership, but their approach provides a possible blueprint for how this approach to API integration and deployment can scale. How tightly coupled each API provider chooses to be, looser (a proxy calling the API), or tighter (deploying the API as an AMI), will vary from implementation to implementation, but the model is there. The Box AWS Marketplace instance's dependencies on the Box platform aren't evident to me, but I'm sure they can easily be quantified, and are something I hope to get other API providers to articulate when publishing their API solutions to the AWS Marketplace.

AWS is moving towards earlier visions I’ve had of selling wholesale editions of an API, helping you manage the on-premise and private label API contracts for your platform, and helping you explore the economics of providing wholesale editions of your platforms, either tightly or loosely coupled with AWS infrastructure. Decompiling your API platform into small deployable units of value that can be deployed within a customer’s existing AWS infrastructure, seamlessly integrating with existing AWS services.

I like where Box is going with their AWS partnership. I like how it is pushing forward the API conversation when it comes to using AWS infrastructure, and specifically the marketplace. I’ll keep an eye on where things are going. Box seems to be making all the right moves lately by going all in on the OpenAPI Spec, and decompiling their API platform making it deployable and manageable from the cloud, but also much more modular and usable in a serverless way. Providing us all with one possible blueprint for how we handle the technology and business of our API operations in the clouds.


API Providers Localizing Compute For Developers Using Serverless

Twilio launched Twilio Functions this last week, localizing serverless infrastructure for Twilio API consumers when it comes to powering the key functionality that Twilio brings to the table. This seems like a logical move for mature API providers, keeping in tune with shifts in how developers are integrating with APIs, and deploying their applications in a DevOps, continuous integration world.
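
To give a sense of what this localized compute looks like, here is a minimal sketch of a Twilio Function that replies to an inbound SMS. The handler signature and the globally available Twilio helper reflect my understanding of their platform, so treat the details as illustrative rather than official.

```javascript
// Minimal sketch of a Twilio Function: respond to an inbound SMS without
// standing up any servers. Details are illustrative, based on my reading of the platform.
exports.handler = function (context, event, callback) {
  const twiml = new Twilio.twiml.MessagingResponse();
  // event.Body carries the text of the inbound message.
  twiml.message(`You said: ${event.Body || 'nothing at all'}`);
  // Returning the TwiML tells Twilio how to respond to the sender.
  callback(null, twiml);
};
```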

I could see other API providers following Twilio’s lead, jumping on the serverless bandwagon, and localizing compute within their API ecosystems. I can see this approach converging with other movements in the SDK space where service providers like APIMATIC are enabling the continuous deployment of SDKs, samples, and other scripts for API integration. Allowing developers to quickly deploy integration scripts, in the programming language of choice–all baked into their existing API platform developer arrangement.

It makes sense that some of the common approaches emerging across the API space, like containerization, webhooks, serverless, evented and other real-time technologies, make their way to being baked in, or at least augmenting existing API operations. I don't think that every API provider should be following Twilio's lead in every area, but they do provide a pretty interesting example to consider when we think about where the API space might be headed–I find the most mature API providers are just as important to keep an eye on as each wave of startups.

I'll keep an eye on serverless being localized like this with other API providers. It seems like an opportunity for some provider to develop a white label solution that helps API providers deliver scripting, events, webhooks, and other emerging ways to orchestrate and integrate with APIs like Twilio is doing.


Craft Your API Design Guide So You Can Move To Other Areas of The Lifecycle

I am working on an API definition and design guide for my human services API work, helping establish a framework for approaching API design as part of the human services data and API specification, but also for implementers to follow in their own individual deployments. Every time I work on the subject of API design, I’m reminded of how far behind the API sector is when it comes to standardizing what it is we do.

Every month or so I see a new company publicly share their API design guide. When they do, my friend Arnaud always adds it to his API Stylebook, adding to the wealth of information available in his work. I'm happy to see each API design guide release, but in reality, ALL API providers should have an API design guide, and they should also be open to publishing it publicly, showing their consumers they have their act together, and sharing with the wider API community the best practices in play.

The lack of companies sharing their API design practices and their API definitions is why we have such a deficiency when it comes to common API patterns in use. It is why we have so many variations of web APIs, as well as the underlying schema. We have an API industry because early practitioners like SalesForce, Amazon, eBay, Flickr, Delicious, Twitter, Youtube, and others were open with their API operations. People emulate what they see and know. Each wave of the API sector depends on the previous wave sharing what they do publicly–it is how this all works.

To demonstrate even further how deficient we are, I do not find companies sharing their guides for API deployment, management, testing, monitoring, clients, and other stops along the API lifecycle. I'm glad we are seeing an uptick in the number of API design guides, but we need this practice to spread to every other stop. We need successful providers to share how they deploy their APIs, and when any company hires a new developer, they should ALWAYS be given a standard guide for deploying, managing, and testing, as well as designing APIs.

It’s not rocket science, and honestly, it’s not even technical. It just means pausing for a moment, thinking about how we approach each stop in the API lifecycle, writing up an overview, publishing, and sharing it with API stakeholders, and even the wider API community. Every company doing APIs in 2017 should be crafting an API design guide so you can get to work on guides for the other areas of your lifecycle, thinking through and standardizing your approach, and making it known to every person involved–ideally, you are also being very public about all of this, and sharing your work with me and Arnaud, so we can get the word out about the good stuff you are up to! ;-)


My Google Sheet Driven Product API And Web Page

I am in the process of eliminating the MySQL backend behind much of my research, eliminating a business expense, as well as an unnecessary complexity in my architecture. There really is no reason for the data I use in my business to be in a database. Nothing I track on tends to go beyond 10K rows, with most of the tables actually being less than 100 rows–perfect for spreadsheets, and my new static approach to delivering APIs, and websites for my research.

The time had come to update some of the products on my website, and I thought my product page was a perfect candidate for this approach, providing me with the following elements:

  • Products Google Sheet - I have a simple spreadsheet with all of my products in it.
  • Jekyll YAML Data Store - I have a YAML data store in the _data folder for API Evangelist.
  • Google Sheet to YAML Sync - I have a JavaScript function that pulls the data from the Google Sheet, converts it to YAML, and writes to the _data folder in the Jekyll repository (a sketch of this sync follows below).
  • Products Web Page - I have a page that lists all the products in the YAML file as HTML using Liquid.
  • Products API - I have a JSON page that lists all the products in the YAML file as JSON using Liquid.
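
The sync function itself does not need to be complicated. Below is a minimal Node.js sketch of the Google Sheet to YAML step, assuming a getRows() helper like the one in the earlier Google Sheets post, and the js-yaml package–the module path and output location are placeholders for however your Jekyll project is organized.

```javascript
// Minimal sketch: write rows pulled from a published Google Sheet into the
// Jekyll _data folder as YAML. The ./google-sheet module is a hypothetical
// wrapper around the public JSON feed described in the earlier post.
const fs = require('fs');
const yaml = require('js-yaml');
const { getRows } = require('./google-sheet');

async function syncProducts() {
  const products = await getRows();
  // Jekyll picks up anything in _data, making it available as site.data.products.
  fs.writeFileSync('_data/products.yaml', yaml.dump(products));
}

syncProducts();
```

From there, the products web page and the products API are just Liquid loops over site.data.products, with one template rendering HTML and the other rendering JSON.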

This simple approach to publishing static APIs using Google Sheets and Github is working well for little data like this–I am all about the little data, while everyone else is excited about big data. ;-) I even have the beginning of some documentation and an updated APIs.json for my website.

Next, I’ll work through the rest of my projects, organizations, tools, and other data I track on as part of my API research. I’ll be publishing a complete snapshot of this data at API Evangelist, as well as subsets of it at each of the individual research projects. When I’m done I’ll have a nice static stack of APIs for all of my research, easily managed via Google Sheets, and YAML on Github.


Google Spanner Is A Database With An API Core


Regional Availability When It Comes To API Access

I have been profiling the Microsoft Azure platform over the last couple of weeks, and I found their approach to talking about the regions that were available was worth taking note of. I haven't actually assessed who has more regions, but Azure's approach seems to be pretty advanced, even if AWS might possess more regions (gut feeling). By profiling these cloud services and their available APIs using OpenAPI I am hoping to eventually develop a machine-readable approach to comparing which providers are available within which regions.

Google has a regions page, but it doesn't feel as forward leaning as AWS's and Azure's. It is interesting to watch how each of these providers is handling the availability of API services in a variety of regions across North and South America, Europe, Asia, Africa, and the Middle East. I've been watching how providers think about the availability of API resources in different geographic regions for a while, but after seeing Azure evolve in this area, it is something I'll keep a closer eye on moving forward.

Increasing the number of available regions is definitely the biggest concern for providers, and something that smaller providers will be able to piggyback on and expand with as the top cloud providers grow and expand their regions. API providers and API service providers should be expanding the number of regions available, but everyone involved also needs to get more organized about how they communicate with customers about which regions are available--region availability should be communicated at the highest level, like we see with the AWS, Google, and Azure deployment pages, but providers should also work to articulate which regions are available at the individual API level.

As data and algorithmic nationalism continue to grow, we are going to see more focus from providers when it comes to enabling their customers' deployment and operation of APIs in exactly the region they need. I'm guessing that with the evolution of software-defined networking (SDN), we are going to see more control over the transport and routing of the data, content, and other resources we are making available via our regionally deployed APIs. Along with the other channels and building blocks that I tune into, I will start working to define a schema for tracking regions, allowing me to index which APIs are available in specific regions using APIs.json.


Your Wholesale API For Sale In The Major API Marketplaces

I have been talking about selling wholesale APIs for some time now, allowing your potential customers to pick and choose exactly the API infrastructure they need, and develop their own virtualized API stacks. I'm not talking about publishing your retail API into marketplaces like Mashape, I'm talking about making your API deployable and manageable on all the major cloud providers.

You can see this shift in business with a recent AWS email I got telling me about multi-year contracts for SaaS and APIs. Right now there are 70 SaaS products on the AWS Marketplace, but from the email I can tell that Amazon is really trying to expand its API offerings as well. When you deploy an API solution using the AWS Marketplace, and a customer signs up for a one, two, or three year contract, they don't pay for the underlying AWS infrastructure, just for the SaaS or API solution. I will have to explore more to see if this cost is just absorbed by the API operator, or if AWS is working to incentivize this type of wholesale API deployment in their marketplace, locking in providers and consumers.

I'm still learning about how Amazon is shifting the landscape for deploying and managing APIs in this wholesale, almost API broker type of way. I recently came across the AWS Serverless API Portal, which is meant to augment the delivery of SaaS or API solutions in this way. With this model you could be in the business of deploying API developer portals for companies, and filling the catalog with a variety of wholesale API resources, from a variety of providers--opening up a pretty interesting opportunity for white label APIs, and API brokers.

As I'm studying this new approach to deploying and managing APIs using marketplaces like this, I'm also noticing a shift towards delivering more algorithmic APIs, with machine learning, artificial intelligence, and other voodoo as the engine--resulting in a shift towards machine learning API marketplaces. I really need to carve off time to think about API deployment and management in this way. I've already begun looking at what it takes to deploy bare bones, wholesale APIs using the AWS, Google, Heroku, or Azure clouds, but I really haven't invested much in the business side of all of this, an area where Amazon seems to be slightly ahead of the curve.


Human Service APIs On AWS, Azure, Google, and Heroku

I have several volunteers available to do work on Open Referral's Human Services Data Specification (API). I have three developers who are ready to work on some projects, as well as an ongoing stream of potential developers I would like to keep busy working on a variety of implementations. I am focusing attention on the top four cloud platforms that companies are using today: AWS, Azure, Google, and Heroku. 

I am looking to develop a rolling wave of projects that will run on any cloud platform, as well as take advantage of the unique features that each provider offers. I've set up Github projects for managing the brainstorming and development of solutions for each of the four cloud platforms:

  • AWS - A project site outlining the services, tooling, projects, and communication around HSDS AWS development.
  • Azure - A project site outlining the services, tooling, projects, and communication around HSDS Azure development.
  • Google - A project site outlining the services, tooling, projects, and communication around HSDS Google development.
  • Heroku - A project site outlining the services, tooling, projects, and communication around HSDS Heroku development.

I want to incentivize the development of APIs that follow v1.1 of the HSDS OpenAPI. I'm encouraging PHP, Python, Ruby, and Node.js implementations, but am open to other suggestions. I would like to have very simple API implementations in each language, running on all four of the cloud platforms, with push button (or at least easy) installation from Github for each implementation.
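
To make the "very simple" part concrete, here is a hypothetical Node.js sketch of the kind of minimal, read-only HSDS-style endpoint I have in mind, using Express. The route and field names are illustrative only–a real implementation should follow the HSDS OpenAPI definition exactly.

```javascript
// Hypothetical sketch of a minimal, read-only HSDS-style endpoint in Node.js.
// Routes and fields are illustrative; follow the HSDS OpenAPI definition for real work.
const express = require('express');
const app = express();

// In a real deployment this would come from a database or other data store.
const organizations = [
  { id: '1', name: 'Example Community Services', description: 'Food and housing assistance.' }
];

app.get('/organizations', (req, res) => {
  res.json(organizations);
});

app.get('/organizations/:id', (req, res) => {
  const org = organizations.find((o) => o.id === req.params.id);
  if (!org) {
    return res.status(404).json({ error: 'Organization not found' });
  }
  res.json(org);
});

app.listen(process.env.PORT || 3000);
```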

Ideally, we focus on single API implementations, until there is a complete toolbox that helps providers of all shapes and sizes. Then I'd love to see administrative, web search, and other applications that can be implemented on top of any HSDS API. I can imagine the following elements:

  • API - Server-side implementations, or API implementations using specialized services available via any of the providers, like Lambda or Google Endpoints.
  • Validator - A JSON Schema, and any other suggested validator for the API definition, helping implementations validate their APIs.
  • Admin - Develop an administrative system for managing all of the data, content, and media that is stored as part of an HSDS API implementation.
  • Website - Develop a website or application that allows data, content, and media within an HSDS API implementation to be searched, browsed and engaged with by end-users.
  • Mobile App - Develop a mobile application that allows data, content, and media within an HSDS API implementation to be searched, browsed and engaged with by end-users via common mobile devices.
  • Developer Portal - Develop an API portal for managing and providing access to an HSDS API Implementation, allowing developers to sign up, and integrate with an API in their web, mobile, or another type of application.
  • Push Button Deployment - The ability to deploy any of the server side API implementations to the desired cloud platform of your choice with minimum configuration.

I'm looking to incentivize a buffet of simple API-driven implementations that can be easily deployed by cities, states, and other organizations that help deliver human services. They shouldn't be too complicated or try to do everything for everyone. Ideally, they are simple, easily deployed infrastructure that can provide a seed for organizations looking to get started with their API efforts.

Additionally, I am looking to understand the realities of running a single API design across multiple cloud platforms. It seems like a realistic vision, but I know it is one that will be much more difficult than my geek brain thinks it will be. Along the way, I'm hoping to learn a lot more about each cloud platform, as well as the nuances of keeping my API design simple, even if the underlying platform varies from provider to provider.


Open Source Drag And Drop API Lifecycle Design Tooling

I'm always on the hunt for new ways to define, design, deploy, and manage API infrastructure, and thought the AWS CloudFormation Designer provides a nice look at where things might be headed. AWS CloudFormation Designer (Designer) is a graphic tool for creating, viewing, and modifying AWS CloudFormation templates, which translates pretty nicely to managing your API infrastructure as well.

While the AWS CloudFormation Designer spans all AWS services, all the elements are there for managing the core stops along the API life cycle like definition, design, DNS, deployment, management, monitoring, and others. Each of the Amazon services is available with a listing of each element available for the service, complete with all the inputs and outputs as connectors on the icons. Since all the AWS services are APIs, it's basically a drag and drop interface for mapping out how you use these APIs to define, design, deploy and manage your API infrastructure.

Using the design tool you can create templates for governing the deployment and management of API infrastructure by your team, partners, and other customers. This approach to defining the API life cycle is the closest I've seen to what stimulated my API subway map work, which became the subject of my keynotes at APIStrat in Austin, TX. It allows API architects and providers to templatize their approaches to delivering API infrastructure, in a way that is plug and play, and evolvable using the underlying JSON or YAML templates--right alongside the OpenAPI templates, we are crafting for each individual API.

The AWS CloudFormation Designer is a drag and drop UI for the entire AWS API stack. It is something that could easily be applied to Google's API stack, Microsoft's, or any other stack you define--something that could easily be done using APIs.json, developing another layer of templating for which resource types are available in the designer, as well as the formation templates generated by the design tool itself. There should be an open source "API formation designer" available that could span cloud providers, allowing architects to define which resources are available in their toolbox--something anyone could fork and run in their own environment.

I like where AWS is headed with their CloudFormation Designer. It's another approach to providing full lifecycle tooling for use in the API space. It almost reminds me of Yahoo Pipes for the AWS Cloud, which triggers awkward feels for me. I'm hoping it is a glimpse of what's to come, and that someone steps up with an even more attractive drag and drop version that helps folks work with API-driven infrastructure no matter where it runs--maybe Google will get to work on something. They seem to be real big on supporting infrastructure that runs in any cloud environment. *wink wink*


Getting Feedback From Your API Community When Developing APIs

Establishing a feedback loop with your API community is one of the most valuable aspects of doing APIs, opening up your organization to ideas from outside your firewall. When you are designing new APIs or the next generation of your APIs, make sure you are tapping into the feedback loop you have already created within your community, by providing access to the alpha, beta, and prototype versions of your APIs.

The Oxford Dictionaries API is doing this with the latest additions to their stack of word related APIs, providing their community early access to two new API prototypes that are currently in development:

  • The Oxford English Dictionary (OED) is the definitive authority on the English language containing the meaning, history, and pronunciation of more than 280,000 entries – past and present – from across the English-speaking world. Its historical record of the English language is traced through more than 3.5 million quotations ranging from classic literature and specialist periodicals to film scripts and cookery books.
  • bab.la offers quick and easy translations and answers to everyday language questions. As part of the Oxford Dictionaries family, it provides practical support to people using a language that is not their mother tongue.

To get access to the new prototypes, all you have to do is fill out a short questionnaire, and they will consider giving you access to the prototype APIs. It is interesting to review the questions they ask developers, which help qualify users but also ask some things that could potentially impact the design of the API. The Oxford Dictionaries API team is smart to solicit external feedback from developers before getting too far down the road developing their APIs and making them available in any production environment.

I do not think all companies, organizations, and government agencies have it in their DNA to design APIs in this way. There are also some concerns when you are doing this in highly competitive environments, but there are also some competitive advantages in doing this regularly, and developing a strong R&D group within your API ecosystem--even if your competitors get a look at things. I'm going to be flagging API providers who approach API development in this way and start developing a list of best practices to consider when it comes to including your API community in the design and development process, and leveraging their feedback loop in this way.


REST, Linked Data, Hypermedia, GraphQL, and gRPC

I'm endlessly fascinated by APIs and enjoy studying their evolution. One of the challenges in helping evangelize APIs that I come across regularly is the many different views of what is or isn't an API amongst people who are API literate, as well as helping bring APIs into focus for API newcomers, because there are so many possibilities. Out of the two, I'd say that dealing with API dogma is by far a bigger challenge than explaining APIs to newbies--dogma can be very poisonous to productive conversations and end up working against everyone involved, in my opinion.

I'm enjoying reading about the evolution in the API space when it comes to GraphQL and gRPC. There are a number of very interesting implementations, services, and tooling emerging in both of these areas. However, I do see similar mistakes being made regarding dogmatic behavior, aggressive marketing tactics, and shaming folks for doing things differently, as I've seen with REST, hypermedia, and linked data efforts. I know folks are passionate about what they are doing, and truly believe their way is right, but I'm concerned you will all suffer from the same deficiencies in adoption I've seen with previous approaches.

I started API Evangelist with the mission of counteracting the aggressive approach of the RESTafarians. I've spent a great deal of time thinking about how I can turn average developers and even business folks on to the concept of APIs--not just REST or just hypermedia, but web APIs in general. Something that I now feel includes GraphQL and gRPC. I've seen many hardworking folks invest a lot into their APIs, only to have them torn apart by API techbros (TM) who think they've done it wrong--not giving a rat's ass about the need to actually help someone understand the pros and cons of each approach.

I'm confident that GraphQL will find its place in the API toolbox, and enjoy significant adoption when it comes to data-intensive API implementations. However, I'd say 75% of the posts I read are pitting GraphQL against REST, stating it is a better solution. Period. No mention of its limitations, or use cases where it might not be a good idea. Leaving us to find out about these only from the GraphQL haters--playing out the exact same production we've seen over the last five years with REST vs. hypermedia. Hypermedia is finding its place in some very useful API implementations like FoxyCart and AWS API Gateway (to name just a few), but its growth has definitely suffered from this type of storytelling, and I fear that GraphQL will face a similar fate.

This problem is not a technical challenge. It is a storytelling and communication challenge, bundled with some very narrow incentive models fueled by a male-dominated startup culture, where folks really, really like being right and making others feel bad for not being right. Stop it. You aren't helping your cause. Even if you do get all your techbros thinking like you do, your tactics will fail in the mainstream business world, and you will only give more ammo to your haters, and further confuse your would-be consumers, adopters, and practitioners. You will have a lot more success if you are empathetic towards your readers, and produce content that educates and empowers, rather than shames and tears down.

I'm writing this because I want my readers to understand the benefits of GraphQL, and I don't want gRPC evangelists to make the same mistake. It has taken waaaay too long for linked data efforts to recover, and before you say it isn't a thing, it has made a significant comeback in SEO circles because of Google's adoption of JSON-LD, and a handful of SEO evangelists spreading the gospel in a friendly and accessible way--not because of linked data people (they tend to be dicks, in my experience). As I've said before, we should be investing in a robust API toolbox, and we should be helping people understand the benefits of different approaches, and learn about the successful implementations. Please learn from others' mistakes in the sector, and help drive meaningful growth across all viable approaches to doing APIs--thanks.


Deploying Your APIs Exactly Where You Need Them

Building on earlier stories about how my API partners are making API deployment more modular and composable, and pushing forward my understanding of what is possible with API deployment, I'm looking into the details of what DreamFactory enables when it comes to API deployment. "DreamFactory is a free, Apache 2 open source project that runs on Linux, Windows, and Mac OS X. DreamFactory is scalable, stateless, and portable"--making it a pretty good candidate for running wherever you need it.

After spending time at Google and hearing about how they want to enable multi-cloud infrastructure deployment, I wanted to see how my API service provider partners are able to actually power these visions of running your APIs anywhere, in any infrastructure. Using DreamFactory you can deploy your APIs using Docker, Kubernetes, or directly from a Github repository, something I'm exploring as standard operating procedure for government agencies, like we see with 18F's US Forest Service ePermit Middlelayer API--in my opinion, all federal, state, and local government should be able to deploy API infrastructure like this.
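As a point of reference, the Docker path looks roughly like the minimal docker-compose sketch below--the image name, ports, and settings are assumptions on my part, so verify them against DreamFactory's current Docker documentation before putting anything to work:

    # docker-compose.yml -- illustrative only; confirm the image name and
    # environment variables against DreamFactory's Docker repository
    version: '2'
    services:
      dreamfactory:
        image: dreamfactorysoftware/df-docker   # assumed image name
        ports:
          - "8080:80"                           # admin console and APIs
        depends_on:
          - db
      db:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: changeme         # placeholder credentials
          MYSQL_DATABASE: dreamfactory

The same container definition is what you would hand to Kubernetes, or to any of the cloud providers' container services, which is what makes the run-it-anywhere story plausible.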

One of the projects I am working on this week is creating a base blueprint of what it will take to deploy a human services API for any city in Google or Azure. I have a demo working on AWS already, but I need a basic understanding of what it will take to do the same in any cloud environment. I'm not in the business of hosting and operating APIs for anyone, let alone for government agencies--this is why I have partners like DreamFactory, to whom I can route specific projects as they come in. Obviously, I am looking to support my partners, as they support me, but I'm also looking to help other companies, organizations, institutions, and government agencies better leverage the cloud providers they are already using.

I'll share more stories about how I'm deploying APIs to AWS, as well as Google and Azure, as I do the work over the next couple of weeks. I'm looking to develop a healthy toolbox of solutions for government agencies to use. This week's project is focused on the human services data specification, but next week I'm going to look at replicating the model to allow for other Schema.org vocabularies, providing simple blueprints for deploying other common APIs like products, link listings, and directories. My goal is to provide a robust toolbox of APIs that anyone can launch in AWS, Google, and Azure with the push of a button--eventually.


Opportunity For Push Button API Deployment With Google Cloud Launcher

I'm keeping an eye on the different approaches to deploying infrastructure coming out of AWS, Google, Microsoft, and other providers. In my version of the near future, we should be able to deploy any API we want, in any infrastructure we want, with a single push of a button. We are getting there, as I'm seeing more publish to Heroku buttons, AWS and Azure deployment packages, and I recently came across the Google Cloud Launcher, which I think will work well for deploying a variety of API-driven solutions--we just need more selection and a button!

All the parts and pieces for this type of push-button API deployment exist already, we just need someone to step up and provide a dead simple framework for defining and embedding the buttons, abstracting away the complexities of each cloud platform. I want to be able to take a single manifest for my open source or wholesale API on Github, and allow anyone to deploy it into Heroku, AWS, Google, Azure, or anywhere else they want. I want the technical, business, and legal complexities of deployment abstracted away for me, the API provider.
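To make the idea concrete, a manifest like the hypothetical one below is all I want to have to maintain--this format does not exist as far as I know, so every field name here is made up purely for illustration:

    # deploy.yaml -- hypothetical cross-cloud deployment manifest (illustrative only)
    name: example-products-api
    description: A simple products API, deployable anywhere
    source: https://github.com/example/products-api   # placeholder repository
    runtime: docker
    targets:                                           # platforms a button could deploy to
      - heroku
      - aws
      - google
      - azure
    environment:
      DATABASE_URL:
        description: Connection string for the backing database
        required: true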

API management has matured a lot over the last 10 years, and API design and definitions are currently flourishing. We need a lot more investment in helping people easily deploy APIs, wherever they need. I think this layer of interoperability is the responsibility of the emerging API service providers like Restlet, DreamFactory, or maybe even APIMATIC. I will keep tracking on what I'm seeing evolve out of the leading cloud platforms like AWS, Azure, and now Google with their Cloud Launcher. I will also keep pushing on my API service provider partners in the space to enable API deployment like this--I am guessing they will just need a little nudging to see the opportunity around providing API deployment in this seamless, cloud-agnostic way.


Deploy A Grape Doorkeeper Driven API To Heroku With A Click Of A Button

There have been many advances in the way that we deploy APIs in the last couple of years, but I still want more of an embeddable, push-button way to deploy generic or even more specialized APIs. This is something I've ranted about before, asking where the deploy to AWS and Google buttons are. I'm seeing more AWS solutions emerge, helping deploy from Github using AWS CodeDeploy, and the regular number of deploy to Heroku buttons, but not the real growth I'd like to see occur--making it a drum I will keep beating until I get what I want.

I was working on my OpenAPI toolbox, cataloging open source tools that put the OpenAPI specification to work, and came across a deploy with Heroku button for the Grape Doorkeeper, which helps you "create an awesome versioned API, secured with OAuth2 and automatically documented". This should be the default for all server-side API deployment frameworks, allowing push button deployment of any open source API framework to the cloud platform of your choosing.
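For anyone who hasn't wired one of these up, the button itself is just an image link in your README pointing at https://heroku.com/deploy, and the work happens in an app.json manifest at the root of the repository--the values below are illustrative placeholders, not taken from the Grape Doorkeeper project:

    {
      "name": "example-grape-api",
      "description": "A versioned API secured with OAuth2 (illustrative manifest)",
      "repository": "https://github.com/example/grape-api",
      "addons": ["heroku-postgresql"],
      "env": {
        "SECRET_KEY_BASE": {
          "description": "Secret used to sign sessions and tokens",
          "generator": "secret"
        }
      }
    }

Heroku reads this manifest when someone clicks the button, provisions the add-ons, prompts for or generates the environment variables, and deploys the code from the repository.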

If I have my way, it won't just be API frameworks that have deployment buttons. Specialized API designs, available in a variety of frameworks, will be available for deployment with a single click of a button. We should be able to deploy a product API, or a user API, to AWS, Heroku, Google, or Microsoft with a single click. There should be a wealth of open source templates for us to choose from on Github, with deploy buttons, and easy to follow wizards that help us set things up properly.

Smells like an opportunity to me. I'll have to think more about where the revenue would come from in such a model, but I'm sure it would be easy enough to upsell deployments to some premium features and services. I understand that both the areas of API design and API deployment are playing catch-up with API management at the moment, but someone needs to get to work on streamlining the API deployment button experience across all major cloud platforms and get to work on crafting some useful API server deployments that people can put to work instantly. #please #thankyou


The AWS Serverless API Portal

I was looking through the Github accounts for Amazon Web Services and came across their Serverless API Portal--a pretty functional example of a forkable developer portal for your API, running on a variety of AWS services. It's a pretty interesting implementation because, in addition to the tech of your API management, it also helps you with the business side of things.

The AWS Serverless Developer Portal "is a reference implementation for a developer portal application that allows users to register, discover, and subscribe to your API Products (API Gateway Usage Plans), manage their API Keys, and view their usage metrics for your APIs..[]..it also supports subscription/unsubscription through a SaaS product offering through the AWS Marketplace."--providing a pretty compelling API portal solution running on AWS.

There are a couple things I think are pretty noteworthy:

  • Application Backend (/lambdas/backend) - The application backend is a Lambda function built on the aws-serverless-express library (a minimal sketch of this pattern follows the list). The backend is responsible for login/registration, API subscription/unsubscription, usage metrics, and handling product subscription redirects from AWS Marketplace.
  • Marketplace SaaS Setup Instructions - You can sell your SaaS product through AWS Marketplace and have the developer portal manage the subscription/unsubscription workflows. API Gateway will automatically provide authorization and metering for your product, and subscribers will be automatically billed through AWS Marketplace.
  • AWS Marketplace SNS Listener Function (Optional) (/listener) - The listener Lambda function will be triggered when customers subscribe or unsubscribe to your product through the AWS Marketplace console. AWS Marketplace will generate a unique SNS Topic where events will be published for your product.
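The first bullet is the pattern worth internalizing if you want to reuse this approach for your own portal. A stripped-down version of that kind of handler looks something like the sketch below--the route and response are placeholders of my own, not the portal's actual code:

    // Minimal sketch of a Lambda backend in the style described above, built on
    // the aws-serverless-express library; the route and payload are illustrative.
    import * as awsServerlessExpress from 'aws-serverless-express';
    import * as express from 'express';

    const app = express();

    // Placeholder route standing in for the portal's registration and
    // subscription logic
    app.get('/subscriptions', (req, res) => {
      res.json({ subscriptions: [] });
    });

    // Wrap the Express app so API Gateway proxy events can be handed to it
    const server = awsServerlessExpress.createServer(app);

    export const handler = (event: any, context: any) =>
      awsServerlessExpress.proxy(server, event, context);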

This is the required infrastructure we'll need to get to what I've been talking about for some time with my wholesale API and virtual API stack stories. Amazon is providing you with the infrastructure you need to set up the storefront for your APIs, providing the management layer you will need, including monetization via their marketplace. This is a retail layer, but because your infrastructure is set up in this way, there is no reason you can't sell all or part of your setup to other wholesale customers, using the same AWS Marketplace.

I have had AWS Marketplace on my list of solutions to better understand for some time now, but the AWS Serverless Developer Portal really begins to connect the dots for me. If you can sell access to your API infrastructure using this model, you can also sell your API infrastructure to others using this model. I will have to set up some infrastructure using this approach to better flesh out how AWS infrastructure and open templates like this serverless developer portal can help facilitate a more versatile, virtualized, and wholesale API lifecycle.

There is a more detailed walkthrough of how to get going with the AWS Serverless Developer Portal, helping you think through the details. I am a big fan of these types of templates--forkable Github repositories, with a blueprint you can follow to achieve a specific API deployment, management, or any other lifecycle objective.


The New API Design And Deployment Solution Materia Is Pretty Slick

I was playing with a new API design and deployment solution called Materia this weekend, from some of my favorite developers out there, which bills itself as "a modern development environment to build advanced mobile and web applications"--I would add, "with an API heart".

Materia is slick. It is modern. While very simple, it is also very complete--allowing you to define your underlying data model or entities, design and deploy APIs, and then publish a single-page application (SPA) for use on the web or mobile devices. Even though I'm one of those back-to-the-land, hand-crafted API folks, I could see myself using Materia to quickly design and deploy APIs.

I say this in the most positive light imaginable, but Materia reminds me of a Microsoft Access for APIs. Partly it's the diagramming interface for the entities, but it is also the fact that it bridges the backend to the frontend, allowing you to not just design and deploy the database and APIs, but also the resulting user interface that will put them to work.

I know they are just getting going with developing Materia, but I can't help but share a couple of things I'd like to see, that would help it continue to be the modern API-driven application solution it is striving to be:

  • OpenAPI Specs or Blueprints - Allow users to import, export, and manage their APIs in the popular API definition format of their choice (see the sketch after this list).
  • Schema.org - Provide users with a wealth of existing entity models to choose from, so they do not reinvent the wheel.
  • Github - Allow for publishing projects to, and importing them from, Github, allowing for the sharing of server design and deployment patterns.
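On the first item, what I have in mind is letting users round-trip something like the minimal OpenAPI (Swagger 2.0) definition below--the paths and schema are placeholders, just to show the shape of what import and export would need to handle:

    # Illustrative OpenAPI (Swagger 2.0) definition; paths and schemas are placeholders
    swagger: '2.0'
    info:
      title: Example Products API
      version: '1.0.0'
    paths:
      /products:
        get:
          summary: List products
          responses:
            '200':
              description: An array of products
              schema:
                type: array
                items:
                  $ref: '#/definitions/Product'
    definitions:
      Product:
        type: object
        properties:
          id:
            type: string
          name:
            type: string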

There are a number of other things I'd like to see, but I'm sensitive to the fact that they are just getting started. These three areas would significantly widen the initial audience for Materia beyond the developer class, which is who an application solution like this should be targeting. Like I said, it has the potential to be the Microsoft Access of APIs for small businesses, which isn't quite the Microsoft Excel of APIs, but a close second. ;-)

Nice work, guys! It is another positive advancement in the world of API design, alongside Restlet launching their API design studio, and Apiary setting this modern era of API design into motion. I'll be tuning into Materia's evolution on Twitter, and playing more with the server and designer editions available on Github.


If you think there is a link I should have listed here, feel free to tweet it at me, or submit it as a Github issue. Even though I do this full time, I'm still a one-person show, and I miss quite a bit, and depend on my network to help me know what is going on.