{"API Deployment"}

Blog Posts on API Deployment

These are posts from the API Evangelist blog that are focused on API deployment, allowing for a filtered look at my analysis on the topic. I rely on these posts, along with the organizations, APIs, and tools I curate, to help paint a picture of what is going on.

Regional Availability When It Comes To API Access

I have been profiling the Microsoft Azure platform over the last couple of weeks, and I found their approach to talking about the regions they have available worth taking note of. I haven't actually assessed who has more regions--my gut says AWS does--but Azure's approach to communicating about them seems pretty advanced. By profiling these cloud services and their available APIs using OpenAPI, I am hoping to eventually develop a machine-readable approach to comparing which providers are available within which regions.

Google has a regions page, but it doesn't feel as forward leaning as AWS's and Azure's. It is interesting to watch how each of these providers is handling the availability of API services in a variety of regions across North and South America, Europe, Asia, Africa, and the Middle East. I've been watching how providers think about the availability of API resources in different geographic regions for a while, but after seeing Azure evolve in this area, it is something I'll keep a closer eye on moving forward.

Increasing the number of available regions is clearly a top concern for providers, and something smaller providers will be able to piggyback on as the top cloud platforms grow and expand their regional footprints. API providers and API service providers should be expanding the number of regions available, but everyone involved also needs to get more organized about how they communicate with customers about which regions are available. Region availability should be communicated at the highest level, like we see with the AWS, Google, and Azure deployment pages, but providers should also work to articulate which regions are available at the individual API level.

As data and algorithmic nationalism continue to grow, we are going to see more focus from providers when it comes to enabling their customers' deployment and operation of APIs in exactly the regions they need. I'm guessing that with the evolution of software-defined networking (SDN), we are going to see more control over the transport and routing of the data, content, and other resources we are making available via our regionally deployed APIs. Along with the other channels and building blocks I tune into, I will start working to define a schema for tracking regions, allowing me to index which APIs are available in specific regions using APIs.json.
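
To help me think this through, here is a rough sketch of how a region index might look in APIs.json--the x-regions property type is something I am making up for illustration, not part of the specification:

    {
      "name": "Example API Stack",
      "description": "Index of APIs, and the regions where each is available",
      "url": "https://example.com/apis.json",
      "specificationVersion": "0.14",
      "apis": [
        {
          "name": "Image Search API",
          "humanURL": "https://example.com/image-search",
          "baseURL": "https://api.example.com/images",
          "properties": [
            {
              "type": "x-regions",
              "url": "https://api.example.com/regions.json"
            }
          ]
        }
      ]
    }

Each API indexed this way could point at a machine-readable list of the regions it is deployed in, which is all I'd need to begin aggregating region availability across many providers.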


Your Wholesale API For Sale In The Major API Marketplaces

I have been talking about selling wholesale APIs for some time now, allowing your potential customers to pick and choose exactly the API infrastructure they need, and develop their own virtualized API stacks. I'm not talking about publishing your retail API into marketplaces like Mashape, I'm talking about making your API deployable and manageable on all the major cloud providers.

You can see this shift in a recent AWS email I got telling me about multi-year contracts for SaaS and APIs. Right now there are 70 SaaS products on AWS Marketplace, but from the email I can tell that Amazon is really trying to expand its API offerings as well. When you deploy an API solution using the AWS Marketplace, and a customer signs up for a one, two, or three year contract, they don't pay for the underlying AWS infrastructure, just for the SaaS or API solution. I will have to explore more to see whether this cost is just absorbed by the API operator, or whether AWS is working to incentivize this type of wholesale API deployment in their marketplace, locking in providers and consumers.

I'm still learning about how Amazon is shifting the landscape for deploying and managing APIs in this wholesale, almost API broker type of way. I recently came across the AWS Serverless API Portal, which is meant to augment the delivery of SaaS or API solutions in this way. With this model you could be in the business of deploying API developer portals for companies, and filling the catalog with a variety of wholesale API resources, from a variety of providers--opening up a pretty interesting opportunity for white label APIs, and API brokers.

As I'm studying this new approach to deploying and managing APIs using marketplaces like this, I'm also noticing a shift towards delivering more algorithmic APIs, with machine learning, artificial intelligence, and other voodoo as the engine--resulting in a shift towards machine learning API marketplaces. I really need to carve off time to think about API deployment and management in this way. I've already begun looking at what it takes to deploy bare bones, wholesale APIs using the AWS, Google, Heroku, or Azure clouds, but I really haven't invested much in the business side of all of this, somewhere Amazon seems to be slightly ahead of the curve.


Human Service APIs On AWS, Azure, Google, and Heroku

I have several volunteers available to do work on Open Referral's Human Services Data Specification (API). I have three developers who are ready to work on some projects, as well as an ongoing stream of potential developers I would like to keep busy working on a variety of implementations. I am focusing attention on the top four cloud platforms that companies are using today: AWS, Azure, Google, and Heroku. 

I am looking to develop a rolling wave of projects that will run on any cloud platform, as well as take advantage of the unique features each provider offers. I've set up Github projects for managing the brainstorming and development of solutions for each of the four cloud platforms:

  • AWS - A project site outlining the services, tooling, projects, and communication around HSDS AWS development.
  • Azure - A project site outlining the services, tooling, projects, and communication around HSDS Azure development.
  • Google - A project site outlining the services, tooling, projects, and communication around HSDS Google development.
  • Heroku - A project site outlining the services, tooling, projects, and communication around HSDS Heroku development.

I want to incentivize the development of APIs that follow v1.1 of the HSDS OpenAPI. I'm encouraging PHP, Python, Ruby, and Node.js implementations, but am open to other suggestions. I would like to have very simple API implementations in each language, running on all four of the cloud platforms, with push button (or at least easy) installation from Github for each implementation.
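
To give a feel for what I'm after, here is a bare bones sketch of how one of these implementations might describe itself using OpenAPI (Swagger) 2.0--the paths shown are just a sampling of HSDS resources, not the full v1.1 surface area:

    {
      "swagger": "2.0",
      "info": {
        "title": "Human Services API",
        "version": "v1.1"
      },
      "basePath": "/api",
      "paths": {
        "/organizations": {
          "get": {
            "summary": "List the organizations delivering human services",
            "responses": { "200": { "description": "A list of organizations" } }
          }
        },
        "/locations": {
          "get": {
            "summary": "List the locations where services are delivered",
            "responses": { "200": { "description": "A list of locations" } }
          }
        }
      }
    }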

Ideally, we focus on single API implementations, until there is a complete toolbox that helps providers of all shapes and sizes. Then I'd love to see administrative, web search, and other applications that can be implemented on top of any HSDS API. I can imagine the following elements:

  • API - Server-side API implementations, including ones that use the specialized services available from each provider, like AWS Lambda or Google Cloud Endpoints.
  • Validator - A JSON Schema, and any other suggested validator for the API definition, helping implementations validate their APIs (see the sketch after this list).
  • Admin - Develop an administrative system for managing all of the data, content, and media that is stored as part of an HSDS API implementation.
  • Website - Develop a website or application that allows data, content, and media within an HSDS API implementation to be searched, browsed and engaged with by end-users.
  • Mobile App - Develop a mobile application that allows data, content, and media within an HSDS API implementation to be searched, browsed and engaged with by end-users via common mobile devices.
  • Developer Portal - Develop an API portal for managing and providing access to an HSDS API Implementation, allowing developers to sign up, and integrate with an API in their web, mobile, or another type of application.
  • Push Button Deployment - The ability to deploy any of the server side API implementations to the desired cloud platform of your choice with minimum configuration.
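
For the validator piece, I'm imagining something as simple as a JSON Schema for each core HSDS resource that implementations can test their API responses against--a rough sketch for an organization record might look like this (the field selection here is illustrative, not the complete HSDS definition):

    {
      "$schema": "http://json-schema.org/draft-04/schema#",
      "title": "organization",
      "type": "object",
      "properties": {
        "id": { "type": "string" },
        "name": { "type": "string" },
        "description": { "type": "string" },
        "url": { "type": "string" }
      },
      "required": ["id", "name"]
    }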

I'm looking to incentivize a buffet of simple API-driven implementations that can be easily deployed by cities, states, and other organizations that help deliver human services. They shouldn't be too complicated, or try to do everything for everyone. Ideally, they are simple, easily deployed infrastructure that can provide a seed for organizations looking to get started with their API efforts.

Additionally, I am looking to understand the realities of running a single API design across multiple cloud platforms. It seems like a realistic vision, but I know it will be much more difficult than my geek brain thinks it will be. Along the way, I'm hoping to learn a lot more about each cloud platform, as well as the nuance of keeping my API design simple, even if the underlying platform varies from provider to provider.


Open Source Drag And Drop API Lifecycle Design Tooling

I'm always on the hunt for new ways to define, design, deploy, and manage API infrastructure, and I thought the AWS CloudFormation Designer provides a nice look at where things might be headed. AWS CloudFormation Designer (Designer) is a graphic tool for creating, viewing, and modifying AWS CloudFormation templates, which translates pretty nicely to managing your API infrastructure as well.

While the AWS CloudFormation Designer spans all AWS services, all the elements are there for managing the core stops along the API life cycle, like definition, design, DNS, deployment, management, and monitoring. Each of the Amazon services is available with a listing of the elements available for that service, complete with all the inputs and outputs as connectors on the icons. Since all the AWS services are APIs, it's basically a drag and drop interface for mapping out how you use these APIs to define, design, deploy, and manage your API infrastructure.

Using the design tool you can create templates for governing the deployment and management of API infrastructure by your team, partners, and other customers. This approach to defining the API life cycle is the closest I've seen to what stimulated my API subway map work, which became the subject of my keynotes at APIStrat in Austin, TX. It allows API architects and providers to templatize their approaches to delivering API infrastructure, in a way that is plug and play, and evolvable using the underlying JSON or YAML templates--right alongside the OpenAPI templates we are crafting for each individual API.
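
If you haven't looked under the hood, the templates behind the designer are just declarative JSON (or YAML) describing the resources you want to exist--a stripped down template for standing up an API Gateway instance might look something like this (the resource and API names are mine):

    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Description": "A bare bones API infrastructure template",
      "Resources": {
        "ExampleRestApi": {
          "Type": "AWS::ApiGateway::RestApi",
          "Properties": {
            "Name": "example-api",
            "Description": "An API deployed and governed via a CloudFormation template"
          }
        }
      }
    }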

The AWS CloudFormation Designer is a drag and drop UI for the entire AWS API stack. It is something that could easily be applied to Google's API stack, Microsoft's, or any other stack you define--something that could easily be done using APIs.json, adding another layer of templating for which resource types are available in the designer, as well as for the formation templates generated by the design tool itself. There should be an open source "API formation designer" available that could span cloud providers, allowing architects to define which resources are available in their toolbox--something anyone could fork and run in their environment.

I like where AWS is headed with their Cloud Formation Designer. It's another approach to providing full lifecycle tooling for use in the API space. It almost reminds me of Yahoo Pipes for the AWS Cloud, which triggers awkward feels for me. I'm hoping it is a glimpse of what's to come, and someone steps up with an even more attractive drag and drop version, that helps folks work with API-driven infrastructure no matter where it runs--maybe Google will get to work on something. They seem to be real big on supporting infrastructure that runs in any cloud environment. *wink wink*


Getting Feedback From Your API Community When Developing APIs

Establishing a feedback loop with your API community is one of the most valuable aspects of doing APIs, opening up your organization to ideas from outside your firewall. When you are designing new APIs or the next generation of your APIs, make sure you are tapping into the feedback loop you have already created within your community, by providing access to the alpha, beta, and prototype versions of your APIs.

The Oxford Dictionaries API is doing this with the latest additions to their stack of word related APIs, by giving their community early access to two new API prototypes that are currently in development:

  • The Oxford English Dictionary (OED) is the definitive authority on the English language containing the meaning, history, and pronunciation of more than 280,000 entries – past and present – from across the English-speaking world. Its historical record of the English language is traced through more than 3.5 million quotations ranging from classic literature and specialist periodicals to film scripts and cookery books.
  • bab.la offers quick and easy translations and answers to everyday language questions. As part of the Oxford Dictionaries family, it provides practical support to people using a language that is not their mother tongue.

To get access to the new prototypes, all you have to do is fill out a short questionnaire, and they will consider giving you access to the prototype APIs. It is interesting to review the questions they ask developers, which help qualify users, but also ask some things that could potentially impact the design of the APIs. The Oxford Dictionaries API team is smart to solicit external feedback from developers before getting too far down the road with development and making anything available in a production environment.

I do not think all companies, organizations, and government agencies have it in their DNA to design APIs in this way. There are some concerns when you are doing this in highly competitive environments, but there are also some competitive advantages in doing it regularly, and developing a strong R&D group within your API ecosystem--even if your competitors get a look at things. I'm going to start flagging API providers who approach API development in this way, and develop a list of best practices to consider when it comes to including your API community in the design and development process, and leveraging their feedback along the way.


REST, Linked Data, Hypermedia, GraphQL, and gRPC

I'm endlessly fascinated by APIs and enjoy studying their evolution. One of the challenges I come across regularly in helping evangelize APIs is the many different views of what is or isn't an API amongst people who are API literate, alongside the challenge of bringing APIs into focus for newcomers, because there are so many possibilities. Out of the two, I'd say that dealing with API dogma is by far the bigger challenge--dogma can be very poisonous to productive conversations, and ends up working against everyone involved, in my opinion.

I'm enjoying reading about the evolution in the API space when it comes to GraphQL and gRPC. There are a number of very interesting implementations, services, and tooling emerging in both of these areas. However, I see similar mistakes being made regarding dogmatic behavior, aggressive marketing tactics, and shaming folks for doing things differently, as I've seen with REST, hypermedia, and linked data efforts. I know folks are passionate about what they are doing, and truly believe their way is right, but I'm concerned you will all suffer from the same deficiencies in adoption I've seen with previous approaches.

I started API Evangelist with the mission of counteracting the aggressive approach of the RESTafarians. I've spent a great deal of time thinking about how I can turn average developers, and even business folks, on to the concept of APIs--not just REST or just hypermedia, but web APIs in general, something that I now feel includes GraphQL and gRPC. I've seen many hardworking folks invest a lot into their APIs, only to have them torn apart by API techbros (TM) who think they've done it wrong--not giving a rat's ass about the need to actually help someone understand the pros and cons of each approach.

I'm confident that GraphQL will find its place in the API toolbox, and enjoy significant adoption when it comes to data-intensive API implementations. However, I'd say 75% of the posts I read are pitting GraphQL against REST, stating it is a better solution. Period. No mention of its limitations or use cases where it might not be a good idea. Leaving us to only find out about these from the GraphQL haters--playing out the exact same production we've seen over the last five years with REST v Hypermedia. Hypermedia is finding its place in some very useful API implementations like FoxyCart, and AWS API Gateway (to name just a few), but its growth has definitely suffered from this type of storytelling, and I fear that GraphQL will face a similar fate. 

This problem is not a technical challenge. It is a storytelling and communication challenge, bundled with some very narrow incentive models fueled by a male-dominated startup culture, where folks really, really like being right and making others feel bad for not being right. Stop it. You aren't helping your cause. Even if you do get all your techbros thinking like you, your tactics will fail in the mainstream business world, and you will only give more ammo to your haters, and further confuse your would-be consumers, adopters, and practitioners. You will have a lot more success if you are empathetic towards your readers, and produce content that educates and empowers, rather than shames and tears down.

I'm writing this because I want my readers to understand the benefits of GraphQL, and I don't want gRPC evangelists to make the same mistakes. It has taken waaaay too long for linked data efforts to recover, and before you say linked data isn't a thing, it has made a significant comeback in SEO circles because of Google's adoption of JSON-LD, and a handful of SEO evangelists spreading the gospel in a friendly and accessible way--not because of linked data people (they tend to be dicks, in my experience). As I've said before, we should be investing in a robust API toolbox, helping people understand the benefits of different approaches, and learning about the successful implementations. Please learn from others' mistakes in the sector, and help us see meaningful growth across all viable approaches to doing APIs--thanks.
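
If you haven't seen what made linked data palatable to the SEO crowd, it is worth a look--a schema.org snippet embedded in a web page using JSON-LD is about as friendly and accessible as linked data gets:

    {
      "@context": "http://schema.org",
      "@type": "Organization",
      "name": "Example Company",
      "url": "https://example.com",
      "logo": "https://example.com/logo.png"
    }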


Deploying Your APIs Exactly Where You Need Them

Building on earlier stories about how my API partners are making API deployment more modular and composable, and pushing forward my understanding of what is possible with API deployment, I'm looking into the details of what DreamFactory enables when it comes to API deployment. "DreamFactory is a free, Apache 2 open source project that runs on Linux, Windows, and Mac OS X. DreamFactory is scalable, stateless, and portable"--making it a pretty good candidate for running wherever you need it.

After spending time at Google, and hearing about how they want to enable multi-cloud infrastructure deployment, I wanted to see how my API service provider partners are able to actually power these visions of running your APIs anywhere, on any infrastructure. Using DreamFactory you can deploy your APIs using Docker, Kubernetes, or directly from a Github repository, something I'm exploring as standard operating procedure for government agencies, like we see with 18F's US Forest Service ePermit Middlelayer API--in my opinion, all federal, state, and local governments should be able to deploy API infrastructure like this.

One of the projects I am working on this week is creating a base blueprint of what it will take to deploy a human services API for any city in Google or Azure. I have a demo working on AWS already, but I need a basic understanding of what it will take to do the same in any cloud environment. I'm not in the business of hosting and operating APIs for anyone, let alone for government agencies--this is why I have partners like DreamFactory, to whom I can route specific projects as they come in. Obviously, I am looking to support my partners, as they support me, but I'm also looking to help other companies, organizations, institutions, and government agencies better leverage the cloud providers they are already using.

I'll share more stories about how I'm deploying APIs to AWS, as well as Google and Azure, as I do the work over the next couple of weeks. I'm looking to develop a healthy toolbox of solutions for government agencies to use. This week's project is focused on the human services data specification, but next week I'm going to look at replicating the model for other Schema.org vocabulary, providing simple blueprints for deploying other common APIs like products, directories, and link listings. My goal is to provide a robust toolbox of APIs that anyone can launch in AWS, Google, and Azure, with the push of a button--eventually.


Opportunity For Push Button API Deployment With Google Cloud Launcher

I'm keeping an eye on the different approaches to deploying infrastructure coming out of AWS, Google, Microsoft and other providers. In my version of the near future, we should be able to deploy any API we want, in any infrastructure we want with a single push of a button. We are getting there, as I'm seeing more publish to Heroku buttons, AWS and Azure deployment packages, and I recently came across the Google Cloud Launcher, which I think will work well for deploying a variety of API driven solutions--we just need more selection and a button!

All the parts and pieces for this type of push button API deployment exist already; we just need someone to step up and provide a dead simple framework for defining and embedding the buttons, abstracting away the complexities of each cloud platform. I want to be able to take a single manifest for my open source or wholesale API on Github, and allow anyone to deploy it into Heroku, AWS, Google, Azure, or anywhere else they want. I want the technical, business, and legal complexities of deployment abstracted away for me, the API provider.
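
Heroku's button shows how simple the manifest piece can be--an app.json file in the root of a Github repository tells the platform everything it needs to spin up an API (this one is a hypothetical example):

    {
      "name": "Simple Products API",
      "description": "A deployable products API, ready to run on Heroku",
      "repository": "https://github.com/example/products-api",
      "keywords": ["api", "products", "deploy"],
      "addons": ["heroku-postgresql"],
      "env": {
        "API_KEY": {
          "description": "The key used to secure administrative endpoints"
        }
      }
    }

An equivalent, provider-agnostic manifest that AWS, Google, and Azure would all understand is the missing piece I am talking about.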

API management has matured a lot over the last 10 years, and API design and definitions are currently flourishing. We need a lot more investment in helping people easily deploy APIs, wherever they need. I think this layer of interoperability is the responsibility of the emerging API service providers like Restlet, DreamFactory, or maybe even APIMATIC. I will keep tracking on what I'm seeing evolve out of the leading cloud platforms like AWS, Azure, and now Google with their Cloud Launcher. I will also keep pushing on my API service provider partners in the space to enable API deployment like this--I am guessing they will just need a little nudging to see the opportunity around providing API deployment in this seamless, cloud-agnostic way.


Deploy A Grape Doorkeeper Driven API To Heroku With A Click Of A Button

There have been many advances in the way we deploy APIs in the last couple of years, but I still want more of an embeddable, push button way to deploy generic, or even more specialized APIs. This is something I've ranted about before, asking where the deploy to AWS and Google buttons are. I'm seeing more AWS solutions emerge, helping deploy from Github using AWS CodeDeploy, and the regular number of deploy to Heroku buttons, but not the real growth I'd like to see occur--making it a drum I will keep beating until I get what I want.

I was working on my OpenAPI toolbox, cataloging open source tools that put the OpenAPI specification to work, and came across a deploy with Heroku button for the Grape Doorkeeper, which helps you "create an awesome versioned API, secured with OAuth2 and automatically documented". This should be the default for all server-side API deployment frameworks, allowing push button deployment of any open source API framework to the cloud platform of your choosing.

If I have my way, it won't just be API frameworks that will have deployment buttons. Specialized API designs, available in a variety of frameworks will be available for deployment with a single click of a button. We should be able to deploy a product API, or a user API, to AWS, Heroku, Google, or Microsoft, with a single click. There should be a wealth of open source templates for us to choose from on Github, with deploy buttons, and easy to follow wizards that help us set things up properly.

Smells like an opportunity to me. I'll have to think more about where the revenue would come from in such a model, but I'm sure it would be easy enough to upsell deployments to some premium features and services. I understand that both the areas of API design and API deployment are playing catch-up with API management at the moment, but someone needs to get to work on streamlining the API deployment button experience across all major cloud platforms and get to work on crafting some useful API server deployments that people can put to work instantly. #please #thankyou


The AWS Serverless API Portal

I was looking through the Github accounts for Amazon Web Services and came across their Serverless API Portal--a pretty functional example of a forkable developer portal for your API, running on a variety of AWS services. It's a pretty interesting implementation because, in addition to the tech of API management, it also helps you with the business side of things.

The AWS Serverless Developer Portal "is a reference implementation for a developer portal application that allows users to register, discover, and subscribe to your API Products (API Gateway Usage Plans), manage their API Keys, and view their usage metrics for your APIs [...] it also supports subscription/unsubscription through a SaaS product offering through the AWS Marketplace"--providing a pretty compelling API portal solution running on AWS.
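
The usage plans at the center of this are worth understanding--an API Gateway usage plan is just a simple structure defining the throttling and quota each subscribed API key gets, along the lines of this sketch (the names and numbers are mine):

    {
      "name": "Basic",
      "description": "Entry level plan for an API product",
      "throttle": { "rateLimit": 10, "burstLimit": 20 },
      "quota": { "limit": 100000, "period": "MONTH" },
      "apiStages": [ { "apiId": "a1b2c3d4e5", "stage": "prod" } ]
    }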

There are a couple things I think are pretty noteworthy:

  • Application Backend (/lambdas/backend) - The application backend is a Lambda function built on the aws-serverless-express library. The backend is responsible for login/registration, API subscription/unsubscription, usage metrics, and handling product subscription redirects from AWS Marketplace.
  • Marketplace SaaS Setup Instructions - You can sell your SaaS product through AWS Marketplace and have the developer portal manage the subscription/unsubscription workflows. API Gateway will automatically provide authorization and metering for your product, and subscribers will be automatically billed through AWS Marketplace.
  • AWS Marketplace SNS Listener Function (Optional) (/listener) - The listener Lambda function will be triggered when customers subscribe or unsubscribe to your product through the AWS Marketplace console. AWS Marketplace will generate a unique SNS Topic where events will be published for your product.

This is the infrastructure we'll need to get to what I've been talking about for some time with my wholesale API and virtual API stack stories. Amazon is providing you with the infrastructure you need to set up the storefront for your APIs, providing the management layer you will need, including monetization via their marketplace. This is a retail layer, but because your infrastructure is set up in this way, there is no reason you can't sell all or part of your setup to other wholesale customers, using the same AWS marketplace.

I have had AWS Marketplace on my list of solutions to better understand for some time now, but the AWS Serverless Developer Portal really begins to connect the dots for me. If you can sell access to your API infrastructure using this model, you can also sell your API infrastructure to others using this model. I will have to set up some infrastructure using this approach to better flesh out how AWS infrastructure, and open templates like this serverless developer portal, can help facilitate a more versatile, virtualized, and wholesale API lifecycle.

There is a more detailed walkthrough of how to get going with the AWS Serverless Developer Portal, helping you think through the details. I am a big fan of these types of templates--forkable Github repositories, with a blueprint you can follow to achieve a specific API deployment, management, or any other lifecycle objective.


I've been thinking about the concept of a wholesale API for some time, going beyond how we technically deploy our APIs, and focusing more on how we can provide a wholesale version of the same API resources, with accompanying terms of service that go beyond just a retail level of API access in the cloud. Not all APIs fit into this category, but with the containerization of everything, and the evolving world of the Internet of Things (IoT), there are many new ways in which API resources are being deployed.

You can see this evolution in one of the latest API deployment platforms I added to my API deployment research, Nanoscale.io. Their platform diagram only captures a portion of what they do, but its separation of deployment concerns articulates the technical side of what I'm talking about; we just need to add in considerations for the business and political sides of how this works.

We've seen API deployment move from on-premise to the cloud and back again, and now we are seeing it move onto everyday objects like cameras, printers, and routers. I'm watching service providers like Nanoscale.io emerge to help us deploy our APIs exactly where we need them. I'm guessing that the companies who have their business models in similar order, allowing API service composition to slide further down the stack from the management layer to the deployment layer, will come out ahead.


The New API Design And Deployment Solution Materia Is Pretty Slick

This weekend I was playing with Materia, a new API design and deployment solution from some of my favorite developers out there, which bills itself as "a modern development environment to build advanced mobile and web applications"--I would add, "with an API heart".

Materia is slick. It is modern. While very simple, it is also very complete--allowing you to define your underlying data model or entities, design and deploy APIs, and then publish a single page application (SPA) for use on the web, or on mobile devices. Even though I'm one of those back-to-the-land, hand-crafted API folks, I could see myself using Materia to quickly design and deploy APIs.

I say this in the most positive light imaginable, but Materia reminds me of a Microsoft Access for APIs. Partly it's the diagramming interface for the entities, but it is also the fact that it bridges the backend to the frontend, allowing you to not just design and deploy the database and APIs, but also the resulting user interface that will put them to work.

I know they are just getting going with developing Materia, but I can't help but share a couple of things I'd like to see, things that would help it continue to be the modern API driven application it is striving to be:

  • OpenAPI Specs or Blueprints - Allow users to import, export, and manage their APIs in the popular API definition format of their choice.
  • Schema.org - Provide users with a wealth of existing entity models to choose from, so they do not reinvent the wheel.
  • Github - Allow for the publishing of projects, and importing of them to and from Github, allowing for the sharing of server design and deployment patterns.

There are a number of other things I'd like to see, but I'm sensitive to the fact that they are just getting started. These three areas would significantly widen the initial audience for Materia beyond the developer class, toward the wider audience this type of solution should be targeting. Like I said, it has the potential to be the Microsoft Access of APIs for small businesses, which isn't quite the Microsoft Excel of APIs, but a close second. ;-)

Nice work guys! It is another positive advancement in the world of API design, alongside Restlet launching their API design studio, and Apiary setting this modern era of API design into motion. I'll be tuning into Materia's evolution on Twitter, and playing more with the server and designer editions available on Github.


In The Future APIs Will Be Default For All Cities

In 2014 we are making significant progress in deploying APIs in support of city operations, but we still have so much more work ahead of us when it comes to making public resources available. You can find a dedicated developer area full of data sets and APIs in most major US cities like New York, Chicago, San Francisco, Seattle, Philadelphia, Washington D.C., and many more, but what else can we do to really pick up the momentum and quality?

Standardizing API Design Practices
APIs are not that difficult to design with the right education and experience. Developers who work on city contracts, or are employed by the city, should all be taught common web API design practices, and be exposed to modern API design tooling like Swagger and Apiary. Even with this type of education, there will still be many differences between city deployments based upon needs and tactics, but a little training could go a long way toward making city operations more streamlined.

Open Solutions For API Deployment
There are a lot of common approaches to delivering city services, which means there should also be a number of standardized, open solutions for deploying APIs that support city operations. There should be a wealth of open source, Wordpress-like solutions for deploying APIs in support of government operations. Sometimes connecting to legacy systems is just too much work, and deploying a simple, standalone solution, then syncing using data dumps, or directly with backend systems, might be more fruitful.

Common API Management Vision
I'm pretty impressed with the standard approach to deploying city developer areas, and delivering data sets and APIs, but in reality this is the result of the hard work of Socrata, one of the API management providers dedicated to the government space. I think Socrata and the other vendors out there are definitely one piece of the puzzle for managing APIs for city operations, but I also think we need other competing, open solutions, similar to the API Umbrella platform being used across the federal government.

Open Source Tooling Across Cities
When it comes to helping cities better serve their citizens, and save money along the way, I can't think of a better place to start than by providing common, open source tools for delivering web and mobile applications on top of city data and APIs. We have to stop re-inventing the wheel for each city when it comes to developing common apps--city needs are going to be very similar from place to place. Just take a look at solutions like Open311, and let's get to work on delivering similar solutions for every part of city operations.

There is no reason each city should have to go at it alone when it comes to designing, deploying, managing, evangelizing, and putting APIs to work across city operations. We should have standard data models, API definitions, and a wealth of open source tools for cities to put to work.

I don't see APIs as the solution for all of our cities' problems, but I do think that APIs should be common practice for ALL cities. Every city should be publishing all of their data and content in a machine-readable way, without causing employees any extra work--it should just be part of normal operations.

In the future, all cities will have standard APIs, and common open source solutions that can be put to work serving citizens in all aspects of city operations. This is how we are going to empower our cities to do more with less, and make governing more inclusive for everyone.


Real-time and Visualizations Will Be Key in Financial API Deployments

I have been doing a lot of research into the world of financial APIs, specifically looking at some of the larger companies providing APIs that deliver market news, data, corporate profiles, and other data that make markets go round.

As I consider the building blocks that are common across many financial APIs, real-time data frameworks and visualization tools are two of the top items that I think will be part of every financial API stack in the future. Almost every API I looked at had some sort of real-time stream, promising faster data, as well as a way to extract meaning from these streams using template, or custom, visualizations.

I'm tracking on real-time API services and tools, and I've been seeing some of these frameworks, like Firebase, getting baked in by default to some API platforms. I am also tracking on visualization tools, I just don't have the research published as a Github repository yet, like I do with my real-time research.

I will keep tracking on API providers who are doing interesting things with real-time or visualizations, and hopefully be able to publish more examples. I can’t help but think there are some pretty interesting opportunities for open frameworks, and white label solutions for API providers when it comes to real-time, and visualization layers on top of their existing APIs.


Push Button API Deployment With The Heroku Button

The new Heroku Button gets us one step closer to a new age of API deployment, where anyone can deploy the APIs they need without any developer or IT resources. As I'm working on packaging up API designs for my screen capture and image manipulation APIs, this type of approach is what I'm envisioning for all of my APIs in the future--push button API deployment.

You shouldn't have to wait to deploy the API you need. Just as we are beginning to deploy pre-packaged application stacks like Wordpress and Drupal, we should be able to deploy common API designs for images, blogs, videos, and much, much more, with a single click of a button. Once a new API is launched, it can be configured, and connected to other systems using the API, allowing it to operate as part of a larger stack, or stay a completely independent node that just checks in with the mothership from time to time.

While there will remain a handful of API leaders like Twilio and SendGrid who will have a big presence, many of the APIs in this next wave of API deployment will be smaller, and more transient in nature, taking advantage of current cloud trends around PaaS and containerization. This new type of API will possess a self-contained blueprint for the OS, database, server-side code, API definition, and even the configuration, integration, and automation tooling for the API.

I'm working on Docker definitions for my screen capture and image manipulation APIs, as well as other APIs I develop in the future, but first I think any API I design should run as a Heroku app that anyone can deploy in their own account with a single click of a button. It won't take much work to make this happen for any API I deploy, and since the 3Scale API infrastructure I use to secure my APIs already runs on Heroku, securing and managing my Heroku deployed APIs should be seamless.

Disclosure: 3Scale is an API Evangelist partner.


What I Have Been Calling API Trends, Are Slowly Being Baked Into API Operations

In my monitoring of the API space, when I see a large number of blog posts, tweets, companies, and other elements I track on getting tagged with the same tag over and over, I take notice. My blogging, CRM, and news curation systems all have their own tag cloud interface for the week, showing which tags have been applied--so if a tag gets heavy usage, I know it.

Over the last couple of years, I've spun up new research into other areas within the world of APIs, beyond my core design, deployment, management, evangelism, discovery, and integration research. I created separate buckets beyond just provide and consume to track on these new areas, called trends, opportunities, and priorities.

In 2014 it is beginning to seem like each of my trend research areas is getting baked directly into API platforms, ranging from real-time features with Firebase, to reciprocity by default using Zapier. API providers are learning that having a real-time layer, or a reciprocity layer, baked into their platform is a good thing, and why reinvent the wheel when you have kick ass solutions like Firebase and Zapier?

It makes sense that API providers would be looking externally to deliver aggregation, real-time, reciprocity, and even voice layers for their API platforms--this stuff is hard, and why spread yourself too thin? Intuit just bought reciprocity provider itDuzzit, and I think we will see more providers integrating Zapier into their platforms by default, like Nimble did. We'll also see more API platforms bring in Firebase as a real-time layer, like Nest did for their Internet of Things (IoT) thermostat API platform.

Overall it seems like white label solutions that any API provider could put to use for aggregation, real-time, reciprocity, voice, or even data needs like spreadsheet connectors, analysis, and visualization, would do well in the space. At the very least, any company looking to step up and provide solutions in these areas should definitely have a strong partner program, like Zapier and Firebase have brought to the table.

I will have to start considering how to migrate aggregation, real-time, reciprocity, and voice out of the trends bucket, and into either the provide or consume buckets, or maybe both. It would seem that both API providers and consumers need to be educated in these areas, and made aware of what solutions are available.

I'm not that worried about the overall structure of API Evangelist at the moment. One of the beautiful aspects of how I architected the site(s) is that each research area lives as its own node on the network, so I can move things around, and shift as I need to find the right formula--something that helps me in a very fast moving space, where my understanding is constantly shifting and evolving with the swift currents of the API space.

Photo Credit: Diego Naive


Adding Google To List Of API Deployment Companies

I was taking another look at the Google Cloud Platform yesterday, and stumbled across Google Cloud Endpoints. It was something I had seen come across my feeds, but really didn't give the time it needed to see what it was all about. With Google Cloud Endpoints, Google is making a strong push to be not just an API deployment provider--their approach also reflects what I'd consider to be an evolution of backend as a service (BaaS) deployment.

I think Google describes their service better than I can do it justice:

Google Cloud Endpoints consists of tools, libraries and capabilities that allow you to generate APIs and client libraries from an App Engine application, referred to as an API backend, to simplify client access to data from other applications. Endpoints makes it easier to create a web backend for web clients and mobile clients such as Android or Apple's iOS.

While much of the Google Cloud Platform offering looks a lot like the cloud offering over at Amazon Web Services, AWS definitely does not have API deployment as a service baked into their cloud stack like Google does with Google Cloud Endpoints.

Google puts an emphasis on API endpoint deployment for mobile purposes, but leaves it open to be used in JavaScript as well--which seems a little limiting, since you could call the same endpoints from any language. Oh well, I'm not writing their marketing.

...the API backend is an App Engine app that performs business logic and other functions for Android and iOS clients, as well as JavaScript web clients. The functionality of the backend is made available to clients through Endpoints, which exposes an API that clients can call.

Google provides SDKs for Android, iOS, and JavaScript, as well as a very Java heavy development process using Maven, combined with the Google Plugin for Eclipse, which is used to design, develop, and deploy your APIs to Google App Engine.

All Google has to do now is open up the Google Console as a Google Cloud Endpoints management console, giving developers on the Google Cloud Platform the ability to design, deploy, and manage their APIs. Then if Google baked in Google Discovery services for all Google Cloud Endpoints, developers would have a pretty slick discovery layer on top of their cloud API stacks. Hell, all you need then is to allow generation of APIs.json for each collection, and boom, you have a pretty complete API design, deployment, management, and discovery platform in the clouds.

Now that Google is added to my API deployment research, I will be keeping a closer eye on what they are doing with respect to being a cloud API platform.


Expanding API Gateway Connectors Into A World of API Deployment Startups

I'm seeing an increase in the number of API deployment services this year, with startups like StrongLoop and APISpark. These companies are looking to help all of us deploy APIs from common systems, often without the need for IT or programming resources.

The providers I'm seeing emerge are catering to some of the lowest hanging fruit for deploying APIs: the commonly used, easiest to access systems that contain the valuable content, data, and media we need to make accessible via APIs.

The common sources for many of these API deployment solutions are:

  • Spreadsheets
  • Databases
  • File stores
  • Specialty CMS and CRM systems

These common information sources represent the places where the average person in any size company, organization, or government agency will be storing their valuable resources. It makes sense that this new wave of API deployment startups would target these services.

If you consider every system integration option that classic API gateways have delivered, it provides a good reference for finding opportunities to build independent API deployment solutions that, if done right, could each be a startup all by themselves.

Not all companies can afford a full API gateway solution, and many have needs too small to justify one. There is an emerging opportunity to help people quickly deploy APIs from common sources like spreadsheets, databases, and file stores, as well as from more unique sources like specialty CMS and CRM systems.

Sometimes I like to look at the expanding API universe as the result of an SOA big bang, where all the tools that were in your SOA toolbox are being liberated, and made available as independent services from an increasing number of new API deployment startups.


Everyone Is About To Get An API With The New Wordpress API

While at API Craft in Detroit this week I had the pleasure of hanging out with two leads on the WordPress(.org) development team, and discussing the API strategy for the blogging platform. Andrew Nacin (@nacin), Lead Developer, and Ryan McCue (@rmccue), WordPress Plugin Developer, facilitated an open circle discussion to work through the challenges that WordPress is facing when developing an API for the open source blogging platform.

At face value, I know a number of API developers who will be less than pleased when they hear about a WordPress API, as both PHP and WordPress are easy targets for developers' hatred, for generating less than perfect code. ;-) But, in the end, you can't ignore some of the stats on WordPress usage:

  • WordPress powers 22.0% of the top 10 million websites
  • That adds up to 60 M+ websites running WordPress around the world

These stats don't even touch on the number of WordPress plugins and mobile solutions that have been developed on the insanely popular website and blogging platform. I couldn't think of a better platform to start rolling out APIs to the masses, and quickly educating a very large group of tech savvy folks along the way.

Many of these WordPress sites are already integrated with popular API platforms like Twitter, Facebook, Google, and Amazon, so it makes sense to educate WordPress administrators on the APIs they are already using, as well as introduce them to the concept of having their own API around the potentially valuable content they generate.

We Are Talking About Developing 60 M+ Distributed APIs
There is no API deployment or management service provider that is even close to numbers of this magnitude. When ready, the WordPress API will give millions of websites, and the individuals, companies, organizations, and government agencies behind them, an Application Programming Interface (API). (weird, I haven't spelled that out in a while) What does this mean for the API sector? I do not know yet, but to me it definitely resembles the long tail of API deployment that I, and 3Scale, have been advocating for over the last four years.

Building For The Lowest Common Denominator
One dominant theme in the conversation around the WordPress API at API Craft was building an API for the lowest common denominator. WordPress is an open source blogging platform built using PHP and MySQL, something that enables it to be installed on just about any hosting platform, which has contributed to the platform's growth. However, many of these hosting platforms restrict common aspects of the Internet, like the ability to use all of your HTTP verbs, such as PUT or DELETE, or simply do not offer essentials like SSL, which is needed for OAuth. The bar for developing an API that will be used by 60 M+ providers has to be set pretty low, ensuring it will work across the entire ecosystem--something WordPress is already pretty damn good at.

Ensuring That The WordPress API Is Extensible
One of the hallmarks of WordPress is the ability to create custom themes and plugins that extend the functionality of the blogging platform, and the API will be no different. WordPress is much more than just a collection of pages, posts, tags, and links. WordPress has been used as the core of just about any type of website, serving up books, photos, videos, maps, and any other content type you can dream up. The WordPress API has to provide a common interface that can be used to create, read, update, and delete the core WP content types, like pages and posts, as well as be extended to any other interpretation the WordPress community can dream up--adding another dimension to the already massive scope of delivering an API to 60 M+ websites.

Single Place To Educate Everyone About APIs
I can't think of another platform that has introduced the average person to web programming more than WordPress (maybe Drupal?). When you go back to the concept of the lowest common denominator, WordPress employs PHP and MySQL to drive its functionality, and combined with the plugin and theme extensibility, the platform has provided a rich gateway to the world of web and mobile app development. The opportunity to introduce the masses to the benefits (and downsides) of APIs is huge! While the already established API developer community may not want the WordPress community developing APIs, it will be the quickest way we get from 10K APIs to millions of APIs. WordPress drives 22.0% of the top 10 million websites, which will instantly provide access to some potentially high value content and data, significantly increasing the size of the overall API economy, as well as contributing to API literacy in every business sector, around the globe.

At the time of this post, the WordPress API is only a plugin, which means it is an optional part of the platform, requiring a WordPress site administrator to know about the plugin, and install it as part of their individual WordPress installation. However, the goal is to bake the API into WordPress as part of the 4.1 release, which could happen as soon as this fall, or at the latest the first or second quarter of 2015. While not all 60 M+ WordPress sites will immediately update to the 4.1 release, a significant portion of websites around the world will instantly have an API--good or bad.
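
For those who haven't played with the plugin, it gives you clean JSON for the core WordPress content types--a response from a posts endpoint looks roughly like this (fields trimmed, and names will vary between versions of the API):

    [
      {
        "ID": 42,
        "title": "Hello World",
        "status": "publish",
        "type": "post",
        "link": "http://example.com/2014/07/hello-world",
        "content": "<p>Welcome to my API driven blog.</p>"
      }
    ]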

I'm pretty excited at the idea of so many websites having APIs, and at the same time I'm completely terrified. It is kind of like flooding the market with some very high value APIs, while also adding millions of very shitty APIs, and I'm sure a lot in the middle. With all the problems we are facing around discovery and integration at 10K APIs, imagine what things will look like with millions of APIs all of a sudden. Do we want every website to have an API this fast?

Beyond just giving a website an API, I'm also very interested in the potential of using the WordPress API as purely an API core, allowing it to drive content and data for mobile, and single page applications (SPA). Considering the innovation we've seen around the core WordPress platform from the community, I imagine we'll see similar innovation when it comes to raw API deployment, and when you start considering the potential when bundled with the latest containerization movement, led by Docker.io, and being driven by Amazon, Google, Microsoft, and Red Hat--your head will start to spin.

WordPress API + Virtual Containers = API Deployment As Part Of The Fabric Of The Cloud

I will be carving out more time to consider the implications of the introduction of the WordPress API, and hopefully provide more feedback to the WordPress API development team. If you are a seasoned WordPress developer, and experienced API consumer, you might consider getting involved too. You can get access to the source code for WP API over at Github, and you can submit issues via the issue management for the project—they also provide updates via the WP blog tagged as json-api.


The New StrongLoop API Server Provides A Look At Future Of API Deployment

I'm looking through the most recent API server release from StrongLoop, and I can't help but see echoes of what I've been researching and covering across the API Evangelist network. API management has been front and center for years, but API deployment is something that is just now being productized, with a wealth of new service providers emerging to deliver API deployment solutions that go beyond DIY frameworks and enterprise API gateways.

Let's start by walking through their announcement of the StrongLoop API Server:

  • LoopBack 2.0 - An open source framework for quickly creating APIs with Node, including the client SDKs.
  • Mobile Backend-as-a-Service - An mBaaS to provide mobile services like push, offline-sync, geopoint, and social login, either on-premise or in the cloud.
  • Connectors - Connectivity for Node apps leveraging over ten supported data sources including Oracle, SQL Server, MongoDB and SOAP.
  • Controller - Automated DevOps for Node apps including profiling, clustering, process management and log management capabilities.
  • Monitoring - A hosted or on-premise graphical console for monitoring resource utilization, response times and function tracing with the ability to send metrics to existing monitoring tools.

Just as StrongLoop did in their release post, let’s dive deeper into LoopBack 2.0, the open source core of StrongLoop, which they say "acts as a glue between apps or devices and data via APIs written in Node”:

  • Studio - A graphical interface to complement the command-line tooling and assist developers in building LoopBack models.
  • Yeoman and Grunt - The ability to script tasks, scaffold, and template applications and externalize their configurations for multiple environments.
  • ExpressJS 4.0 - The latest update to the well-known Node.js package, bringing improvements by removing bundled middleware and refactoring them into maintainable modules, revamping the router to remove confusion around HTTP verb usage, and decoupling Connect, the HTTP framework of Node, from the Express web framework. It is also the E in the MEAN stack (MongoDB, ExpressJS, AngularJS, Node.js).
  • Project Structure - The directory structure has been expanded to make it easier to organize apps and add functionality via pre-built LoopBack components and Node modules.
  • Workspace API - An internal API making it easier to define, configure, and bootstrap your application at design time and runtime by simply defining metadata in the form of JSON--as sketched below.
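
That metadata-driven approach is a big part of why this feels like the future to me--in LoopBack you can describe a model in a small JSON file, and the framework generates the REST API around it, along the lines of this sketch:

    {
      "name": "Note",
      "base": "PersistedModel",
      "properties": {
        "title": { "type": "string", "required": true },
        "content": { "type": "string" }
      }
    }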

This is one of the few sophisticated, next generation, API deployment frameworks I have seen. We have had gateways for a while, and we have a new breed of database and spreadsheet to API providers like APISpark. We also have a new wave of scraping to API solutions from Kimono Labs and Import.io, but I’d say Orchestrate.io gets us closest to the vision I have for StrongLoop, when it comes to API deployment.

I’ve referenced this ability in my stories on virtual API stacks:

This new approach to API deployment allows us to rapidly define, deploy, and orchestrate stacks of API resources for use in our web, single page, and mobile applications. I really feel like BaaS, as an entire industry, was just a short growth phase that led us to this point, where anyone can quickly deploy their own BaaS, for any broad, or niche, purpose. I also see my research into the world of APIs and Single Page Apps (SPAs) reflected here, in StrongLoop's API platform vision.

I feel that StrongLoop has an important take on API deployment, one that reflects where leading API, web, single page, and mobile app developers have been for a while now. The difference is that StrongLoop is providing it as a standardized platform, allowing developers to much more elegantly orchestrate their entire lifecycle. You have everything you need to connect to existing resources, generate new API resources, and organize work into reusable parts, to deliver the web, single page, and mobile apps you need.

I am closely watching this new generation of API deployment providers, companies like StrongLoop, Orchestrate, Flynn, and Cosmic. I see these players as the next generation API gateway, going way beyond just providing an enterprise gateway to internal assets. This newer vision is much more directly aligned with the needs of developers, enabling them to rapidly design, deploy and manage the API services they need to drive the web, single page, and mobile apps that are the vehicles in the API economy.


API Deployment For Non-Developers Using Zapier, Google Docs, and APISpark

I'm exploring different ways that APIs can be deployed, with an emphasis on deployment by non-developers. There are numerous cloud services available that allow non-developers to execute common business tasks like registration forms, surveys, payments, and product sales, and when you combine these business functions with Zapier, Google Docs and APISpark—you can deploy an API, no code skills required.

This story begins with the ability to deploy an API from any Google Spreadsheet using APISpark, putting API deployment within the grasp of the average business user. Next, I want the easiest possible way to get data, from multiple sources, into a Google Spreadsheet. Answer: Zapier (or another reciprocity provider, like IFTTT). To support this, I started looking through the numerous Zapier recipes that allow me to publish results to a Google Spreadsheet—there are 167!

The most obvious data source I see is Twitter. Every time there is a Tweet from a specific user, or from a specific Twitter search, you can have it published to a Google Spreadsheet, and when you have that spreadsheet connected to an APISpark API, the results will be automatically available via API.

The second most common source of data I see would be cloud based forms. I see providers like Wufoo, Gravity Forms, and JotForm, to name a few, that allow you to publish form submissions to any Google Spreadsheet, and with the APISpark integration, all your form submissions are automatically available via API.

After that, I see numerous commerce, payments, and other key business functions that Zapier enables publishing of data and content into a Google Spreadsheet from. All of these services have APIs, which is why Zapier is able to do what it does, but using them directly would require a developer to tackle custom API integration (not for this story). This story is all about enabling non-developers to deploy APIs, from common business functions, no coding necessary--Zapier is our middleman.

Beyond Twitter, forms, payments, and product sales, I will explore other easy to implement API deployments using Zapier, Google Docs, and APISpark. These represent API deployment scenarios I don't think people are tracking on, and with a little education, I think we can bring problem owners up to speed, and increase the number of APIs available, driven by the average person, and the common business problems they face every day.
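
Once one of these spreadsheet-backed APIs is live, any developer (or curious power user) can consume it like any other web API. Here is a minimal sketch of what reading one might look like; the endpoint URL and record fields are hypothetical:

```javascript
// A minimal sketch of consuming a spreadsheet-backed API.
// The endpoint URL and record fields are hypothetical.
var https = require('https');

https.get('https://example.apispark.net/v1/formsubmissions/', function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    // Each spreadsheet row comes back as a JSON record.
    JSON.parse(body).forEach(function (submission) {
      console.log(submission.name, submission.email);
    });
  });
});
```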

Disclosure: APISpark is an API Evangelist partner.


Deploying An API From Amazon S3 File Store

I'm spending a lot of time updating my API deployment research lately, making sure it reflects what is truly going on out there in the space. In addition to tracking on legacy approaches to API deployment, like enterprise API gateways, or using open-source API frameworks, I am also trying to understand the realities of scraping data for deployment of APIs, and new solutions from API platforms like APISpark, StrongLoop, Orchestrate.io, and Import.io.

When it comes to the realities of deploying an API, your data or content sources are likely to come from a myriad of file stores, databases, and other systems, and I'm looking to explore as many of them as I possibly can. Today's exploration is focused on deploying an API using Amazon S3 as a file store. I use Amazon S3 for all my heavy object storage, which includes images, PDFs, XML, JSON and CSV data stores—it makes sense that some companies would want to deploy APIs using their Amazon S3 stores.

I'm using APISpark as my API deployment platform, which allows me to first establish a datastore, which is mapped to a specific bucket within my Amazon S3 account. What I put into my buckets and folders is up to me. I might use it to quickly provide access to my images, a folder of XML files, PDFs, or other resources. Once I have my datastore defined, I can deploy a simple web API using APISpark, which gives me all the expected features of an API—URL API endpoints, documentation, code samples, basic authentication (username / password), analytics, and much more.
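
For developers curious what sits underneath this kind of datastore mapping, here is a minimal sketch, using the AWS SDK for Node.js, of the sort of bucket listing an S3-backed API layer performs; the bucket name, prefix, and region are hypothetical:

```javascript
// A minimal sketch of listing the S3 objects behind a file-store API.
// Bucket name, prefix, and region are hypothetical examples.
var AWS = require('aws-sdk');
var s3 = new AWS.S3({ region: 'us-east-1' });

s3.listObjects({ Bucket: 'my-api-datastore', Prefix: 'xml/' }, function (err, data) {
  if (err) { return console.error(err); }
  // Each object in the folder becomes a resource the API can serve.
  data.Contents.forEach(function (object) {
    console.log(object.Key, object.Size + ' bytes');
  });
});
```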

As with the Google Spreadsheet to API example I wrote on Monday, this scenario allows anyone who manages content and data, to easily organize it on S3, then deploy an API for access, with no IT or developer experience required. You might need to share images, files, or other content with another department within your company or organization, partners outside the corporate firewall, or maybe some 3rd party developers you have building a new website or mobile application.

API deployment is getting easier, and cloud API service providers like APISpark are making API deployment something ANYONE can do—stop waiting for IT or developer resources! ;-)

Disclosure: APISpark is an API Evangelist partner.


Building Blocks Of API Deployment

As I continue my research into the world of API deployment, I'm trying to distill the services and tooling I come across down into what I consider to be a common set of building blocks. My goal with identifying API deployment building blocks is to provide a simple list of the moving parts that enable API providers to successfully deploy their services.

Some of these building blocks overlap with other core areas of my research like design, and management, but I hope this list captures the basic building blocks of what anyone needs to know, to be able to follow the world of API deployment. While this post is meant for a wider audience, beyond just developers, I think it provides a good reminder for developers as well, and can help things come into focus. (I know it does for me!)

Also there is some overlap between some of these building blocks, like API Gateway and API Proxy, which do very similar things but are labeled differently. Identifying building blocks can be very difficult for me, and I'm constantly shifting definitions around until I find a comfortable fit--so some of these will evolve, especially with the speed at which things are moving in 2014.

CSV to API - Text files that contain comma separated values, or CSVs, are one of the quickest ways to convert existing data to an API. Each row of a CSV can be imported and converted to a record in a database, easily generating a RESTful interface that represents the data stored in the CSV. CSV to API can be very messy depending on the quality of the data in the CSV, but can be a quick way to breathe new life into old catalogs of data lying around on servers or even desktops. The easiest way to deal with CSV is to import it directly into a database, then generate the API from the database, but the process can also be done at the time of API creation (see the sketch after this list).
Database to API - Database to API is definitely the quickest way to generate an API. If you have valuable data, generally, in 2013, it will reside in a Microsoft, MySQL, PostgreSQL or other common database platform. Connecting to a database and generating a CRUD, or create, read, update and delete API on existing data makes sense for a lot of reasons. This is the quickest way to open up product catalogs, public directories, blogs, calendars or any other commonly stored data. APIs are rapidly replacing database connections; when bundled with common API management techniques, APIs can allow for much more versatile and secure access that can be made public and shared outside the firewall.
Framework - There is no reason to hand-craft an API from scratch these days. There are numerous frameworks out there that are designed for rapidly deploying web APIs. Deploying APIs using a framework is only an option when you have the necessary technical and developer talent to understand the setup of the environment and follow the design patterns of each framework. When it comes to planning the deployment of an API using a framework, it is best to select one of the common frameworks written in the preferred language of the available developer and IT resources. Frameworks can be used to deploy data APIs from CSVs and databases, content from documents, or custom code resources that allow access to more complex objects.
API Gateway - API gateways are enterprise quality solutions that are designed to expose API resources. Gateways are meant to provide a complete solution for exposing internal systems and connecting with external platforms. API gateways are often used to proxy and mediate existing API deployments, but may also provide solutions for connecting to other internal systems like databases, FTP, messaging and other common resources. Many public APIs are exposed using frameworks, but most enterprise APIs are deployed via API gateways--supporting much larger deployments.
API Proxy - API proxies are commonplace for taking an existing API interface and running it through an intermediary, which allows for translations, transformations and other added services on top of the API. An API proxy does not deploy an API, but can take existing resources like SOAP and XML-RPC and transform them into more common RESTful APIs with JSON formats. Proxies provide other functions such as service composition, rate limiting, filtering and securing of API endpoints. API gateways are the preferred approach for the enterprise, and the companies that provide proxy services support larger API deployments.
API Connector - Contrary to an API proxy, there are API solutions that are proxyless, simply allowing an API to connect or plug in to the advanced API resources. While proxies work in many situations, allowing APIs to be mediated and transformed into required interfaces, API connectors may be preferred in situations where data should not be routed through proxy machines. API connector solutions only connect to existing API implementations, and are easily integrated with existing API frameworks, as well as web servers like Nginx.
Hosting - Hosting is all about where you are going to park your API. Usual deployments are on-premise within your company or data center, in a public cloud like Amazon Web Services, or a hybrid of the two. Most of the existing service providers in the space support all types of hosting, but some companies, who have the required technical talent, host their own API platforms. With HTTP being the transport that modern web APIs put to use, sharing the same infrastructure as web sites, hosting APIs does not take any additional skills or resources if you already have a web site or application hosting environment.
API Versioning - There are many different approaches to managing different versions of web APIs. When embarking on API deployment you will have to make a decision about how each endpoint will be versioned and maintained. Each API service provider offers versioning solutions, but generally it is handled within the API URI or passed as an HTTP header. Versioning is an inevitable part of the API life-cycle, and is better integrated by design, as opposed to waiting until you are forced to make an evolution in your API interface.
Documentation - API documentation is an essential building block for all API endpoints. Quality, up to date documentation is essential for on-boarding developers and ensuring they successfully integrate with an API. Documentation needs to be derived from quality API designs, kept up to date, and made accessible to developers via a portal. There are several tools available for automatically generating documentation, and even what is called interactive documentation, which allows developers to make live calls against an API while exploring the documentation. API documentation is part of every API deployment.
Code Samples - Second to documentation, code samples in a variety of programming languages are essential to a successful API integration. With quality API design, generating samples that can be used across multiple API resources is possible. Many of the emerging API service providers, and the same tools that generate API documentation from JSON definitions, can also auto generate code samples that can be used by developers. Generation of code samples in a variety of programming languages is a requirement during API deployment.
Scraping - Harvesting or scraping of data from an existing website, content or data source. While we would all like content and data sources to be machine readable, sometimes you just have to get your hands dirty and scrape it. I don't support scraping of content in all scenarios and business sectors, but in the right situations scraping can provide a perfectly acceptable content or data source for deploying an API.
Container - The new virtualization movement, led by Docker, and supported by Amazon, Google, Red Hat, Microsoft, and many more, is providing new ways to package up APIs, and deploy them as small, modular, virtualized containers.
Github - Github provides a simple, but powerful way to support API deployment, allowing for publishing of a developer portal, documentation, code libraries, TOS, and all the supporting API business building blocks that are necessary for any API effort. At a minimum Github should be used to manage public code libraries, and engage with API consumers using Github's social features.
Terms of Use / Service - Terms of Use provide a legal framework for developers to operate within. They set the stage for the business development relationships that will occur within an API ecosystem. TOS should protect the API owner's company, assets and brand, but should also provide assurances for developers who are building businesses on top of an API. Make sure an API's TOS passes inspection with the lawyers, but also strikes a healthy balance within the ecosystem and fosters innovation.
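
As a concrete illustration of the CSV to API building block above, here is a minimal sketch, using Node.js and Express, that serves the rows of a CSV file as a simple read-only API. The file name and routes are hypothetical, and the parsing is naive (it won't handle quoted fields), but it shows how little stands between an old CSV and a web API:

```javascript
// A minimal CSV-to-API sketch. File name and routes are hypothetical,
// and the CSV parsing is naive (it won't handle quoted fields).
var fs = require('fs');
var express = require('express');

// Read the CSV and turn each row into a simple JSON record.
var lines = fs.readFileSync('catalog.csv', 'utf8').trim().split('\n');
var headers = lines[0].split(',');
var records = lines.slice(1).map(function (line) {
  var row = {};
  line.split(',').forEach(function (value, i) { row[headers[i]] = value; });
  return row;
});

var app = express();

// The whole catalog as a collection.
app.get('/catalog', function (req, res) {
  res.json(records);
});

// A single record, addressed by row number.
app.get('/catalog/:id', function (req, res) {
  var record = records[req.params.id];
  if (!record) { return res.status(404).json({ error: 'not found' }); }
  res.json(record);
});

app.listen(3000);
```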

If there are any features, services or tools you depend on when deploying your APIs, please let me know at @kinlane. I'm not trying to create an exhaustive list, I just want to get an idea of what is available across the providers, and where the gaps potentially are.

I feel like I'm finally getting a handle on the building blocks for API design, deployment, and management, and understanding the overlap between the different areas. I will revisit my design and management building blocks, and evolve my ideas of what my perfect API editor would look like, and how this fits in with API management infrastructure from 3Scale, and even API integration.

Disclosure: 3Scale is an API Evangelist partner.


Deploy An API From A Google Spreadsheet Using APISpark

Spreadsheets are the most used datastore in business. When Google came out with their web-based spreadsheet, it was a game changer (for those who had access) when it came to managing, collaborating on, and sharing small data sets.

When it comes to data management, not all of us live in the world of big data, and spreadsheets are a quick and dirty data store that gets the job done. As the web was maturing, Google saw an opportunity, and launched the labs version of Google Spreadsheets in mid 2006, bringing spreadsheets into the web 2.0 era of the Internet. In 2014, the next step in the evolution of the spreadsheet is to be able to plug spreadsheets directly into the API economy, allowing spreadsheet data stewards to make their valuable content and data available to web, mobile and Internet of Things (IoT) developers via simple web APIs.

Google Spreadsheets allows for accessing data via a JSON feed natively, and I have written about adding an API, plus management layer, on top of a public or private Google Spreadsheet, but there is also an instant, cloud-based approach to deploying an API from a Google Spreadsheet, using APISpark. Restlet has taken their open source REST framework, launched it as a service, and opened up the possibility for anyone to deploy an API from an existing Google Spreadsheet—no coding necessary.
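
For reference, here is a minimal sketch of reading that native JSON feed from a published Google Spreadsheet. The spreadsheet key is a placeholder, and the URL pattern reflects my reading of the legacy Google Spreadsheets list feed, so treat the details as illustrative:

```javascript
// A minimal sketch of reading a public Google Spreadsheet's JSON feed.
// The spreadsheet key is a placeholder, and the URL pattern follows the
// legacy list feed, so verify it against Google's current documentation.
var https = require('https');

var key = 'YOUR-SPREADSHEET-KEY';
var url = 'https://spreadsheets.google.com/feeds/list/' + key +
          '/od6/public/values?alt=json';

https.get(url, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    // Each entry in the feed is one row of the spreadsheet.
    JSON.parse(body).feed.entry.forEach(function (row) {
      console.log(row.title.$t);
    });
  });
});
```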

APISpark has provided both the API deployment and API management layers that spreadsheet owners will need. This is an important evolution in the API economy, because it allows the people who are actually managing vital data to securely expose it for use in applications, without needing any developer or IT resources. This will bring data stewards closer to the actual people who need their data, whether it be internally between systems or business units, externally with partners, or even publicly for anyone looking to use a dataset on a website or application.

A lot can be lost in translation when a dataset has to go through IT or developers before it can be made available as an API, not to mention the cost savings in being able to cut out the middleman. If we can equip any small business, enterprise, non-profit, or government data steward, who spends their day managing content and data in spreadsheets, with the ability to securely expose and easily manage APIs from those data sources—imagine the fuel that will be fed to the API economy.

I am working on other walk-throughs demonstrating how to expose datasets managed in Google Spreadsheets as APIs, using APISpark. When I get them done, I will publish them here on API Evangelist, as well as on my API Deployment research site, for folks to learn from.

Disclosure: Restlet is an API Evangelist partner.


API Design White Paper

Download as PDF

My research for API Evangelist spans 50+ projects, but my core research is focused on seven projects in API 101, history, design, deployment, management, discovery and integration. In each of these areas, I evaluate who the key players (companies and individuals) are, and the tools and services they produce.

Using my own, custom developed system, I monitor these key players, in all of the research areas, consuming blog posts, tweets, code commits, and much more, trying to establish a deep awareness in each of these fundamental layers of the API economy. The goal of my monitoring is to help me in producing blog posts (short form), and white papers (long form), while generating valuable analysis for my research, and increasing my own understanding and awareness of the API economy.

I have already produced white papers for API 101, History of APIs, API Deployment, and API Management, and I just now finished one for API Design. As with all of my research projects, and the resulting white papers, they are a work in progress, and meant to be a living snapshot of my research. I generate this white paper using the same tools I use to publish my API design research to Github. My CMS lets me format the static content, while also pulling dynamic content from my tracking system(s), and rolls it up into a single PDF white paper you can take with you to learn about the world of API design--it isn't perfect, but provides a good summary of my research.

The other white papers I've produced are all due for an update, but first I need to focus on producing the first version of white papers for API discovery and integration, both extremely relevant, fast moving layers of the API economy. Looking at the date on my other white papers, June 2013, it seems that summer is a good time to hibernate and produce these new, long form snapshots of my ongoing research into the API economy.


Contributing To The Deployment Lifecycle

Automatically generating an API from a machine readable definition has long been the dream of API providers. In reality, this is much harder to achieve than one might think, but there has been significant work from leading API design service providers and their developer communities, and in 2014, you can indeed auto generate an API from your API definition.

API definitions like API Blueprint, RAML, or Swagger do what they do best--describe your interface. How this merges with the actual deployment of an API backend is not always evident. Depending on how you craft your API definition, it may contain your underlying data model, and for simpler data or content APIs, auto generation of a backend might be possible.
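
To make that idea tangible, here is a minimal sketch of definition-driven deployment: a simple, illustrative definition object (not a real API Blueprint, RAML, or Swagger document) that a few lines of Node.js turn into live, read-only endpoints:

```javascript
// A minimal sketch of generating an API backend from a definition.
// The definition format here is illustrative, not any specific spec,
// and the seed data stands in for a real data model.
var express = require('express');
var app = express();

var definition = {
  basePath: '/api',
  resources: {
    notes: [{ id: 1, title: 'hello world' }]
  }
};

// Generate a read-only endpoint for every resource in the definition.
Object.keys(definition.resources).forEach(function (name) {
  app.get(definition.basePath + '/' + name, function (req, res) {
    res.json(definition.resources[name]);
  });
});

app.listen(3000);
```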

When it comes to more complex API resources, there will undoubtedly be much more magic behind the interface, requiring connection with unique backend systems and code libraries. I'm confident that the API community will continue to produce API definition driven connectors for common IT systems, further evolving the vision of automatically generating APIs from their machine readable API definitions—replacing much of what more traditional service gateways have always delivered.

There is also a new evolution in cloud computing, one that will contribute significantly to API deployment, being called containers. New approaches to deploying application resources, from Docker, and subsequently Amazon, Red Hat, Google, Microsoft, and others, will rapidly escalate how we deploy the APIs that we depend on. API definitions provide a perfect interface definition for resources that are deployed as containers. We are just missing the linking between current container deployment definitions and the evolving world of API definitions—a link that will come into focus in 2014.


If I Could Design My Perfect API Design Editor

I've been thinking a lot about API design lately, the services and tooling coming from Apiary, RAML and Swagger, and wanted to explore some thoughts around what I would consider to be killer features for the killer API design editor. Some of these thoughts are derived from the features I've seen in Apiary, the RAML editor, and most recently the Swagger Editor, but I'd like to riff on them a little bit and play with what could be the next generation of features.

While exploring my dream API design editor, I'd like to walk through each group of features, organized around my intentions and objectives around my API designs.

Getting Started
When kicking off the API design process, I want to be able to jumpstart the API design lifecycle from multiple sources. There will be many times that I want to start from a clean slate, but many times I will be working from existing patterns.

  • Blank Canvas - I want to start with a blank canvas, no patterns to follow today, I’m painting my masterpiece. 
  • Import Existing File - I have a loose API design file laying around, and I want to be able to open, import and get to work with it, in any of the formats. 
  • Fork From Gallery - I want to fork one of my existing API designs, that I have stored in my API design gallery (I will outline below).
  • Import From API Commons - Select an existing API design pattern from API Commons and import into editor, and API design gallery.

My goals in getting started with API design will be centered around re-using the best patterns across the API space, as well as my own individual or company API design gallery. We are already mimicking much of this behavior, we just don't have a central API design editor for managing these flows.

Editing My API Design
Now we get to the meat of the post, the editor. I have several things in mind when I’m actually editing a single API definition, functions I want, actions I want to take around my API design. These are just a handful of the editor specific features I’d love to see in my perfect API design editor.

  • Multi-Lingual - I want my editor to work with API definitions in API Blueprint, RAML and Swagger. I prefer to edit my API designs in JSON, but I know many people I work with will prefer markdown or YAML based formats, and my editor needs to support fluid editing between all popular formats.
  • Internationalization - How will I deal with making my API resources available to developers around the world? Beyond API definition languages, how do I actually make my interfaces accessible, and understood by consumers around the globe?
  • Dictionary - I will outline my thoughts around a central dictionary below, but I want my editor to pull from a common dictionary, providing a standardized language that I work from, as well as my company when designing interfaces, data models, etc. 
  • Annotation - I want to be able to annotate various aspects of my API designs and have associated notes, conversation around these elements of my design. 
  • Highlight - Built in highlighting would be good to support annotations, but also to reference various layers of my API designs during conversations with others, or even allow the reverse engineering of my designs, complete with the notes and layers of the onion for others to follow.
  • Source View - A view of my API design that allows me to see the underlying markdown, YAML, or JSON and directly edit the underlying API definition language. 
  • GUI View - A visual view of my API design, allowing for adding, editing and removing elements in an easy GUI interface, no source view necessary for designing APIs. 
  • Interactive View - A rendered visual view of my API, allowing me to play with either my live API or generated mock API, through interactive documentation within my editors. 
  • Save To Gallery - When I’m done working with my API designs, all roads lead to saving it to my gallery, once saved to my working space I can decide to take other actions. 
  • Suggestions - I want my editor to suggest the best patterns available to me from private and public sources. I shouldn't ever design my APIs in the dark.

The API design editor should work like most IDEs we see today, but keep it simple, and reflect extensibility like Github's Atom editor. My editor should give me full control over my API designs, and enable me to take action in many pre-defined or custom ways one could imagine.

Taking Action
My API designs represent the truth of my API, at any point within its lifecycle, from initial conception to deprecation. In my perfect editor I should be able to take meaningful actions around my API designs. For the purposes of this story I’m going to group actions into some meaningful buckets, that reflect the expanding areas of the API lifecycle. You will notice the four areas below, reflect the primary areas I track on via API Evangelist.

Design Actions
Early on in my API lifecycle, while I'm crafting new designs, I will need to take action around my designs. Design actions will help me iterate on designs before I reach the expensive deployment and management phases.

  • Mock Interface - With each of my API designs I will need to generate mock interfaces that I can use to play with what my API will deliver. I will also need to share this URL with other stakeholders, so that they can play with, and provide feedback on my API interface. 
  • Copy / Paste - API designs will evolve and branch out into other areas. I need to be able to copy / paste or fork my API designs, and my editor, and API gallery should keep track of these iterations so I don’t have to. The API space essentially copy and pastes common patterns, we just don’t have a formal way of doing it currently. 
  • Email Share - I want to easily share my API designs via email with other key stakeholders that will be part of the API lifecycle. Ideally I wouldn’t be emailing around the designs themselves, just pointers to the designs and tools for interacting within the lifecycle. 
  • Social Share - Sometimes the API design process will occur over common social networks, and in some cases be very public. I want to be able to easily share all my API designs via my most used social networks like Github, Twitter and LinkedIn.
  • Collaboration - API design should not be done in isolation; it should be a collaborative process with all key stakeholders. I would like to even have Etherpad style real-time interactions around the design process with other users.

API design actions are the first stop in the expanding API design lifecycle, being able to easily generate mocks, share my interfaces, and collaborate with other stakeholders. Allowing me to quickly, seamlessly take action throughout the early design cycles will save me money, time and resources early on—something that only becomes more costly and restrictive later on in the lifecycle.

Deployment Actions
Next station in the API design lifecycle, is being able to deploy APIs from my designs. Each of the existing API definition formats provide API deployment solutions, and with the evolution in cloud computing, we are seeing even more complete, modular ways to take action around your API designs.

  • Server - With each of my API designs, I should be able to generate server side code in the languages that I use most. I should be able to register specific frameworks, languages, and other defining aspects of my API server code, then generate the code and make available for download, or publish using Github and FTP. 
  • Container - Cloud computing has matured, producing a new way of deploying very modular architectural resources, giving rise to a new cloud movement, being called containers. Container virtualization will do for APIs what APIs have done for companies in the last 14 years. Containers provide a very defined, self-contained way of deploying APIs from API design blueprints, ushering in a new way of deploying API resources in the coming years.

I need help to deploy my APIs, and with container solutions like Docker, I should have predefined packages I can configure with my API designs, and deploy using popular container solutions from Google, Amazon, or other cloud providers.

Management Actions
After I deploy an API I will need to use my API definitions as a guide for an increasing number of areas of my management process, not just the technical, but the business and politics of my API operations.

  • Documentation - Generation of interactive API documentation is what kicked off the popularity of API design, and the importance of API definitions. Swagger provided the Swagger UI, an interactive, hands-on way of learning about what an API offered, but this wasn't the only motivation—providing up to date documentation as well added just the incentive API providers needed to generate machine readable documentation.
  • Code - Second to API documentation, providing code samples, libraries, and SDKs is one of the best ways you can eliminate friction when on-boarding new API users. API definitions provide a machine readable set of instructions for generating the code that is necessary throughout the API management portion of the API lifecycle.
  • Embeddable - JavaScript provides a very meaningful way to demonstrate the value of APIs, and embeddable JavaScript should always be part of the API lifecycle. Machine readable API definitions can easily generate visualizations that can be used in documentation, and other aspects of the API lifecycle.

I predict, with the increased adoption of machine readable API formats like API Blueprint, RAML and Swagger, we will see more layers of the API management process be expanded on, further automating how we manage APIs.

Discovery Actions
Having your APIs found, and being able to find the right API design for integration, are two sides of an essential coin in the API lifecycle. We are just now beginning to get a handle on what is needed when it comes to API discovery.

  • APIs.json - I should be able to organize API designs into groupings, and publish an APIs.json file for these groups. API designs should be able to be organized in multiple groups, organized by domain and sub-domain. 
  • API Commons - Thanks to Oracle, the copyright of API definitions will be part of the API lifecycle. I want the ability to manage and publish all of my designs to the API Commons, or any other commons for sharing of API designs.

The discovery of APIs has long been a problem, but is just now reaching the critical point where we have to start developing solutions for not just finding APIs, but also understanding what they offer, and the details of the interface, so we can make sense of not just the technical, but the business and political decisions around API driven resources.
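
To illustrate the APIs.json grouping idea above, here is a minimal sketch of what one of these index files might look like. The property names follow my reading of the early APIs.json format, and all the names and URLs are placeholders:

```json
{
  "name": "Example API Collection",
  "description": "A group of API designs, organized by domain",
  "url": "http://example.com/apis.json",
  "apis": [
    {
      "name": "Products API",
      "description": "Read and manage the product catalog",
      "humanURL": "http://example.com/products",
      "baseURL": "http://api.example.com/products",
      "properties": [
        { "type": "Swagger", "url": "http://example.com/products/swagger.json" }
      ]
    }
  ]
}
```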

Integration Actions
Flipping from providing APIs to consuming APIs, I envision a world where I can take actions around my API designs that focus on the availability and integration of valuable API driven resources. As an API provider, I need as much assistance as I can get to look at my APIs from an external perspective, and being able to take action in this area will grow increasingly important.

  • Testing - Using my machine readable API definitions, I should be able to publish testing definitions, that allow the execution of common API testing patterns. I’d love to see providers like SmartBear, Runscope, APITools, and API Metrics offer services around the import of API design generated definitions. 
  • Monitoring - Just like API testing, I want to be able to generate definitions that allow for the monitoring of API endpoints. My API monitoring tooling should allow me to generate standard monitoring definitions, and import and run them in my API monitoring solution (see the sketch after this list).
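
Here is a minimal sketch of the kind of definition-driven monitoring I have in mind: a list of endpoints (hypothetical URLs, standing in for what would be generated from an API definition) that gets pinged, logging status codes and response times:

```javascript
// A minimal sketch of definition-driven monitoring. The endpoint list is
// hypothetical, standing in for URLs generated from an API definition.
var https = require('https');

var endpoints = [
  'https://api.example.com/products',
  'https://api.example.com/orders'
];

endpoints.forEach(function (url) {
  var started = Date.now();
  https.get(url, function (res) {
    // Log the status code and response time for each endpoint.
    console.log(url, res.statusCode, (Date.now() - started) + 'ms');
  }).on('error', function (err) {
    console.log(url, 'DOWN', err.message);
  });
});
```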

I'd say that API integration is the fastest growing area of the API space, second only to API design itself. Understanding how an API operates from an integrator's perspective is valuable, not just to the integrator, but also the provider. I need to be thinking about integration issues early on in the API design lifecycle to minimize costly changes downstream.

Custom Actions
I've laid out some of the essential actions I'd like to be able to take around my API definitions, throughout the API lifecycle. I expect the greatest amount of extensibility from my API design editor in the future, and should be able to extend it in any way that I need.

  • Links - I need a dead simple way to take an API design, and publish to a single URL, from within my editor. This approach provides the minimum amount of extensibility I will need in the API design lifecycle. 
  • JavaScript - I will need to run JavaScript that I write against all of my API designs, generating specific results that I will need throughout the API design process. My editor should allow me to write, store and execute JavaScript against all my API designs. 
  • Marketplace - There should be a marketplace to find other custom actions I can take against my API designs. I want a way to publish my API actions to the marketplace, as well as browse other API actions, and add them to my own library.

We’ve reached a point where using API definitions like API Blueprint, RAML, and Swagger are common place, and being able to innovate around what actions we take throughout the API design lifecycle will be critical to the space moving forward, and how companies take action around their own APIs.

API Design Gallery
In my editor, I need a central location to store and manage all of my API designs. I’m calling this a gallery, because I do not want it only to be a closed off repository of designs, I want to encourage collaboration, and even public sharing of common API design patterns. I see several key API editor features I will need in my API design gallery.

  • Search - I need to be able to search for API designs, based upon their content, as well as other meta data I assign to my designs. I should be able to easily expose my search criteria, and assist potential API consumers in finding my API designs as well. 
  • Import - I should be able to import any API design from a local file, or provide a public URL and generate a local copy of any API design. Many of my API designs will be generated from an import of an existing definition.
  • Versioning - I want the API editor of the future to track all versioning of my API designs. Much like managing the code around my API, I need the interface definitions to be versioned, and the standard feature set for managing this process. 
  • Groups - I will be working on many API designs, with various stakeholders in the success of any API design. I need a set of features in my API design editor to help me manage multiple groups, and their access to my API designs.
  • Domains - Much like the Internet itself, I need to organize my APIs by domain. I have numerous domains under which I manage different groups of API resources. Generally I publish all of my API portals to Github under a specific domain, or sub-domain—I would like this level of control in my API design editor.
  • Github - Github plays a central role in my API design lifecycle. I need my API design editor to help me manage everything, via public and private Github repository. Using the Github API, my API design editor should be able to store all relevant data on Github—seamlessly. 
  • Diff - What are the differences between my API designs? I would like to understand the difference between each of my API resource types, and versions of each API design. It might be nice if I could see the difference between my API designs, and other public APIs I might consider as competitors.
  • Public - The majority of my API designs will be public, but this won’t be the case with every designer. API designers should have the control over whether or not their API designs are public or private.

My API design gallery will be the central place I work from. Once I reach a critical mass of designs, I will have many of the patterns I need to design, deploy and manage my APIs. It will be important for me to have access to import the best patterns from public repositories like API Commons. To evolve as an API designer, I need to easily create, store, and evolve my own API designs, while also being influenced by the best patterns available in the public domain.

Embeddable Gallery
Simple visualizations can be an effective tool in helping demonstrate the value an API delivers. I want to be able to manage open, API driven visualizations, using platforms like D3.js. I need an arsenal of embeddable, API driven visualizations to help tell the story of the API resources I provide, so give me a gallery to manage them.

  • Search - I want to be able to search the meta data around the API embeddable tools I develop. I will have a wealth of graphs, charts, and more functional, JavaScript widgets I generate. 
  • Browse - Give me a way to group, and organize my embeddable tools. I want to be able to organize, group and share my embeddable tools, not just for my needs, but potentially to the public as well.

A picture is worth a thousand words, and being able to easily generate interactive visualizations, driven by API resources, that can be embedded anywhere is critical to my storytelling process. I will use embeddable tools to tell the story of my API, but my API consumers will also use these visualizations as part of their efforts, and hopefully develop their own as well.

API Dictionary
I need a common dictionary to work from when designing my APIs. I need to use consistent interface names, field names, parameters, headers, media types, and other definitions that will assist me in providing the best API experience possible.

  • Search - My dictionary should be available to me in any area of my API design editor, in true IDE style, while I'm designing. Search of my dictionary will be essential to my API design work, but also to the groups that I work with.
  • Schema.org - There are plenty of existing patterns to follow when defining my APIs, and my editor should always assist me in adopting, and reusing any existing pattern I determine as relevant to my API design lifecycle, like Schema.org.
  • Dublin Core - How do I define the metadata surrounding my API designs? My editor should assist me to use common metadata patterns available like Dublin Core.
  • Media Types - The results of my API should conform to existing document representations, when possible. Being able to explore the existing media types available while designing my API would help me emulate existing patterns, rather than reinventing the wheel each time I design an API. 
  • Custom - My dictionary should be able to be driven by existing definitions, or allow me to import and define my own vocabulary based upon my operations. I want to extend my dictionary to meet the unique demands of my API design lifecycle.

I want my API design process to be driven by a common dictionary that fits my unique needs, but borrows from the best patterns already available in the public space. We already emulate many of the common patterns we come across, we just don’t have any common dictionary to work from, enforcing healthy design via my editor.

An Editor For Just My Own API Design Process
This story has evolved over the last two weeks, as I spent time in San Francisco discussing API design, then spent a great deal of time driving, and thinking about the API design lifecycle. This is all part of my research into the expanding world of API design, which will result in a white paper soon, and my intent is to just shed some light on what might be some of the future building blocks of the API design space. My thoughts are very much based in my own selfish API design needs, but also upon what I'm seeing in the growing API design space.

An Editor For A Collective API Design Process
With this story, I intend to help keep the API design process collaborative, and when relevant, a public affair. I want to ensure we work from existing patterns that are defined in the space, and as we iterate and evolve APIs, we collectively share our best patterns. You should not just be proud of your API designs, and willing to share them publicly, you should demonstrate the due diligence that went into your designs, attribute the patterns that contributed to them, and share back your own interpretation—encouraging re-use and sharing further downstream.

What Features Would Be Part of Your Perfect API Design Editor?
This is my vision around the future of API design, and what I’d like to have in my editor—what is yours? What do you need as part of your API design process? Are API definitions part of the “truth” in your API lifecycle? I’d love to hear what tools and services you think should be made available, to assist us in designing our APIs.

Disclosure: I'm still editing and linking up this post. Stay tuned for updates.


What Are The Incentives For Creating Machine Readable API Definitions?

After #Gluecon in Colorado the other week, I have API design on the brain. A portion of the #APIStrat un-workshops was dedicated to API design related discussion, and API design is also the most trafficked portion of API Evangelist this year, according to my Google Analytics.

At #Gluecon, 3Scale and API Evangelist announced our new API discovery project APIs.json, and the associated tooling, the API search engine APIs.io. For APIs.json, APIs.io, and API Commons to work, we are counting on API providers and API consumers creating machine readable API definitions.

With this in mind, I wanted to do some exploration--what would be possible incentives for creating machine readable API definitions?

  • JSON API Definition
  • Interactive Documentation
  • Server Side Code Deployment
  • Client Side Code Generation
  • Design, Mocking, and Collaboration
  • Markdown Based API Definition
  • YAML Based API Definition
  • Reusability, Interoperability and Copyright
  • Testing & Monitoring
  • Discovery
  • Search

The importance of having an API definition of available resources is increasing. It was hard to realize the value of defining APIs with the heavy, top down defined WSDL, and even its web counterpart WADL, but with these new approaches, other incentives are emerging—incentives that live throughout the API lifecycle.

The first tangible shift in this area was when Swagger released the Swagger UI, providing interactive documentation that was generated from a Swagger API definition. Apiary quickly moved the incentives to an earlier stage in the API design lifecycle with design, mocking and collaboration opportunities.

As the API design world continues to explode, I'm seeing a number of other incentives emerge for API providers to generate machine readable API definitions. I am looking to find any incentives that I'm missing, as well as identify any opportunities for how I can encourage API designers to generate machine readable API definitions in whatever format they desire.


The Role Of Scraping In API Deployment

Scraping has been something I've done since I first started working on the web. Sometimes you just need some data or a piece of content that isn't available in a machine readable format, and the only way to get it is to scrape it off a web page.
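
Before getting to the tools, here is a minimal sketch of the kind of scrape-to-machine-readable script I'm talking about. The URL and CSS selectors are hypothetical, and it assumes the request and cheerio npm modules:

```javascript
// A minimal scraping sketch. The URL and selectors are hypothetical,
// and this assumes the request and cheerio npm modules are installed.
var request = require('request');
var cheerio = require('cheerio');

request('http://example.com/events', function (err, response, body) {
  if (err) { return console.error(err); }
  var $ = cheerio.load(body);
  var events = [];
  // Pull each listing off the page into a simple JSON record.
  $('.event').each(function () {
    events.push({
      title: $(this).find('h2').text().trim(),
      date: $(this).find('.date').text().trim()
    });
  });
  console.log(JSON.stringify(events, null, 2));
});
```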

Scraping is widespread, but something very few individuals or companies will admit to doing. Just like writing scripts for pulling data from APIs, I write a lot of scripts that pull content and data from websites and RSS feeds. Even though I tend to write my own scripts for scraping, I've been closely watching the new breed of scraping tools like ScraperWiki:

ScraperWiki

ScraperWiki is a web-based platform for collaboratively building programs to extract and analyze public (online) data, in a wiki-like fashion. "Scraper" refers to screen scrapers, programs that extract data from websites. "Wiki" means that any user with programming experience can create or edit such programs for extracting new data, or for analyzing existing datasets. The main use of the website is providing a place for programmers and journalists to collaborate on analyzing public data.

I was first attracted to ScraperWiki as a way to harvest Tweets, and was further interested by their web and PDF extraction tools. ScraperWiki has been around for a while, founded back in 2010, and recently a new wave of scraping tools has emerged:

import.io

Import.io turns the web into a database, releasing the vast potential of data trapped in websites. It allows you to identify a website, select the data, and treat it as a table in your database, in effect transforming the data into a row and column format. You can then add more websites to your data set, the same as adding more rows, and query in real-time to access the data.

Kimono

Kimono is a way to turn websites into structured APIs from your browser in seconds. You don’t need to write any code or install any software to extract data with Kimono. The easiest way to use Kimono is to add our bookmarklet to your browser’s bookmark bar. Then go to the website you want to get data from and click the bookmarklet. Select the data you want and Kimono does the rest.

Kimono and Import.io provide scraping tools for anyone, even non-developers, to scrape content from web pages, but also allow you to deploy an API from the content. While it is easy to deploy APIs using data and content from the other scraping providers I track on, this new breed of scraping services focuses on API deployment as the end-goal.

At API Strategy & Practice in Amsterdam, the final panel of the event was called "toward 1 million APIs", and scraping came up as one possible way that we will get to APIs at this scale. Sometimes the stewards or owners of data just don't have the resources to deploy APIs, and the only way to deploy an API will be to scrape data and content and publish it as a web API--either internally, or externally by a 3rd party.

I have a research site setup to keep track of scraping news I come across, as well as any companies and tools I discover. Beyond ScraperWiki, Kimono and Import.io, I'm watching these additional scraping services:

Alchemy

The product of over 50 person years of engineering effort, AlchemyAPI is a text mining platform providing the most comprehensive set of semantic analysis capabilities in the natural language processing field. Used over 3 billion times every month, AlchemyAPI enables customers to perform large-scale social media monitoring, target advertisements more effectively, track influencers and sentiment within the media, automate content aggregation and recommendation, make more accurate stock trading decisions, enhance business and government intelligence systems, and create smarter applications and services.

Common Crawl

Common Crawl is a non-profit foundation dedicated to providing an open repository of web crawl data that can be accessed and analyzed by everyone. Common Crawl Foundation is a California 501(c)(3) registered non-profit founded by Gil Elbaz with the goal of democratizing access to web information by producing and maintaining an open repository of web crawl data that is universally accessible and analyzable.

ConvExtra

Convextra allows you to collect valuable data from the internet and represents it in an easy-to-use CSV format for further utilization.

PageMunch

Page Munch is a simple API that allows you to turn webpages into rich, structured JSON. Easily extract photos, videos, event, author and other metadata from any page on the internet in milliseconds.

PromptCloud

PromptCloud operates on a "Data as a Service" (DaaS) model and deals with large-scale data crawl and extraction, using cutting-edge technologies and cloud computing solutions (Nutch, Hadoop, Lucene, Cassandra, etc). Its proprietary software employs machine learning techniques to extract meaningful information from the web in the desired format. This data could be from reviews, blogs, product catalogs, social sites, travel data—basically anything and everything on the WWW. It's a customized solution over simply being a mass-data crawler, so you only get the data you wish to see. The solution provides both deep crawl and refresh crawl of web pages in a structured format.

Scrapinghub

Scrapinghub is a company that provides web crawling solutions, including a platform for running crawlers, a tool for building scrapers visually, data feed providers (DaaS) and a consulting team to help startups and enterprises build and maintain their web crawling infrastructures.

Screen Scraper

Copying text from a web page. Clicking links. Entering data into forms and submitting. Iterating through search results pages. Downloading files (PDF, MS Word, images, etc.).

Web Scrape Master

Scrape the web without writing code for it, to create value from the sea of data being published over the web. Data is currency. Web Scrape Master provides a very simple API for retrieving scraped data.

If you know of scraping services I don't have listed, or scraping tools that aren't included in my research, please let me know. I think the scraping services that get it right will continue to play a vital role in API deployment, and in getting us to 1M APIs.


Common Building Blocks of Cloud APIs

I've been profiling the API management space for almost four years now, and one of the things I keep track of is the common building blocks of API management. Recently I've pushed into other areas like API design, integration, and payment APIs, trying to understand the common elements providers are using to meet developer needs.

Usually I have to look through the sites of leading companies in the space, like the 38 payment API providers I'm tracking on, to find all the building blocks that make up the space, but when it came to cloud computing it was different. While there are several providers in the space, there is but a single undisputed leader—Amazon Web Services. I was browsing through AWS yesterday and I noticed their new products & solutions menu, which I think has a pretty telling breakdown of the building blocks of cloud APIs.

Compute & Networking

  • Compute - Virtual Servers in the Cloud (Amazon EC2)
  • Auto Scaling - Automatic horizontal scaling service (Auto Scaling)
  • Load Balancing - Automatic load balancing service (Elastic Load Balancing)
  • Virtual Desktops - Virtual Desktops in the Cloud (Amazon WorkSpaces)
  • On-Premise - Isolated Cloud Resources (Amazon VPC)
  • DNS - Scalable Domain Name System (Amazon Route 53)
  • Network - Dedicated Network Connection to AWS (AWS Direct Connect)

Storage & CDN

  • Storage - Scalable Storage in the Cloud (Amazon S3)
  • Bulk Storage - Low-Cost Archive Storage in the Cloud (Amazon Glacier)
  • Storage Volumes - EC2 Block Storage Volumes (Amazon EBS)
  • Data Portability - Large Volume Data Transfer (AWS Import/Export)
  • On-Premise Storage - Integrates on-premises IT environments with Cloud storage (AWS Storage Gateway)
  • Content Delivery Network (CDN) - Global Content Delivery Network (Amazon CloudFront)

Database

  • Relational Database - Managed Relational Database Service for MySQL, Oracle, SQL Server, and PostgreSQL (Amazon RDS)
  • NoSQL Database - Fast, Predictable, Highly-scalable NoSQL data store (Amazon DynamoDB)
  • Data Caching - In-Memory Caching Service (Amazon ElastiCache)
  • Data Warehouse - Fast, Powerful, Fully Managed, Petabyte-scale Data Warehouse Service (Amazon Redshift)

Analytics

  • Hadoop - Hosted Hadoop Framework (Amazon EMR)
  • Real-Time - Real-Time Data Stream Processing (Amazon Kinesis)

Application Services

  • Application Streaming - Low-Latency Application Streaming (Amazon AppStream)
  • Search - Managed Search Service (Amazon CloudSearch)
  • Workflow - Workflow service for coordinating application components (Amazon SWF)
  • Messaging - Message Queue Service (Amazon SQS)
  • Email - Email Sending Service (Amazon SES)
  • Push Notifications - Push Notification Service (Amazon SNS)
  • Payments - API based payment service (Amazon FPS)
  • Media Transcoding - Easy-to-use scalable media transcoding (Amazon Elastic Transcoder)

Deployment & Management

  • Console - Web-Based User Interface (AWS Management Console)
  • Identity and Access - Configurable AWS Access Controls (AWS Identity and Access Management (IAM))
  • Change Tracking - User Activity and Change Tracking (AWS CloudTrail)
  • Monitoring - Resource and Application Monitoring (Amazon CloudWatch)
  • Containers - AWS Application Container (AWS Elastic Beanstalk)
  • Templates - Templates for AWS Resource Creation (AWS CloudFormation)
  • DevOps - DevOps Application Management Services (AWS OpsWorks)
  • Security - Hardware-based Key Storage for Regulatory Compliance (AWS CloudHSM)

The reason I look at these spaces in this way is to better understand the common services that API providers are offering, the ones that are really making developers' lives easier. Assembling a list of the common building blocks allows me to look at the raw ingredients that make things work, and not get hung up on just companies and their products.

There is a lot to be learned from API pioneers like Amazon, and I think this list of building blocks provides a lot of insight into what API driven resources are truly making the Internet operate in 2014.


A World Where Every Camera Is Connected To The Internet Via APIs

I look at a lot of APIs--some are crap, some make sense, a few are interesting, and every great once in a while you see an API that you know will be one of the next big API platforms. I'm reviewing one such API: Evercam.io.

I know that Evercam.io will be big, because it bridges an increasingly ubiquitous technology—the camera. Whether it's for home or commercial usage, Internet connected cameras represent low hanging fruit for applying proven API techniques.

Evercam.io was born out of experience working at a cloud CCTV company, where the Evercam team realized the opportunity was about becoming a developer platform that enabled any developer to interact with potentially hundreds of types of cameras, while also applying modern API techniques to the world of security and webcams.

Cameras + OAuth 2.0 + API + App Store = Evercam.io

Storage
When it comes to video, the obvious API usage involves storage of the video, still photos, audio, and logs generated by a video camera. The cloud opens up the ability to scale storage to match whatever needs the camera owner may have. Video can potentially be disk heavy, something that is perfectly suited for the cloud, as long as someone is willing to pay for the storage.

Connectivity
Cameras are just one node in this fast growing world of Internet connected devices that you may find at home or in the business. The ability to sync up cameras and still photos with point of sale (POS) activity, sensors, or even other cloud applications is significant. Zapier and IFTTT like actions, or seamless reporting and analytics across systems, will open up with API deployment.

Events
Evercam.io introduces events as a layer between the camera and users, allowing developers to define custom or scheduled events based upon what happens in a video. Examples include sending an MMS when the camera receives an SMS from its owner, or taking a picture of a storefront or dining area each hour or during peak hours. Video is all about a sequence of events, and adding an API layer allows for an unlimited amount of slicing and dicing, and custom-defined events that are meaningful to camera owners.
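
To make the scheduled flavor of this concrete, here is a minimal sketch of the hourly storefront example as a plain PHP script run from cron; the snapshot URL is a hypothetical placeholder, not Evercam.io's actual endpoint.

    <?php
    // A hypothetical sketch of a scheduled event: run hourly from cron to capture
    // a storefront still. The snapshot URL is a placeholder, not Evercam.io's API.
    // crontab entry: 0 * * * * php /var/scripts/storefront-snapshot.php
    $snapshotUrl = 'https://api.example-cameras.com/cameras/storefront/snapshot.jpg';

    $image = file_get_contents($snapshotUrl);
    if ($image === false) {
        error_log('Snapshot failed at ' . date('c'));
        exit(1);
    }

    file_put_contents('/var/camera/storefront-' . date('Y-m-d-H') . '.jpg', $image);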

Logging
One of the significant benefits of an API layer between cameras and their access layers is the ability to log operations. Camera availability, access logs, events, actions, and basically any interaction with a camera can potentially be logged. When used for security, this API layer expands on the existing definition of what a security camera is used for.
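
As a rough sketch of what such a logging layer might capture, here is a simple PHP function that an API proxy could call on every camera interaction; the fields are my own assumptions about what is worth recording.

    <?php
    // A minimal sketch: append one line per camera interaction to an access log.
    // The fields are my own assumptions about what is worth capturing.
    function logCameraAccess($cameraId, $appId, $action, $status)
    {
        $entry = array(
            'timestamp' => date('c'),
            'camera'    => $cameraId,
            'app'       => $appId,
            'action'    => $action, // e.g. snapshot, stream, configure
            'status'    => $status, // HTTP status returned to the caller
        );
        file_put_contents('/var/log/camera-access.log', json_encode($entry) . "\n", FILE_APPEND);
    }

    // Example: record a successful snapshot request.
    logCameraAccess('storefront-1', 'app-1234', 'snapshot', 200);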

Marketplace
Any successful platform needs a marketplace, where developers can showcase the applications they've engineered on top of the platform, and even generate revenue through application sales and add-ons. Evercam.io has a marketplace out of the gate, modeled after the Chrome Web Store and Force.com, which charges developers 30% of the revenue generated via the apps they publish to the marketplace.

Security
Deployment of an API for one or many cameras provides a single point of security for all devices. This is only as good as Evercam.io's platform security, and only time will tell how solid that is. This one element will make or break platforms like Evercam, and is the area that will keep me up at night when thinking about API driven cameras.

Ok, now I've laid out some of what I feel are the key things that make Evercam.io significant. I think Evercam.io will be big because it represents a fairly obvious target for applying APIs, and the deployment of cameras, as well as the use of video, is only going to grow--exponentially. I don't just think Evercam.io will be big because of the technological and business opportunities, I think it will be huge because of the political implications.

Our homes and automobiles will be increasingly wired with cameras, businesses will be operated, managed and secured through cameras, and governments will increasingly monitor citizens via Internet connected video devices. Camera APIs will be one of the most influential, yet silent players in our personal and professional lives, from here on forward—there is no escaping it.

Honestly all of this has me worried, but I've done my due diligence on the Evercam.io team, and feel like there isn't a better crew I'd like to see help lead in this potentially frightening new world of Internet connected cameras. I try to help lead the API space by shining a light on the positive in the space, while calling out potentially negative practices. I think the Evercam.io team reflects these values, and I'm interested in seeing how they handle this potentially volatile aspect of our online world.

It can be easy to freak out over some of the potential for exploitation via a platform like Evercam.io, but after reading their business plan, and looking through their site, you see examples like the agricultural scale that uses a camera and the Evercam.io API to take pictures of grain deliveries, providing the pictures as part of the grain sale process. These are everyday uses that will have significant implications for the economy and become a regular part of business operations.

After spending some time researching Evercam.io, and thinking about the world of Internet connected cameras, I’m intrigued. I’m interested in seeing what solutions get developed on the platform, how security is handled, and what issues arise in different scenarios when you connect cameras to the Internet, apply APIs and start building applications and logging tools that operate in this new API defined space.

I predict the Evercam.io ecosystem will grow rapidly, and be an interesting, and potentially scary, place to watch our world be connected to the Internet via billions of tiny cameras that track on every aspect of our personal, business, and increasingly very public lives.


Tracking On Data.json Deployment Across Federal Agencies

I'm tracking on the evolution of Executive Order 13642 from last May, which was the White House directive to make open and machine readable the new default for government information. The piece that I'm tracking on specifically right now is around the OMB Memorandum M-13-13 Open Data Policy--Managing Information as an Asset, in which one of the items requires agencies to publish a data.json file providing a machine-readable inventory of each agency's public data assets.
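
For reference, here is a minimal sketch of what a single dataset entry in an agency's data.json might look like, loosely following the Project Open Data schema; all of the values are made up for illustration.

    {
        "title": "Campground Locations",
        "description": "Locations of all public campgrounds managed by the agency.",
        "keyword": ["recreation", "campgrounds"],
        "modified": "2013-10-01",
        "publisher": "Department of Example",
        "accessLevel": "public",
        "identifier": "DOE-2013-001",
        "distribution": [
            {
                "accessURL": "http://www.example.gov/data/campgrounds.csv",
                "format": "text/csv"
            }
        ]
    }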

Much like the tracking I did around the digital strategy, I've stood up a monitoring script that I got from Philip Ashlock's Github, which I will be running daily to track which agencies have published their data.json in anticipation of the November 30th deadline. A handful of agencies already have their data.json file up, while others show a green check, but in reality their HTTP status codes are incorrect, as I've talked about before.
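
For anyone who wants to roll their own version of this check, here is a simplified sketch of the kind of request the monitoring performs, recording the actual HTTP status code for each agency's /data.json; the agency list is truncated for illustration, and this is not Philip's script itself.

    <?php
    // A simplified sketch of the nightly check: request each agency's data.json
    // and record the actual HTTP status code, since a page can look present while
    // returning the wrong status. Agency list truncated for illustration.
    $agencies = array('www.nasa.gov', 'www.energy.gov', 'www.epa.gov');

    foreach ($agencies as $domain) {
        $ch = curl_init('http://' . $domain . '/data.json');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_NOBODY, true); // HEAD-style check, skip the body
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        echo $domain . ' => ' . $status . "\n"; // only a real 200 counts as published
    }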

I'll re-run this script nightly, keep an eye on which agencies publish their data.json, and highlight what types of data sets they've made available. I think in reality, the challenges of taking inventory of open data and getting it published will prevent many agencies from making the deadline, something that was only made worse by the government shutdown in October.

Even with these challenges, I'm hopeful that agencies will surprise us and publish some amazing stuff.


Secure API Deployment From MySQL, JSON and Google Spreadsheets With 3Scale

I'm doing a lot more API deployments from dead simple data sources since I started working in the federal government. As part of these efforts I'm working to put together a simple toolkit that newbies to the API world can use to rapidly deploy APIs as well.

A couple of weeks ago I worked through the simple, open API implementations, and this week I want to show how to secure access to the API by requiring an AppID and AppKey, which will allow you to track on who has access to the API.

I'm using 3Scale API management infrastructure to secure the demos. 3Scale has a free base offering that allows anyone to get up and running with API key requirements, analytics, and other essentials with very little investment.
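
Under the hood, the key check boils down to a single call to 3Scale's authorization endpoint before your API does any real work. Here is a rough sketch in PHP, based on 3Scale's app_id / app_key authentication pattern; treat the exact parameters as my assumption rather than gospel.

    <?php
    // A rough sketch of the AppID/AppKey check against 3Scale's authrep endpoint,
    // run before the API does any real work. The parameter names reflect 3Scale's
    // app_id/app_key authentication pattern as I understand it.
    function authorizeRequest($appId, $appKey)
    {
        $url = 'https://su1.3scale.net/transactions/authrep.xml'
             . '?provider_key=YOUR_PROVIDER_KEY'
             . '&app_id=' . urlencode($appId)
             . '&app_key=' . urlencode($appKey)
             . '&usage[hits]=1';

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        return $status == 200; // 3Scale returns a 200 when the keys are valid
    }

    if (!authorizeRequest($_GET['app_id'], $_GET['app_key'])) {
        header('HTTP/1.1 403 Forbidden');
        exit('Invalid AppID or AppKey');
    }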

Currently I have four separate deployment blueprints done:

All of these samples are in PHP and use the Slim PHP REST framework. They are meant to be working examples that you can use to seed your own API deployment.

You can find the entire working repository, including the Slim framework, at Github.


API Deployment From MySQL, JSON, Github and Google Spreadsheets

I'm doing a lot more API deployments from dead simple data sources since I started working in the federal government. As part of these efforts I'm working to put together a simple toolkit that newbies to the API world can use to rapidly deploy APIs as well.

Currently I have four separate deployment blueprints done:

All of these samples are in PHP and use the Slim PHP REST framework. They are meant to be working examples that you can use to seed your own API deployment, along the lines of the sketch below.
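
To give a sense of the pattern, here is a stripped-down sketch of what the MySQL blueprint boils down to: a single Slim route that reads rows from a table and returns them as JSON; the table, field, and credential values are placeholders, not the actual blueprint code.

    <?php
    // A stripped-down sketch of the MySQL blueprint: one Slim route that reads
    // rows from a table and returns them as JSON. Table, field, and credential
    // values are placeholders, not the actual blueprint code.
    require 'Slim/Slim.php';
    \Slim\Slim::registerAutoloader();

    $app = new \Slim\Slim();

    $app->get('/items', function () use ($app) {
        $db = new PDO('mysql:host=localhost;dbname=api', 'user', 'password');
        $rows = $db->query('SELECT id, name FROM items')->fetchAll(PDO::FETCH_ASSOC);

        $app->contentType('application/json');
        echo json_encode($rows);
    });

    $app->run();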

I'm also including these in my government API workshop at #APIStrat this week, hoping to get other people equipped with the necessary skills and tools they need to get APIs in the wild.

You can find the entire working repository, including the Slim framework, at Github.


If there is an API deployment related story you'd like me to know about, you can submit a Github issue for this research project and I will consider adding it as part of my research.