
APIs on the Quick

A quick look at JSON Server, Parse, AnyPresence JustAPIs, DreamFactory & StrongLoop LoopBack.

At Universal Mind, virtually every web or mobile application we build consumes some sort of API in order to provide information to users. Here are three cases in which quickly assembled APIs help with our development process:

  • Prototyping - With many of our clients, we often engage in "Vision Prototypes" before building out the entire solution. These are working prototypes that often need some base-level services implemented in order to function.
  • Mock Services - In other cases, project scheduling requirements sometimes have us building the web and mobile applications before the "real" API services are ready for integration. In this case, we need a set of "mock" services to mimic the production services that will eventually be available.
  • Simple Solutions - Some small projects may have API needs that are quite simple and need to be built out rapidly with minimal investment.

In all three cases, the goal is to stand up an API layer quickly and focus most of the time on building the web or mobile client applications. In the past, these API layers were often hand coded, which can divert project time toward coding the API layer rather than the client-side solution we're trying to build.

Over time, a series of solutions has emerged that enable the rapid development of APIs for consumption by web and mobile applications. Most of these tools provide a REST API to perform typical CRUD operations against some sort of data store. The purpose of this article is to present several solutions we have used in the past. Some are suited to quickly standing up an API layer to support the development of client applications but are not robust enough to support a production application. Others require slightly more initial investment but can ultimately form the basis of a production solution.

JSON Server

This has to be the fastest way to get an API up and running. You'll have your first services up and running in under a minute. This free tool, powered by Node, is available on GitHub. JSON Server does just what the title implies: it serves JSON documents and allows you to create, read, update, and delete them (CRUD operations) with the expected REST HTTP verbs of POST, GET, PUT, and DELETE. You don't need to create a database schema beforehand. Simply doing an HTTP POST to an endpoint like http://localhost:3000/customers will create documents in the "customers" collection. Each document can have an ID, and you can reference single documents by their ID. So we can read customer number 5 by doing a GET to http://localhost:3000/customers/5, or update that customer by doing a PUT to the same endpoint. The URLs also allow you to supply query parameters to control which documents are returned and how they are sorted. For instance, the following query might return the top 10 customers in NY state with the largest outstanding balance: http://localhost:3000/customers?state=NY&_sort=balance&_order=DESC&_limit=10

In most cases you'll just use JSON Server as a means to create a quick REST API to work with collections of documents. However, JSON Server can also work as a Node Express module alongside other Express middleware. This lets you add more routing, validation, and the ability to modify responses returned from JSON Server.

With JSON Server, the data itself is stored in a simple JSON file called db.json that you can populate yourself just by editing the file. You can also seed db.json by writing some code to load it with random data. I've used this feature along with Faker.js to generate thousands of "employee" records for a test project. JSON Server is powered by lowdb, a simple in-memory JavaScript database. As such, JSON Server isn't designed to deal with large amounts of data or very many concurrent users.

JSON Server isn't intended for production use. There is the scale issue mentioned above, and it does not have much in the way of security features to protect the data. I've limited my use of this tool to standing up quick services for prototyping or acting as the backend server while I'm learning new client-side development tools. Depending on your needs, it might also work well for low-volume applications, including those running as an embedded application. For demo and prototyping purposes, JSON Server should probably be in any client-side developer's toolbox. To help you get started, there is also a free tutorial video available.

Parse

NOTE: This article was written prior to Parse's announcement on 1/28/16 that the service will be discontinued as of January 28, 2017. I've decided to keep the content in this article because Parse has open sourced the Parse Server code, and thus the content below can still be leveraged for running Parse Server as a means to host an API. However, you can no longer rely on Parse to host your content for the long term. You'll need to run Parse Server on Heroku, Amazon AWS, IBM Bluemix, or another provider of your choosing.

Parse is often thought of as a BAAS (Backend As A Service) provider for many web and mobile applications. With a Parse solution, your application is always hosted on Parse's infrastructure. Until the availability of Parse Server, it was not possible to run Parse on your own servers or on a different cloud provider. Many developers who are focused on quickly building solutions and do not wish to assume the added burden of managing the hosting of their application turn to a BAAS like Parse. These BAAS providers are also often a great way to stand up an API quickly.

Parse stores your data in MongoDB but abstracts away the details of their Mongo implementation. As such you never interact with MongoDB directly. Instead, you use the Parse dashboard to define a number of “classes” which act like collections in MongoDB. Parse also provides a data browser to let you interact with the data that has been stored.


Once the collections have been established, you can use the Parse REST API to interact with them. The REST API lets you perform the typical CRUD operations against the classes you've defined in Parse. In addition, the REST API allows you to furnish additional query string parameters to query, limit, and change the order of the results returned. For example, if we wanted to return the 10 most recently added Employees in NY state, an HTTP GET could be issued like this: https://api.parse.com/1/classes/Employee?where={"state":"NY"}&order=-createdAt&limit=10
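Because the where clause is itself JSON, it must be URL-encoded when you compose the request programmatically. A small sketch (the base URL follows Parse's hosted REST endpoint; adjust it if you run your own Parse Server):

```javascript
// Compose a Parse REST query URL with an encoded "where" clause.
var where = JSON.stringify({ state: 'NY' });
var url = 'https://api.parse.com/1/classes/Employee' +
  '?where=' + encodeURIComponent(where) +
  '&order=-createdAt&limit=10';

console.log(url);
```

The same URL could then be fetched with any HTTP client, passing your application's keys in the request headers.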

In addition to the REST API, Parse also has several client platform-specific SDKs covering iOS, Android, and JavaScript. However, if you are looking to use Parse simply as a platform to temporarily hold your data while waiting for a full production API to be available, you might be better off sticking with the REST API, as the Parse SDKs are specific to Parse and won't be usable against other endpoints.

Often you'll want to add some custom logic to the API layer. Parse supports this in one of two ways. You can write "Cloud Code" functions and "triggers" that are called when specific events occur on your classes. Parse provides the ability to call your code before and after save and delete operations. The "afterSave" trigger is fired when a save is issued against a class. To register your code to be called when something is saved, you do something like this:


Parse.Cloud.afterSave("Todo", function (request) {
  var toDoTitle = request.object.get("title");
  console.log("Just updated: " + toDoTitle);
});

Parse also supports Cloud Code functions which can be invoked via the REST API. This lets you perform multiple Parse operations with one HTTP POST request. In your HTTP POST, you can pass a JSON object to a Cloud Code function, which can then validate the request and perform multiple operations. In addition to creating triggers and functions to run on Parse, you can also stand up a complete dynamic website on Parse using the Node Express framework. However, at this point you'll be writing a fair amount of code, and the notion of "quick" may be lost. Still, it's nice to have the option to expand into a full Node app if requirements warrant. Just be aware that if you're a Node developer, the current Parse implementation is still using Express 3 and does not support NPM modules. Express 3 is now officially outdated, and most developers would prefer to use Express 4. Given these limitations, most Node developers will likely wish to host their applications elsewhere, and Parse has established a partnership with Heroku for this purpose.

Parse has been a good choice for small and medium-size solutions where you do not wish to host the API yourself and where your data can be housed within the Parse environment. Parse has offered a pretty generous free tier of service, which will remain in effect until the shutdown date of 1/28/2017. This includes 20GB of storage and support for 30 requests per second. Higher transaction volumes and data storage levels can be quoted using their online pricing calculator. In a blog post a short while ago, we showed an IoT solution for BBQ management which uses Parse to store temperature measurements and send push notifications. Parse worked out great for that project, but some projects may require the API be hosted elsewhere or access data that is not in Parse. Or perhaps you are storing data that must meet specific security requirements that Parse is not certified for. The recently released Parse Server will now require you to host the server and an associated MongoDB database yourself. The long-term viability of Parse Server is still unknown, as Facebook has just released that code to the community, and it will take time to see if a community moves forward with this open source solution. In the meantime, there are other solutions to consider as well…


AnyPresence JustAPIs

JustAPIs is a solution from AnyPresence, which also supplies a full suite of products well beyond the API space. The JustAPIs solution provides a locally executing API gateway process that you can configure to reach remote endpoints or return static data. JustAPIs can work with endpoints that support MongoDB, PostgreSQL, Microsoft SQL Server, and MySQL. It can also work with REST and SOAP services.

JustAPIs is not a cloud solution though you can host the product on several cloud providers. For development, you can download a complete packaged solution that runs on Linux, OSX, or Windows. When running as a development server, your API configurations are stored in an embedded SQLite database. For production deployments, multiple servers can share a configuration by storing the configuration in a PostgreSQL database. The product is free for use in a limited development capacity, but production implementations require a paid license. The free licensing terms are not as generous as the other offerings mentioned in this article.

JustAPIs takes a somewhat different approach to building APIs for accessing things like a database. It provides a browser-based admin tool that allows you to define API endpoints and then write small amounts of JavaScript to validate incoming requests and route them to "remote endpoints". These remote endpoints can be a database backend or another HTTP REST service. JustAPIs also provides the ability to orchestrate multiple backend requests from a single inbound API request. This feature permits you to create a more efficient API for mobile users by reducing the number of network requests a client app must make to carry out a given operation.

The screenshot below shows the definition of an API endpoint that retrieves customer records from a MongoDB instance.


The code you supply must first define the request to the remote endpoint. The following sample shows how we might treat an optional query string parameter to specify which state we’d like to obtain customers from.

var query = {};
if ("state" in request.query) {
  log("You requested: " + request.query["state"]);
  query = {state: request.query["state"]};
}
var mongoRequest = new AP.Mongo.Request();
mongoRequest.find('customers', query);

Notice how the above code creates the query and ships it off to the configured remote endpoint to be fulfilled. JustAPIs contains various objects that let you work with both SQL and NoSQL databases. When a response from that endpoint is received, your second JavaScript code snippet is called:

log('Received response');
response.statusCode = 200;
response.body = JSON.stringify(;

In this case, we’re taking the data received from MongoDB and simply returning the entire response to the client. However, you can do additional post-processing of the response if desired. A key point with these code examples is that you must write some code in order for the queries to be executed. It’s necessary to establish code like this for each operation you wish to perform against the backend database. If you have a large number of different tables and operations to support against those tables it may take some time to define all the endpoints.

Speaking of code, while you can supply snippets of JavaScript as we've seen above, all code editing is done within the browser-based environment. As far as I can tell, it's not possible to use alternate editors, NPM, or any JavaScript transpilers with this solution. Debugging is essentially limited to log statements. These limitations are worth considering if you feel your future needs will require more custom code. However, if your needs are fairly straightforward and you're looking for a tool that can serve as an API orchestration layer and get you up and running quickly, JustAPIs might be the right solution. Keep in mind AnyPresence offers more features beyond JustAPIs as part of its full product portfolio, including API management tools, client developer SDKs, and a BAAS offering.


DreamFactory

DreamFactory takes a different approach to building an API for accessing a database. It automatically builds an API for the database tables or collections you register with its tooling. There is no need to write code snippets for each table or collection you wish to expose over a REST API. DreamFactory can run locally or on a cloud hosting provider of your choosing. The product is free of charge, but support and an "enterprise" version are available at additional cost. DreamFactory supports a surprisingly wide array of backend database services, including MongoDB, CouchDB, Amazon DynamoDB, Azure Table Storage, PostgreSQL, MySQL, SQLite, Oracle, MS SQL Server, IBM DB2, and SAP SQL Anywhere. It also provides a REST API for cloud file storage providers like Amazon S3, Azure Blob Storage, and Rackspace Cloud Files. You also have the ability to make calls to other REST APIs, and it supports calls to SOAP web services as well.

To get started, you can download an installer from the DreamFactory site. Note that there are both cloud and local installers. DreamFactory will also host your services free of charge on their site, but with this option, the ability to create custom scripts to augment your API is not available. Once the product is installed, you can launch the admin app from your browser and proceed to the Services tab. This is where connections to backend databases are defined. Using the service panel, it's easy to connect to an existing server. DreamFactory also bundles MySQL and MongoDB with the product install, so you don't even have to stand up a separate database server.

Once the service has been defined, DreamFactory automatically uses Swagger to generate API documentation for the new service. The generated docs will look like this:


Upon clicking a row, you'll be presented with a form that allows you to specify which collection you'd like to obtain data from, along with filters, limits, and the ordering of the results.


After completing the form, you can hit the "Try it out" button, and the query results will be displayed in your browser along with the complete REST API call you would need to issue to obtain the same result from your application.


Note the structure of the URL that is used to interact with the MongoDB collection (table): http://localhost/api/v2/mongodb/_table/customer?filter=state='NY'&limit=10

All collections are accessed through a _table reference in the path, with all the filter, order-by, and limit options specified as query string parameters. To create a new customer entry, we would issue an HTTP POST to the same _table/customer endpoint with a body like this:
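DreamFactory wraps records in a top-level resource array for both requests and responses. A sketch of what such a POST body might contain (the customer field names are assumed for illustration):

```javascript
// Illustrative POST body for creating a customer record in DreamFactory.
// Records are wrapped in a "resource" array; field names are invented here.
var body = JSON.stringify({
  resource: [
    { fname: 'Jane', lname: 'Smith', state: 'NY' }
  ]
});

console.log(body);
```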


If the document has been added to the MongoDB collection, you'll get an HTTP 200 response with a JSON body that looks something like this:

{
  "resource": [
    {
      "_id": "56991ce82454c96b46d63af1"
    }
  ]
}
In addition to supporting the normal CRUD operations against SQL tables and NoSQL collections, the generated API supports the ability to query the database schema and even alter the schema to create new tables or change existing ones. All API access is controlled through roles so you can limit what operations each authenticated user can perform against the backend.

As with the JustAPIs solution, you have the ability to write pre- and post-processing logic that wraps your API requests. Using this, you can perform validation or otherwise manipulate the response prior to sending it back to the requesting app. These scripts are written in JavaScript and are managed through the browser admin interface. As with JustAPIs, you must use the browser-based editor to maintain your scripts. There is no ability to use NPM, transpilers, or source-level debuggers. On the plus side, DreamFactory ships with Lodash (a superset of Underscore.js), and you can also supply additional JS files that you can require() into your scripts. DreamFactory does provide the ability to write custom service endpoints, which can be used to orchestrate service calls, but the JavaScript for this is also managed in the browser admin interface and seems best suited for small scripts.

DreamFactory is a very quick way to get an API up and running against an existing data source. Its script support provides some ability to perform server-side validation and orchestration; however, the current JavaScript editing and module management are not at the level most server-side JavaScript developers would expect. As such, DreamFactory is a good choice for developers who want to rapidly stand up an API for data sources but don't require a lot of custom code for their API endpoints.

StrongLoop LoopBack

LoopBack takes a more code-centric approach to building APIs. Its goal is to accelerate the development of Node applications that act as an API for databases. A LoopBack application is a Node app and can run wherever you are able to deploy Node applications. Much of your development work with LoopBack is in JavaScript code and JSON configuration files rather than the browser-based IDE approach used by the other tools. StrongLoop provides a command line tool, called slc, which can be used to generate the scaffolding for your application. The code generation is done via the popular Yeoman generator used to scaffold projects in many JavaScript frameworks.

LoopBack is available free of charge for deployment in both development and production environments. StrongLoop has additional products available for purchase via a subscription model that offer monitoring, profiling, and additional data source connectors. For example, the connectors for Oracle, SQL Server, SOAP, ATG, and SharePoint are available only via a paid subscription. Connectors for data sources like MongoDB, PostgreSQL, and MySQL are free of charge. There are also several community-supported data source connectors available for things like CouchDB, ArangoDB, and SQLite.

Installing StrongLoop is as easy as entering one NPM command: npm install -g strongloop. Now you're ready to use the application generator to create a new application via the command slc loopback. This starts a generator that asks you a series of questions and then creates the LoopBack application in the directory of your choosing. As mentioned before, LoopBack is a framework for exposing backend data sources via a Node application. LoopBack exposes each table, or collection, in your backend data source as a "model", and must be configured to expose each table as a model. The model defines the fields of the underlying data source that are exposed through the API. One way to do this is via the model generator, which is also part of the slc command line interface: slc loopback:model.

This command will prompt you with a series of questions to name the model and then specify each of the properties (fields) of that model. These would correspond to the column names of the fields in your backend data source table.
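The answers you give end up in the model's JSON definition file. A minimal sketch of what a generated /common/models/customer.json might contain (property names here are assumed to match the examples in this article):

```json
{
  "name": "Customer",
  "base": "PersistedModel",
  "properties": {
    "fname": { "type": "string" },
    "lname": { "type": "string" },
    "state": { "type": "string" },
    "balance": { "type": "number" }
  }
}
```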

You'll notice that once a model has been completed, the /common/models directory will contain a JSON and a JS file for each model you create. They are simple enough to edit in any text editor, so if you prefer, you can create and update the files yourself rather than using the command line tools. StrongLoop also offers a tool called Arc, which provides a browser-based UI that lets you edit the schemas for your models in addition to managing StrongLoop processes, displaying metrics, etc. Here's a screenshot of the composer module editing the model for an example application:


Another way to build models is through a process called discovery. Arc provides a discovery tool for building models from the schemas found in traditional SQL databases like MySQL or PostgreSQL. This discovery tool does not work for NoSQL databases like MongoDB, because collections in NoSQL databases typically do not have a fixed schema. In that case, I've generally found it best to build the model yourself based on your understanding of the document structure in the collection you wish to base the model upon.

If you are in a situation where the database tables don't exist yet, LoopBack can help you as well. A process called "auto-migration" will create the tables or collections on the backend to match your model definitions.

Once the models have been defined and your data sources have been configured you are ready to test out the REST API that LoopBack provides. Since your application is a node app you can simply start it from the command line using node.

This will start not only the REST API but also an API explorer app that lets you interact with the methods exposed on your models. LoopBack uses Swagger to generate the API documentation UI. It looks very much like the documentation generated by DreamFactory, except this documentation is specific to each model (table) you have defined. You can access the API explorer at http://localhost:3000/explorer.


Just like with DreamFactory, there is a "Try It Out" button that allows us to call the API and view the response. The screenshot below shows the contents of our Customers MongoDB collection.


Results can be filtered, ordered, and limited to a specific number of rows as well. The REST API syntax for this is a bit unusual, and more complex AND/OR conditions can be tedious to specify directly via the REST API. Here's a simple query that returns the customers in NY state ordered by last name:

http://localhost:3000/api/customers?filter[where][state]=NY&filter[order]=lname DESC&filter[limit]=10
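The bracketed filter syntax is easy to get wrong by hand. A small helper like the one below (hypothetical, not part of LoopBack; its SDKs do this for you) can build the query string from a plain object:

```javascript
// Build a LoopBack-style REST filter query string from a filter object.
// Handles one level of nesting, e.g. where: { state: 'NY' }.
function buildFilterQuery(filter) {
  var parts = [];
  Object.keys(filter).forEach(function (key) {
    var value = filter[key];
    if (value !== null && typeof value === 'object') {
      Object.keys(value).forEach(function (sub) {
        parts.push('filter[' + key + '][' + sub + ']=' +
          encodeURIComponent(value[sub]));
      });
    } else {
      parts.push('filter[' + key + ']=' + encodeURIComponent(value));
    }
  });
  return parts.join('&');
}

var qs = buildFilterQuery({
  where: { state: 'NY' },
  order: 'lname DESC',
  limit: 10
});
console.log(qs);
// filter[where][state]=NY&filter[order]=lname%20DESC&filter[limit]=10
```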

One feature I really like about LoopBack is that you can easily augment the models to expose additional remote methods on the model. This allows for the creation of custom methods that can simplify the parameters callers of your API need to pass in order to retrieve the results they desire. To help write your remote methods LoopBack provides a Node API object for each of your models. This API provides a convenient way to call the underlying data source and return results. The snippet of JS code below shows how the Node API can be used to query a model for customers in a given state, order the result set by the last name, and return the first 10 results.

function getByState(state, cb) {
  return Customer.find({
    where: { state: state },
    order: 'lname DESC',
    limit: 10
  }, cb);
}

LoopBack further requires that you register each remote method you create so that it knows how to route the incoming request path to your JavaScript function. That's done using the remoteMethod() call, as follows:

Customer.remoteMethod('getByState', {
  http: {
    path: '/getByState/:state',
    verb: 'get'
  },
  accepts: {
    arg: 'state',
    type: 'string',
    http: {
      source: 'path'
    }
  },
  returns: [{
    arg: 'result',
    type: 'object'
  }]
});
The above code registers a path of /getByState/:state, where ":state" is a parameter that will be passed to your function. This would be exposed as an endpoint like http://localhost:3000/api/customers/getByState/NY.

The remote methods may serve as a way for you to orchestrate several API calls into one. It is also possible to add "hooks" which are triggered before and after a model is read, created, or updated. These allow you to perform validation of the model prior to it being saved and to augment the values returned from the model.

In addition to hooks, the LoopBack framework is based on the very popular Express framework, so the concepts of Express can be applied here as well. One of these is middleware, which allows you to insert functions that execute during the lifecycle of a request and can be used for authentication, validation, logging, and error handling. Another nice thing about being based on Express is that your LoopBack application doesn't have to serve responses only through the /api route. You are free to have your application return a web UI as well, just as you would with any other Express application, and now that you've defined your models in LoopBack, you can leverage the Node API they expose from your server-side web generation code. LoopBack also provides a client directory where static HTML, CSS, and JS files can be placed; these are served when users reach the server directly, like http://localhost:3000/. In addition, the Express router is exposed, so you can define additional routes if you'd like the server side to process specific requests beyond the built-in /api path.

Since LoopBack is a Node application, all the standard debugging tools for working with Node applications apply here as well. StrongLoop already maintains the popular node-inspector source-level debugger for Node applications on GitHub, and they've integrated it into the StrongLoop command line interface via the command slc debug.

To make client-side development easier, StrongLoop provides SDKs for several client-side technologies. These are focused on abstracting the underlying REST calls over HTTP away from the client-side developer and allowing them to think in terms of model objects similar to those exposed when defining the REST API. The currently supported SDKs include:

  • iOS
  • Android
  • Angular
  • Xamarin

More examples of using LoopBack can be found on their examples page. These quickly show how models are defined, how relationships can be expressed between models, and how ad-hoc queries are run from the Node API and the various client SDKs.

Security in LoopBack is managed through users, permissions, and roles, much like the other products we've seen. LoopBack provides its own user model with facilities to support forgotten passwords, etc. It is also possible to integrate third-party means of authentication via the popular Passport authentication middleware for Node.

LoopBack provides a lot of capability but does require a bit more initial investment on your part. However, the end result is a flexible API that can grow with your needs into a full production solution. StrongLoop was recently acquired by IBM and is receiving increased investment in the product. I expect to see significant growth in LoopBack during 2016.


In this article, we've seen API solutions spanning from a simple turnkey command line tool like JSON Server to a full-featured application server like LoopBack. The choice of tool really depends on your needs. For very quick demos and prototypes, JSON Server may be totally adequate, but it's not a solution that can really be leveraged for a production application. Parse has been a good solution if you are fine with hosting your application on Parse and with the way code and data are managed there; however, Parse's recent shutdown announcement shows that some risk remains in choosing any cloud-based provider. The choice between JustAPIs and DreamFactory is more about what you expect from the API tool. JustAPIs is not as much about automatically building a data access layer for you as it is about orchestrating and abstracting underlying remote services from the users of your API. DreamFactory takes the approach of quickly establishing a REST API for all the tables in your data source. LoopBack requires a somewhat larger initial investment in configuration, but is likely to be chosen by developers already familiar with Node. It allows them to leverage skills they already have and can be customized to fit nearly any application requirement well into production.

There is one thing to keep in mind when using these tools. The quickly built CRUD abstractions over single tables that these tools expose via their REST APIs may not be the best way to retrieve or update data in many situations. We often need to perform transactions that involve reading data from several tables and then ultimately updating one or more other tables. Writing client-side logic to perform all these operations over a REST API that only supports single-table CRUD operations has several disadvantages. Among them:

  • Duplication of logic: If you've got more than one client app (say native iOS, native Android, and web), you have to duplicate the transaction logic in every client implementation. This adds to development time, as the work has to be repeated by different developers skilled in each client-side technology.
  • Consistency of implementation: Care must be taken to ensure any code changes for a given transaction are consistently implemented across your various client apps, and that those changes are deployed in a similar timeframe as well. Multiple implementations mean you have to QA this functionality on each client platform to ensure consistency.
  • Network traffic: Making all those calls over HTTP can really slow things down. This is particularly true over cellular networks, where latency is likely to be quite high.

For vision prototypes and demos, the above considerations may not be a big factor, but we have found them to be a significant issue in production applications. Your choice of API tooling for any production solution should take these issues into account. It is also worth noting that there are enterprise API management platforms like Apigee and MuleSoft that are primarily focused on large-scale production scenarios. These provide much more capability around security, usage metrics, developer API key provisioning, etc. They're probably not what you need for a quick prototype, but well worth considering for a production solution. One thing is for certain: you have plenty of technology choices on the road to selecting a tool. Safe travels.