AI | Feb 28, 2017

How to Implement Machine Learning in Your Apps

Jonathan Williamson and Andrea Curo

Machine learning (ML) has never been easier to pick up, yet many developers and companies are still reluctant to adopt it. There is a persistent belief that only big data scientists with doctorates and top-tier mathematical skills can understand how to use machine learning, which is not the case at all. If you can make a REST call, you can use machine learning. We recently attended the Big Android BBQ, a conference on Android development, where Margaret Maynard-Reid gave a talk called “Making Android Apps With Intelligence.” It was a great introduction to the current machine learning and artificial intelligence offerings in the Android ecosystem. From her talk, we learned just how simple integrating machine learning into your applications can be.


At a basic level, machine learning describes the process of teaching a computer to perform a task by feeding it input data sets and letting a program find patterns and connections in the data over time. Feeding in these data sets is called “training,” and once the model is fully trained, it can be given new data and make seemingly intelligent decisions or classifications on its own.
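The training-then-prediction loop can be sketched with a toy example. The following nearest-centroid classifier is entirely illustrative (the data, labels, and function names are ours, not from any of the services below): it “trains” by averaging labeled points, then classifies points it has never seen by proximity.

```python
# Toy illustration of "training": learn one average point (centroid) per
# label from example data, then classify new points by the closest centroid.

def train(examples):
    """examples: list of ((x, y), label) pairs -> {label: centroid}"""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def classify(model, point):
    """Return the label whose centroid is closest to `point`."""
    x, y = point
    return min(model, key=lambda lbl: (model[lbl][0] - x) ** 2
                                      + (model[lbl][1] - y) ** 2)

# "Training" on a small, made-up data set...
model = train([((1, 1), "cat"), ((2, 1), "cat"),
               ((8, 9), "dog"), ((9, 8), "dog")])

# ...then making decisions about points the model has never seen.
print(classify(model, (1.5, 2)))  # near the "cat" cluster -> cat
print(classify(model, (8, 8)))    # near the "dog" cluster -> dog
```

Real services train far more sophisticated models, but the workflow — feed labeled examples, then query with new data — is the same.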

Machine learning is a very hot topic in the mobile development community as companies like Apple, Google, Microsoft, and Amazon release products with voice recognition and intelligent suggestions such as Siri, Google Assistant, Cortana, and Amazon Alexa. These products use machine learning to understand human speech and text and perform helpful actions. Other products, like the Nest learning thermostat, use machine learning to analyze user behavior and automatically learn each user’s preferences.

The barrier to entry for machine learning is getting lower and lower. Several companies have built fully trained machine learning APIs that you can begin using right away.



The first group of machine learning services discussed in the talk is built into the Google Play Services SDK. This means any Android developer can make use of these services in their apps for free today. Maynard-Reid gave an example and demo of one such service, Google’s Mobile Vision API, which allows developers to use the Android camera to detect faces, scan barcodes, and recognize text.



Maynard-Reid showed an example from the documentation where the Android team created an app that tracks faces and overlays googly eyes. This API could be quite useful for social apps (think Instagram filters), handheld inventory apps, and translation apps. iOS also offers a similar face detection library, so you can bring platform parity to your applications.


The next group of services discussed in the talk is accessible via REST API calls. Like the Android Mobile Vision API, these services are pre-trained and ready to use for hundreds of interesting intelligence tasks. There are many of these REST services to choose from, each with varying pricing models (including free options). The following services are great to know when getting started:


Google’s Machine Learning Platform includes Translation, Speech Recognition, Natural Language Processing, and Job Listing APIs, in addition to a REST version of the Vision API mentioned in the previous section. To start using the Google Machine Learning Platform, all you need is a Google Cloud Platform account. With this account you can sign up for the free trial, which gives you a $300 credit that you can use across the Google Cloud Platform over six months.

The pricing structure varies per API. For example, the Translate API charges $20 per 1 million characters when you send a total of 0–1.5 billion characters in a month. The Vision API charges anywhere between $0.60 and $1.50 per unit detected (e.g., a face or a label), depending on the total units processed per month. After your account is set up, you will need to obtain an API key, which you will use to authenticate your requests. With that done, you can start using the APIs.
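As a quick sanity check on that pricing, the Translate API math works out as follows. The helper function below is ours, purely for illustration, using the rate quoted above:

```python
# Rough monthly cost estimate for the Translate API at the rate quoted
# above: $20 per 1 million characters (0-1.5B characters/month tier).
TRANSLATE_RATE_PER_MILLION = 20.00

def translate_cost(characters):
    """Estimated monthly Translate API cost in USD for `characters` sent."""
    return characters / 1_000_000 * TRANSLATE_RATE_PER_MILLION

# e.g., translating ten thousand 500-character product descriptions
# is 5,000,000 characters in total:
print(translate_cost(10_000 * 500))  # -> 100.0 (i.e., $100)
```

Running estimates like this against your expected traffic is worth doing before committing to any of these services.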

The REST Vision API supports multiple request types, including label detection, text detection, and image properties, to name a few. The following request, adapted from the Google Vision documentation, asks the API to classify (label) an image:



{
  "requests": [
    {
      "features": [
        {
          "type": "LABEL_DETECTION"
        }
      ],
      "image": {
        "source": {
          "gcsImageUri": "gs://bucket/demo-image.jpg"
        }
      }
    }
  ]
}

The API response for this call includes a list of possible classifications and their likelihood.
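Parsing such a response is straightforward. The sketch below follows the documented response shape for label detection, but the concrete labels and scores are invented for illustration:

```python
import json

# An illustrative LABEL_DETECTION response in the documented shape;
# the labels and scores below are made up for this example.
response_body = json.loads("""
{
  "responses": [
    {
      "labelAnnotations": [
        {"description": "dog", "score": 0.96},
        {"description": "mammal", "score": 0.92}
      ]
    }
  ]
}
""")

# Each annotation pairs a possible classification with its likelihood.
for annotation in response_body["responses"][0]["labelAnnotations"]:
    print(f'{annotation["description"]}: {annotation["score"]:.2f}')
```

In practice you would typically take the highest-scoring label, or filter annotations below a confidence threshold your application is comfortable with.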

Since obtaining an API key is fairly straightforward, Google Cloud Platform is an ideal service for any newcomer to pick up. However, several of Google’s APIs are currently in beta and could be deprecated or changed in backward-incompatible ways at a later date. For this reason, developers should take care before relying on certain APIs, such as the Speech API and custom Machine Learning APIs, for real-time applications while they are in beta.


IBM’s Watson Services have offerings similar to the Google Cloud Platform in the machine learning field. To get started with Watson Services you can include one of their SDKs for Java, iOS, Python, or Node.js in your project, or you can call the API directly with an API key. To use the SDKs in your application you will need a Bluemix account to obtain service credentials. The following examples, pulled from the Watson GitHub repository and the Watson Visual Recognition documentation respectively, classify an image using the Java SDK and by calling the API directly:

VisualRecognition service = new VisualRecognition(VisualRecognition.VERSION_DATE_2016_05_20);
service.setApiKey("<api-key>");
System.out.println("Classify an image");
ClassifyImagesOptions options = new ClassifyImagesOptions.Builder()
    .images(new File("src/test/resources/visual_recognition/car.png"))
    .build();
VisualClassification result = service.classify(options).execute();
System.out.println(result);

curl -X POST -F "images_file=@fruitbowl.jpg" "{api-key}&version=2016-05-20"

The response includes a list of possible classifications and their likelihood.

Watson gives the user a lot of flexibility when it comes to classification. In the example above, the demo image was sent as multipart/form-data, but we could also have sent a URL to the image or a .zip file containing multiple images. This endpoint also accepts a JSON file listing specific classes to check against, helping to eliminate outliers from the response. The Watson Visual Recognition API lets you train models (classifiers) from the same API, unlike Google Cloud Platform, which requires the use of their Machine Learning API to create models. All Watson Services are available at the enterprise level. We would recommend Watson Services for enterprise applications, but not for students, because they are more costly and require using Bluemix.
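The input options above differ only in how the request is assembled. Here is a hedged sketch of building those request pieces for the classify endpoint. The function name is ours; `images_file` appears in the curl example earlier, while the `url` and `parameters` field names are assumptions based on the 2016-era v3 Watson docs — check the current documentation before relying on them.

```python
# Sketch of assembling the three request styles described above for a
# multipart POST to Watson's classify endpoint. Field names are partly
# assumed (see lead-in); no network call is made here.

def build_classify_request(image_path=None, image_url=None, zip_path=None,
                           parameters_json=None):
    """Return (files, params) for a hypothetical POST to /v3/classify."""
    files, params = {}, {"version": "2016-05-20"}
    if image_path:            # single image as multipart/form-data
        files["images_file"] = open(image_path, "rb")
    if zip_path:              # batch: a .zip of images in one upload
        files["images_file"] = open(zip_path, "rb")
    if image_url:             # or just point Watson at a hosted image
        params["url"] = image_url
    if parameters_json:       # optional JSON narrowing the classes checked
        files["parameters"] = parameters_json
    return files, params

# e.g., classifying a hosted image by URL (hypothetical URL):
files, params = build_classify_request(image_url="https://example.com/fruitbowl.jpg")
```

Whichever style you choose, the response shape is the same, so client code downstream of the request does not need to change.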


Google Cloud Platform and Watson Services are just two of several services that offer machine learning APIs. Microsoft Cognitive Services and HPE Haven OnDemand have similar offerings and price ranges. Choosing a machine learning REST service comes down to finding one that suits your needs. Watson gives you more flexibility from a single API; however, Bluemix’s 30-day trial limits you to a single month before requiring you to upgrade to one of their payment plans. With Google Cloud Platform you are given a six-month credit that you can spend as you want across any of their APIs. Google Cloud Platform is friendlier to newcomers, with a handful of services with straightforward applications and a potentially longer trial period, but Google does not offer production-level stability or the same number of services that Watson does.

These powerful services can be incredible assets for any machine learning project in mobile apps, web apps, or other enterprise applications. Plus, they come with the benefit of a comprehensive developer community and pre-built models that can draw on training data from others currently using the software.


The final group of machine learning services consists of platforms that help you build, train, and host your own predictive models. These platforms are the least opinionated and the most flexible. The downside is the somewhat overwhelming landscape of different algorithms and configurations needed to fine-tune a machine learning application from scratch. However, if you already have machine learning programming experience, several of these services offer high-powered, efficient computing resources for data analysis and prediction. Examples of these “DIY” machine learning platforms include Google’s Machine Learning Platform and Amazon’s DSSTNE.

Google’s Machine Learning Platform and Amazon DSSTNE come pre-configured for hosting on Google Cloud Platform and Amazon Web Services, respectively. By using one of these systems, businesses can move the computational load of machine learning off their normal hosting infrastructure and into the cloud. This is especially useful for companies for which machine learning is not at the core of their business offering.

If the above services do not appeal to you, other alternatives offer similar technologies at varying price points, many with free trials.


If you’re looking for concrete guidance on where to get started with machine learning, the following resources are recommended:


The current offerings for machine learning and artificial intelligence are vast and feature-rich. Even so, many companies and developers are hesitant to delve into the world of machine learning, perhaps paralyzed by choice. Our advice at Credera is to simply get started with an incremental improvement to an existing application. For example, consider adding voice recognition to a mobile app, or use Amazon’s prediction APIs to build an intelligent product recommendation page for an ecommerce flow. It’s amazing how quickly machine learning can be adapted to fit any industry or situation. If you need help integrating machine learning or other artificial intelligence into your apps, we would love to help. Start a conversation with us.
