This page is part of the documentation for the Machine Learning Database.
It is a static snapshot of a Notebook which you can play with interactively by trying MLDB online.
This tutorial shows how we can use MLDB's TensorFlow integration to do image recognition. TensorFlow is Google's open source deep learning library.
We will load the Inception-v3 model to generate descriptive labels for an image. The Inception model is a deep convolutional neural network that was trained on the ImageNet Large Scale Visual Recognition Challenge dataset, where the task was to classify images into 1000 classes.
To offer context and a basis for comparison, this notebook is inspired by TensorFlow's Image Recognition tutorial.
The notebook cells below use pymldb's Connection class to make REST API calls. You can check out the Using pymldb Tutorial for more details.
from pymldb import Connection
mldb = Connection()  # create a connection to the local MLDB instance
To load a pre-trained TensorFlow graph in MLDB, we use the tensorflow.graph function type.
Below, we start by creating two functions. First, the fetcher function allows us to fetch a binary blob from a remote URL. Second, the inception function will be used to execute the trained network; we parameterize it in the following way:

- modelFileUrl: the archive prefix and # separator allow us to load a file inside a zip archive. (more details)
- inputs: the fetch function is called with the url parameter. When we call the inception function later on, the image located at the specified URL will be downloaded and passed to the graph.
- outputs: the softmax layer is the last layer in the network, so we specify that one.

inceptionUrl = 'http://public.mldb.ai/models/inception_dec_2015.zip'
print mldb.put('/v1/functions/fetch', {
"type": 'fetcher',
"params": {}
})
print mldb.put('/v1/functions/inception', {
"type": 'tensorflow.graph',
"params": {
"modelFileUrl": 'archive+' + inceptionUrl + '#tensorflow_inception_graph.pb',
"inputs": 'fetch({url})[content] AS "DecodeJpeg/contents"',
"outputs": "softmax"
}
})
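As an optional check, you can inspect the function we just created with a plain GET on its REST endpoint; the exact fields returned depend on your MLDB version:

# fetch the configuration and status of the inception function
print mldb.get('/v1/functions/inception')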
To demonstrate how to run the network on an image, we re-use the same image as in the TensorFlow tutorial, the picture of Admiral Grace Hopper:
The following query applies the inception function to the URL of her picture:
amazingGrace = "https://www.tensorflow.org/versions/r0.7/images/grace_hopper.jpg"
mldb.query("SELECT inception({url: '%s'}) as *" % amazingGrace)
This is great! With only 3 REST calls we were able to run a deep neural network on an arbitrary image off the internet.
Not only is this function available in SQL queries within MLDB, but, like all MLDB functions, it is also available as a REST endpoint. This means that when we created the inception function above, we essentially created a real-time API running the Inception model that any external service or device can call to get predictions back.
The following REST call demonstrates how this looks:
result = mldb.get('/v1/functions/inception/application', input={"url": amazingGrace})
print result.url + '\n\n' + repr(result) + '\n'
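Because this is plain HTTP, the same call can be made from outside the notebook by any client. As a sketch, assuming MLDB is reachable at http://localhost (the same default host that pymldb's Connection uses), a caller using the requests library could query the endpoint like this:

import json
import requests

# call the inception function's REST endpoint directly over HTTP;
# the input row is passed as a JSON-encoded "input" query parameter
response = requests.get(
    "http://localhost/v1/functions/inception/application",
    params={"input": json.dumps({"url": amazingGrace})}
)
print response.status_code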
import numpy as np
print "Shape:"
print np.array(result.json()["output"]["softmax"]["val"]).shape
Running the network gives us a 1008-dimensional vector. This is because the network was originally trained on the ImageNet categories and we created the inception function to return the softmax layer, which is the output of the model.
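As a quick illustration, and assuming the response layout shown above, we can look at which of those positions received the largest softmax values before any labels are attached:

# flatten the softmax output and list the indices of the five largest activations
vals = np.array(result.json()["output"]["softmax"]["val"]).flatten()
print np.argsort(vals)[::-1][:5]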
To allow us to interpret the predictions the network makes, we can import the ImageNet labels in an MLDB dataset like this:
print mldb.put("/v1/procedures/imagenet_labels_importer", {
"type": "import.text",
"params": {
"dataFileUrl": 'archive+' + inceptionUrl + '#imagenet_comp_graph_label_strings.txt',
"outputDataset": {"id": "imagenet_labels", "type": "sparse.mutable"},
"headers": ["label"],
"named": "lineNumber() -1",
"offset": 1,
"runOnCreation": True
}
})
The contents of the dataset look like this:
mldb.query("SELECT * FROM imagenet_labels LIMIT 5")
The labels line up with the softmax layer that we extract from the network. By joining the output of the network with the imagenet_labels dataset, we can essentially label the output of the network.
The following query scores the image just as before, then transposes the output and joins the result to the labels dataset. We then sort on the score to keep only the 10 highest values:
mldb.query("""
SELECT scores.pred as score
NAMED imagenet_labels.label
FROM transpose(
(
SELECT flatten(inception({url: '%s'})[softmax]) as *
NAMED 'pred'
)
) AS scores
LEFT JOIN imagenet_labels ON
imagenet_labels.rowName() = scores.rowName()
ORDER BY score DESC
LIMIT 10
""" % amazingGrace)
You can now look at the Transfer Learning with TensorFlow demo.