
Universal Sentence Encoder with Keras and TensorFlow: The Absolute Basics

I'm an engineer. I appreciate the academic, the theoretical, the abstract, but I like to make things that work. I like to watch my creations come to life and be frustrated at having to support them.

We'll be walking through a very basic deep learning classifier that can give an idea of the full lifecycle. It isn't enough to simply build a tool. You must deploy it. You must babysit it and watch it grow. Any product or application has a lifespan and things are different at each stage.

The Code

If we were going in order I'd start with setting up your environment. But I'm in a hurry and you're in a hurry. Here's the meat & potatoes front and center.

    
import numpy as np
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras.utils import to_categorical
import tensorflow_text  # registers the ops the multilingual encoder needs
import tensorflow_hub as hub

embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder-multilingual/3")

label2index = { 'other': 0, 'aerospace': 1, 'animals': 2, 'body_parts': 3 }
labels = list(label2index.keys())

data = [
    [ 'aerospace engineering is fun', 'aerospace' ],
    [ 'I like working on spaceships', 'aerospace' ],
    [ 'Jupiter can be mined', 'aerospace' ],
    [ 'NASA wants to pay us for moon rocks', 'aerospace' ],
    [ 'Jet fuel can melt space beams', 'aerospace' ],
    [ 'Engines will carry us to the stars', 'aerospace' ],
    [ 'Are dogs the best', 'animals' ],
    [ 'Maybe cats are the best (but they are not)', 'animals' ],
    [ 'There is a horse there', 'animals' ],
    [ 'I have a goat', 'animals' ],
    [ 'That pig is wild', 'animals' ],
    [ 'What is your favorite pet', 'animals' ],
    [ 'Is your arm okay?', 'body_parts' ],
    [ 'My legs are tired from biking', 'body_parts' ],
    [ 'Her hair was a tangled mess', 'body_parts' ],
    [ 'My eyes are working just fine thank you', 'body_parts' ],
    [ 'His head is bald', 'body_parts' ],
    [ 'Michelle Obama arms are the gold standard', 'body_parts' ]
]

X = np.array(embed([ row[0] for row in data ]))
Y = to_categorical([ label2index[row[1]] for row in data ])

# create model
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(512,)))
model.add(layers.Dense(len(labels), activation='softmax'))

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()

# train
model.fit(X, Y, epochs=4)

model.save('data/output/example')
        

Lines 1 - 6

These are our imports. You can conceptualize imports as including other code inside of your code. These packages will nearly always be installed with Python's package manager, pip. Installation happens at the system level, meaning installed packages live on your machine and are subsequently available to every Python script on your computer. There are some nifty mechanisms these days that can change what 'machine' and 'computer' mean, but the concept will not change.
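If any of those imports fail, the packages likely aren't installed yet. A rough idea of what the install could look like (exact versions matter; in particular, tensorflow-text generally needs to match your tensorflow version, so check the compatible pairing for your setup):

pip3 install numpy tensorflow tensorflow-hub tensorflow-text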

Line 8: Heavy Lifting

This code uses Keras which uses TensorFlow which uses TensorFlow Hub which uses a pre-trained model to provide sentence-level embeddings. If that sounds like word soup to you, don't worry about it. There's simply too much to unpack in one session. You'll get there.

For now you can conceptualize that we are loading a very neat component that does all the heavy lifting for us. Thanks, Google.
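If you want to peek under the hood, here's a small sanity check you could run once the module is loaded. The sentences are made up; the important part is the shape of what comes back: one 512-dimensional vector per sentence.

vectors = embed(['hello world', 'hola mundo'])
print(vectors.shape)   # expect (2, 512): one 512-dimensional vector per input sentence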

Lines 10 & 11

Training the model requires that we know ahead of time what our ground-truth 'labels' are. I like to explain this as 'buckets'. You can have any number of buckets that say anything you want, so long as you can precisely define what those things are. The problem is more difficult than you think.

Here we have assigned the label 'other' to value 0, 'aerospace' to value 1, 'animals' to 2, and 'body_parts' to 3. The 'keys' are the human-readable labels, while the values are the integers 0, 1, 2, and 3.
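Since we'll eventually need to go the other direction, from the model's integer output back to a human-readable label, a reverse lookup can be handy. This is just an illustrative sketch; the evaluation script later on gets the same effect by indexing into the labels list.

index2label = { v: k for k, v in label2index.items() }
print(index2label[2])   # 'animals'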

Lines 13 - 32: The Data

This is a comically small amount of data, and in the real world you'd never have it all mixed up in the code. Let's consider this data for illustrative purposes only. Your takeaway should be that each row is a pair: an example sentence and its label.
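In practice you'd load those rows from a file instead of hard-coding them. A rough sketch, assuming a hypothetical two-column CSV at data/training.csv with a sentence and its label on each line:

import csv

with open('data/training.csv', newline='') as f:
    data = [ [ row[0], row[1] ] for row in csv.reader(f) ]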

Lines 34 & 35: X & Y

A lot of what you will encounter is convention. By convention X is your set of data points. Are they floats, integers, 1D, 2D, 3D arrays, etc? Who knows - doesn't matter right now. What matters is that X is your data and Y is your labels. Commonly these will be numpy arrays.

The heavy-lifting Universal Sentence Encoder is transforming our human-readable sentences to computer-interpretable values for X. Our labels will be converted to integer representations and 'one-hot encoded' for Y.
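One-hot encoding just means each integer label becomes a vector that is all zeros except for a 1 in that label's position. You can see it for yourself with a throwaway call (the inputs here are made up, not our training labels):

from tensorflow.keras.utils import to_categorical

print(to_categorical([1, 2, 0], num_classes=4))
# [[0. 1. 0. 0.]
#  [0. 0. 1. 0.]
#  [1. 0. 0. 0.]]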

Lines 37 - 40: The Model

This is about as basic as a deep learning model will get. Sequentially, we are saying to pass our input to a densely-connected layer having 64 hidden units.

The output of that passes through a Rectified Linear Unit (ReLU), then through a final densely-connected layer having as many units as we have buckets (officially, 'labels'). That final layer, with its softmax activation, is what projects probabilities over our labels. More on that later.
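If you're curious where the parameter counts in model.summary() come from, you can work them out by hand. This is just arithmetic, not code you need to run:

# First layer: 512 inputs fully connected to 64 units, plus 64 biases
params_dense_1 = 512 * 64 + 64   # 32,832
# Output layer: 64 inputs fully connected to 4 units, plus 4 biases
params_dense_2 = 64 * 4 + 4      # 260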

Lines 42 & 43

These models are all about settings. So many bits & pieces can be configured. I'm not even going to explain what they all are and do. To be honest it's difficult to keep up on them all. The good news is you can learn the basics and those variables will unfold for you later.

43 is much easier to digest. This will print a simple summary to the output.
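As one example of the knobs available, you could swap the string 'adam' for an optimizer object and set its learning rate explicitly. This is purely illustrative; the defaults shown above are fine for this toy example.

from tensorflow.keras.optimizers import Adam

model.compile(optimizer=Adam(learning_rate=0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])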

Line 46: Train

Finally! This is the fun part - training the model! It takes as input our X data and Y known good labels. It will try to close the gap between what it thinks and what we know. The number of epochs is the number of times it iterates over your entire set.

On each pass it says "I thought this example was 'animals'. Was that right?". Then it checks: if it was correct it tries to 'remember' that; if not, it says "I will do better next time".
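fit() accepts a few more arguments you'll bump into quickly. Here's a sketch; with only 18 examples a validation split leaves very little to train on, so treat this as a preview rather than a recommendation:

history = model.fit(X, Y, epochs=20, batch_size=4, validation_split=0.2)
print(history.history['loss'])   # the loss recorded after each epoch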

Line 48: Saving the Model

This is perhaps the most important part of the whole setup. An unsaved model is like a tree falling in the woods and nobody being around to hear it. You spent CPU cycles to train the model, and once it's gone you have to re-train it. Don't let it go to waste - save it to disk for future use.

Hand-Waving

I've done a lot of smoke & mirrors to make the example this compact and try to convey the basics in a small space. There's a lot of ground to cover and frankly I don't want you scared off. It's a very cool field, but unfortunately it still requires getting your hands quite dirty. I believe & hope that will change - perhaps you'll be part of that!

I didn't explain why we hard-coded the value 512, or what would happen if we changed 64 to something else. We really breezed through the model and what it means. Just take it on faith there's a lot going on, and you'll soon see that while this is indeed a very simple model it isn't a particularly accurate one.

Evaluation

I like to break these projects up into two phases: training and evaluation. The training phase is above - let's call it example.py. We'll make another file: example-val.py. The contents are below:

    
import numpy as np
import tensorflow as tf
import tensorflow_text
import tensorflow_hub as hub

embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder-multilingual/3")
model = tf.keras.models.load_model('data/output/example')

label2index = { 'other': 0, 'aerospace': 1, 'animals': 2, 'body_parts': 3 }
labels = list(label2index.keys())

while True:
    print('Type your evaluation phrase below')

    phrase = input()
    if phrase == 'quit' or phrase == 'exit':
        print('Bye!')
        break

    predictions = model.predict(embed([ phrase ]))
    distribution = predictions[0]
    label = labels[np.argmax(distribution)]

    print(f"\t{labels}")
    print(f">\t{distribution}")
    print(f">\t{label}")
    print()
    

Lines 1 - 6: Ya Seent It

Imports & heavy-lifting. You got this!

Line 7

This is us loading the previous model we trained. Now you see how to save and load a TensorFlow model.

Lines 9 & 10: Familiar!

Using exactly the same labels as before we can decode the model's output to a human-readable form.

Lines 12 - 18: REPL

Read. Eval. Print. Loop. This is a basic REPL that allows us a simplistic command-line interface to the model.

Lines 20 - 22: Predictions

This is really what we're here for isn't it? Everything else is simply prerequisite to get to this point. The input to the model must match the format you trained it with. Hence, our same embedder is used - Universal Sentence Encoder.

The model's output is an array with one entry per input you asked it to predict. Each entry is itself an array of probabilities, one per label, in the same order as our label indices. That's a lot to digest, I know.

Line 20 returns this:

    
[
    [ 0.1, 0.5, 0.3, 0.1 ]
]
    

Go ahead and add those up. It will sum to 1. That's what softmax is doing - projecting probabilities over our labels (or classes).

So which one does it think is the appropriate label then? Remember that 'other' is 0, 'aerospace' is 1, 'animals' is 2, and 'body_parts' is 3. It believes our input was an example of 'aerospace'. Fairly confident in the answer at that.

Line 21 then is just taking the distribution from the first example (because we provided only one input). Line 22 is decoding the label given the highest probability in the distribution.
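To make that concrete, here is the decode step applied to the example distribution above (these are the made-up numbers from the snippet, not real model output):

import numpy as np

distribution = [ 0.1, 0.5, 0.3, 0.1 ]
print(np.argmax(distribution))            # 1, the position with the highest probability
print(labels[np.argmax(distribution)])    # 'aerospace'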

Lines 24 - 27

We are simply printing the results to the screen. 24 is a reminder of what the labels are, 25 is showing the distribution, and 26 is the ultimate label predicted given our input. Neat!

Run It

If you're not able to readily run this example don't worry about it. Here's the output from my session:

    
❯ python3 example-val.py
2020-09-23 16:00:06.959841: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fa5b4ddc050 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-09-23 16:00:06.959873: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
Type your evaluation phrase below
My fingers are sore
    ['other', 'aerospace', 'animals', 'body_parts']
>	[0.24144205 0.23775882 0.25318053 0.26761857]
>	body_parts

Type your evaluation phrase below
Vamos al espacio
    ['other', 'aerospace', 'animals', 'body_parts']
>	[0.22809552 0.2762022  0.25897524 0.236727  ]
>	aerospace

Type your evaluation phrase below
Where are the goats?
    ['other', 'aerospace', 'animals', 'body_parts']
>	[0.23615313 0.23608264 0.28547412 0.24229006]
>	animals

Type your evaluation phrase below
quit
Bye!
    

I cherry-picked these examples, but even still you can see that despite the training data saying nothing about 'fingers' belonging to 'body_parts', it was still able to accurately predict the label. If you do any amount of testing you'll very quickly discover the shortcomings. As a matter of fact, the distribution values are so close together that it's pretty obvious this model needs more work.

Done!

Giving only six examples per label and still getting results is ludicrous. I hope you can imagine how this can be applied to larger problems given more data.

Learn to configure an environment suitable for running this example in the next installment: setting up your TensorFlow environment.