In this tutorial, we will build an AutoML machine learning pipeline that takes in any dataset and uses transfer learning with hyperparameter optimization to build the best neural network.

Transfer learning is a machine learning method where a model developed for a task is then used as the starting point for a model on a second task.

This approach is commonly used in computer vision tasks, especially when using popular pre-built networks like VGG, ResNet, and more. It leverages the knowledge captured while training those models, a process that likely took weeks. This way, it helps achieve great results even when using small datasets.

Step #1: Training

In this tutorial, we will use Keras as it already provides easy access to popular pre-built models. We will use VGG16, ResNet50, and InceptionV3 as our default models, along with the capability to quickly iterate on and visualize each model.

import argparse
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Flatten, Dropout
from keras.models import Model
from keras.optimizers import Adam
from keras.callbacks import TensorBoard

parser = argparse.ArgumentParser(description='set input arguments')

parser.add_argument('--height', action="store", dest='height', type=int, default=300)
parser.add_argument('--width', action="store", dest='width', type=int, default=300)
parser.add_argument('--datadir', action="store", dest='datadir', type=str, default="/data/")

parser.add_argument('--base_model', action="store", dest='base_model', type=str, default="ResNet50")
parser.add_argument('--epochs', action="store", dest='epochs', type=int, default=10)
parser.add_argument('--dropout', action="store", dest='dropout', type=float, default=0.5)
parser.add_argument('--batch_size', action="store", dest='batch_size', type=int, default=128)
parser.add_argument('--hidden', action="store", dest='hidden', type=int, default=512)
parser.add_argument('--learning_rate', action="store", dest='learning_rate', type=float, default=0.0001)
parser.add_argument('--fc_layers', action="store", dest='fc_layers', nargs='+', type=int, default=[1024,1024])
parser.add_argument('--classes', nargs='+')

args = parser.parse_args()
height = args.height
width = args.width
datadir = args.datadir

epochs        = args.epochs
batch_size    = args.batch_size
hidden_nodes  = args.hidden
dropout       = args.dropout
learning_rate = args.learning_rate
class_list = args.classes
base_model = args.base_model
fc_layers = args.fc_layers

num_train_images = 10000

if base_model == "ResNet50":
    from keras.applications.resnet50 import ResNet50, preprocess_input

    # include_top=False drops the original classifier head, since we
    # attach our own fully-connected layers below
    base_model = ResNet50(weights='imagenet',
                          include_top=False,
                          input_shape=(height, width, 3))
elif base_model == "VGG16":
    from keras.applications.vgg16 import VGG16, preprocess_input

    base_model = VGG16(weights='imagenet',
                       include_top=False,
                       input_shape=(height, width, 3))

elif base_model == "InceptionV3":
    from keras.applications.inception_v3 import InceptionV3, preprocess_input

    base_model = InceptionV3(weights='imagenet',
                             include_top=False,
                             input_shape=(height, width, 3))
else:
    raise ValueError("Unsupported base model: " + base_model)

# DataGen

train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

train_generator = train_datagen.flow_from_directory(
    directory=datadir + "/train/",
    target_size=(height, width),
    batch_size=batch_size,
    classes=class_list)

test_generator = train_datagen.flow_from_directory(
    directory=datadir + "/test/",
    target_size=(height, width),
    batch_size=batch_size,
    classes=class_list)

def build_finetune_model(base_model, dropout, fc_layers, num_classes):
    # Freeze the pre-trained layers so only the new head is trained
    for layer in base_model.layers:
        layer.trainable = False

    x = base_model.output
    x = Flatten()(x)
    for fc in fc_layers:
        # New FC layer, random init
        x = Dense(fc, activation='relu')(x)
        x = Dropout(dropout)(x)

    # New softmax layer
    predictions = Dense(num_classes, activation='softmax')(x)
    finetune_model = Model(inputs=base_model.input, outputs=predictions)

    return finetune_model

finetune_model = build_finetune_model(base_model,
                                      dropout=dropout,
                                      fc_layers=fc_layers,
                                      num_classes=len(class_list))

adam = Adam(lr=learning_rate)
finetune_model.compile(adam, loss='categorical_crossentropy', metrics=['accuracy'])

# Train
tensorboard = TensorBoard(log_dir='./logs')
history = finetune_model.fit_generator(train_generator, epochs=epochs,
                                       steps_per_epoch=num_train_images // batch_size,
                                       shuffle=True, verbose=1,
                                       callbacks=[tensorboard])

# Test
test_loss, test_acc = finetune_model.evaluate_generator(test_generator)
print("test loss: {:.4f}, test accuracy: {:.4f}".format(test_loss, test_acc))

# Save model
finetune_model.save("finetune_" + args.base_model + ".h5")

Create a Training Task

We will now create the matching Task template through the UI. This way, we'll be able to share and reuse this component, and quickly build ML pipelines using just drag and drop.

  1. Click Flow
  2. Click Custom Task
  3. Type in the command you'd like to run. In this case: python3
  4. Add the parameters. These could be data parameters (height, width) or model parameters.
    You can also use comma-separated values, and cnvrg will automatically test all combinations, similarly to Grid Search
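The expansion of comma-separated values into a full grid can be sketched in plain Python. This is an illustration of the idea, not cnvrg's actual implementation; `expand_grid` is a hypothetical helper name:

```python
from itertools import product

def expand_grid(params):
    """Expand comma-separated parameter values into every combination,
    similar to how a grid search is built from the task parameters."""
    keys = list(params)
    values = [str(params[k]).split(",") for k in keys]
    return [dict(zip(keys, combo)) for combo in product(*values)]

# Two learning rates x two batch sizes -> four experiments
runs = expand_grid({"learning_rate": "0.0001,0.001", "batch_size": "64,128"})
```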

Step #2: Validation

The next step in the pipeline is to validate the results. Before deploying models, there are a lot of factors to check, whether it's simply model accuracy and performance, or more complex tests like model bias and fairness. With Flows and the ability to build multi-step pipelines, you can quickly add a validation step.
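A validation step can be as simple as a gate that checks each metric against a minimum threshold before the flow continues. Here is a minimal sketch; `passes_validation` is a hypothetical helper, and real bias or fairness checks would of course go further:

```python
def passes_validation(metrics, thresholds):
    """Return True only if every required metric meets its minimum value.
    A metric missing from `metrics` counts as failing its check."""
    return all(metrics.get(name, 0.0) >= minimum
               for name, minimum in thresholds.items())

# Example: require at least 85% accuracy before deploying
ok = passes_validation({"accuracy": 0.91}, {"accuracy": 0.85})
```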

Accessing previous task's parameters and metrics

Tags: When a task starts, cnvrg will check if there is a task that ran before it. If there is, cnvrg will add the previous task's tags to the current task's environment variables as:
CNVRG_TASK_NAME_TAG_KEY=TAG_VALUE. For example, if your previous task was named Validator and one of its tags was accuracy=0.6, you could access it via CNVRG_VALIDATOR_ACCURACY.
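In Python, such a tag can be read from the environment like any other variable. A small sketch, assuming the CNVRG_TASK_NAME_TAG_KEY naming pattern described above (`read_previous_tag` is a hypothetical helper, not part of the cnvrg SDK):

```python
import os

def read_previous_tag(task_name, tag_key, default="0"):
    """Read a tag exposed by the previous task as an environment
    variable, e.g. CNVRG_VALIDATOR_ACCURACY for task 'Validator'."""
    var = "CNVRG_{}_{}".format(task_name.upper(), tag_key.upper())
    return float(os.environ.get(var, default))
```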

Artifacts: As with tags, if there is a task preceding the current one, the current task will clone the project and then download the previous task's artifacts to the exact same location. If you use the cnvrg default docker image, it will be under /home/ds/notebooks. That way, your current task will be able to read or load the artifacts from the same path as your code.
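Locating an artifact left by the previous task then amounts to joining that directory with a filename. A minimal sketch, assuming the default-image path mentioned above (`find_artifact` is a hypothetical helper; adjust `ARTIFACTS_DIR` for a custom image):

```python
import os

# Assumption: the cnvrg default docker image downloads artifacts here
ARTIFACTS_DIR = "/home/ds/notebooks"

def find_artifact(filename, base_dir=ARTIFACTS_DIR):
    """Return the full path of an artifact left by the previous task,
    or None if it was not downloaded."""
    path = os.path.join(base_dir, filename)
    return path if os.path.exists(path) else None
```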

Step #3: Connecting the dots & Running

Connecting the dots

Flows work as a DAG (directed acyclic graph), which means tasks run sequentially according to their dependencies. You can use flows to build complex pipelines, but in this case we will use just two tasks: Train and Validation.


Once the flow is ready to run, simply click the blue Run button on the top right. You will then be prompted with a confirmation window, where cnvrg will calculate how many experiments are needed to complete the flow (including hyperparameter optimization and grid search).

Step #4: Analyzing and Sharing Results

Every run of a flow is automatically captured. This way, you can track performance, accuracy, results, data versions, code versions, environment changes, and basically everything else.

To access results, go to the Experiments tab in your project.
