deepof.models.GaussianMixtureLatent

class deepof.models.GaussianMixtureLatent(*args, **kwargs)

Gaussian Mixture probabilistic latent space model.

Represents the embedding of motion tracking data as a mixture of Gaussians with a user-specified number of components, each with its own mean, covariance and mixture weight. Implementation based on VaDE (https://arxiv.org/abs/1611.05148) and VaDE-SC (https://openreview.net/forum?id=RQ428ZptQfU).
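
As context for the parameters below (background from the cited VaDE paper, not a statement about the exact deepof parameterization), the latent code z is drawn from a mixture-of-Gaussians prior:

  p(z) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(z \mid \mu_k, \Sigma_k)

where K corresponds to n_components, z has dimensionality latent_dim, and \pi_k, \mu_k, \Sigma_k are the learned mixture weights, means and covariances of the components.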

__init__(input_shape: tuple, n_components: int, latent_dim: int, batch_size: int, kl_warmup: int = 5, kl_annealing_mode: str = 'linear', mc_kl: int = 100, mmd_warmup: int = 15, mmd_annealing_mode: str = 'linear', kmeans_loss: float = 0.0, reg_cluster_variance: bool = False, **kwargs)

Initialize the Gaussian Mixture Latent layer.

Parameters:
  • input_shape (tuple) – shape of the input data.

  • n_components (int) – number of components in the Gaussian mixture.

  • latent_dim (int) – dimensionality of the latent space.

  • batch_size (int) – batch size for training.

  • kl_warmup (int) – number of epochs to warm up the KL divergence.

  • kl_annealing_mode (str) – mode to use for annealing the KL divergence. Must be one of “linear” or “sigmoid”.

  • mc_kl (int) – number of Monte Carlo samples to use for computing the KL divergence.

  • mmd_warmup (int) – number of epochs to warm up the MMD.

  • mmd_annealing_mode (str) – mode to use for annealing the MMD. Must be one of “linear” or “sigmoid”.

  • kmeans_loss (float) – weight of the Gram matrix regularization loss.

  • reg_cluster_variance (bool) – whether to penalize uneven cluster variances in the latent space.

  • **kwargs – keyword arguments passed to the parent class.
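
A minimal instantiation sketch follows. All concrete sizes and the encoder output tensor are hypothetical illustrations, not values taken from the deepof source; only the constructor signature above is given by this page.

  import tensorflow as tf
  from deepof.models import GaussianMixtureLatent

  # Hypothetical sizes, for illustration only.
  batch_size = 64
  encoding_dim = 128

  gm_latent = GaussianMixtureLatent(
      input_shape=(batch_size, encoding_dim),  # shape of the data fed to the layer
      n_components=10,                         # Gaussian mixture components
      latent_dim=8,                            # dimensionality of the latent space
      batch_size=batch_size,
      kl_warmup=5,                             # epochs of KL warm-up
      kl_annealing_mode="linear",
      mmd_warmup=15,
      mmd_annealing_mode="sigmoid",
      kmeans_loss=0.1,                         # weight of the Gram matrix regularization
  )

  # In deepof this layer acts as the latent bottleneck of the unsupervised
  # models; calling it on a batch of encoder outputs yields the latent
  # representation (the exact return values depend on the implementation).
  encoder_output = tf.random.normal((batch_size, encoding_dim))
  latent = gm_latent(encoder_output, training=True)

Note that kl_warmup, mmd_warmup and the corresponding annealing modes only scale the KL and MMD loss terms during the first training epochs; they do not change the architecture itself.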

Methods

__init__(input_shape, n_components, ...[, ...])

Initialize the Gaussian Mixture Latent layer.

add_loss(losses, **kwargs)

Add loss tensor(s), potentially dependent on layer inputs.

add_metric(value[, name])

Adds metric tensor to the layer.

add_update(updates)

Add update op(s), potentially dependent on layer inputs.

add_variable(*args, **kwargs)

Deprecated, do NOT use! Alias for add_weight.

add_weight([name, shape, dtype, ...])

Adds a new variable to the layer.

build(input_shape)

Builds the model based on input shapes received.

build_from_config(config)

call(inputs[, training])

Compute the output of the layer.

compile([optimizer, loss, metrics, ...])

Configures the model for training.

compile_from_config(config)

compute_loss([x, y, y_pred, sample_weight])

Compute the total loss, validate it, and return it.

compute_mask(inputs[, mask])

Computes an output mask tensor.

compute_metrics(x, y, y_pred, sample_weight)

Update metric states and collect all metrics to be returned.

compute_output_shape(input_shape)

Computes the output shape of the layer.

compute_output_signature(input_signature)

Compute the output tensor signature of the layer based on the inputs.

count_params()

Count the total number of scalars composing the weights.

evaluate([x, y, batch_size, verbose, ...])

Returns the loss value & metrics values for the model in test mode.

evaluate_generator(generator[, steps, ...])

Evaluates the model on a data generator.

export(filepath)

Create a SavedModel artifact for inference (e.g. via TF-Serving).

finalize_state()

Finalizes the layer's state after updating layer weights.

fit([x, y, batch_size, epochs, verbose, ...])

Trains the model for a fixed number of epochs (dataset iterations).

fit_generator(generator[, steps_per_epoch, ...])

Fits the model on data yielded batch-by-batch by a Python generator.

from_config(config[, custom_objects])

Creates a layer from its config.

get_build_config()

get_compile_config()

get_config()

Returns the config of the Model.

get_input_at(node_index)

Retrieves the input tensor(s) of a layer at a given node.

get_input_mask_at(node_index)

Retrieves the input mask tensor(s) of a layer at a given node.

get_input_shape_at(node_index)

Retrieves the input shape(s) of a layer at a given node.

get_layer([name, index])

Retrieves a layer based on either its name (unique) or index.

get_metrics_result()

Returns the model's metrics values as a dict.

get_output_at(node_index)

Retrieves the output tensor(s) of a layer at a given node.

get_output_mask_at(node_index)

Retrieves the output mask tensor(s) of a layer at a given node.

get_output_shape_at(node_index)

Retrieves the output shape(s) of a layer at a given node.

get_weight_paths()

Retrieve all the variables and their paths for the model.

get_weights()

Retrieves the weights of the model.

load_weights(filepath[, skip_mismatch, ...])

Loads all layer weights from a saved file.

make_predict_function([force])

Creates a function that executes one step of inference.

make_test_function([force])

Creates a function that executes one step of evaluation.

make_train_function([force])

Creates a function that executes one step of training.

predict(x[, batch_size, verbose, steps, ...])

Generates output predictions for the input samples.

predict_generator(generator[, steps, ...])

Generates predictions for the input samples from a data generator.

predict_on_batch(x)

Returns predictions for a single batch of samples.

predict_step(data)

The logic for one inference step.

reset_metrics()

Resets the state of all the metrics in the model.

reset_states()

save(filepath[, overwrite, save_format])

Saves a model as a TensorFlow SavedModel or HDF5 file.

save_spec([dynamic_batch])

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

save_weights(filepath[, overwrite, ...])

Saves all layer weights.

set_weights(weights)

Sets the weights of the layer, from NumPy arrays.

summary([line_length, positions, print_fn, ...])

Prints a string summary of the network.

test_on_batch(x[, y, sample_weight, ...])

Test the model on a single batch of samples.

test_step(data)

The logic for one evaluation step.

to_json(**kwargs)

Returns a JSON string containing the network configuration.

to_yaml(**kwargs)

Returns a yaml string containing the network configuration.

train_on_batch(x[, y, sample_weight, ...])

Runs a single gradient update on a single batch of data.

train_step(data)

The logic for one training step.

with_name_scope(method)

Decorator to automatically enter the module name scope.

Attributes

activity_regularizer

Optional regularizer function for the output of this layer.

compute_dtype

The dtype of the layer's computations.

distribute_reduction_method

The method employed to reduce per-replica values during training.

distribute_strategy

The tf.distribute.Strategy this model was created under.

dtype

The dtype of the layer weights.

dtype_policy

The dtype policy associated with this layer.

dynamic

Whether the layer is dynamic (eager-only); set in the constructor.

inbound_nodes

Return Functional API nodes upstream of this layer.

input

Retrieves the input tensor(s) of a layer.

input_mask

Retrieves the input mask tensor(s) of a layer.

input_shape

Retrieves the input shape(s) of a layer.

input_spec

InputSpec instance(s) describing the input format for this layer.

jit_compile

Specify whether to compile the model with XLA.

layers

losses

List of losses added using the add_loss() API.

metrics

Return metrics added using compile() or add_metric().

metrics_names

Returns the model's display labels for all outputs.

name

Name of the layer (string), set in the constructor.

name_scope

Returns a tf.name_scope instance for this class.

non_trainable_variables

Sequence of non-trainable variables owned by this module and its submodules.

non_trainable_weights

List of all non-trainable weights tracked by this layer.

outbound_nodes

Return Functional API nodes downstream of this layer.

output

Retrieves the output tensor(s) of a layer.

output_mask

Retrieves the output mask tensor(s) of a layer.

output_shape

Retrieves the output shape(s) of a layer.

run_eagerly

Settable attribute indicating whether the model should run eagerly.

state_updates

Deprecated, do NOT use!

stateful

submodules

Sequence of all sub-modules.

supports_masking

Whether this layer supports computing a mask using compute_mask.

trainable

trainable_variables

Sequence of trainable variables owned by this module and its submodules.

trainable_weights

List of all trainable weights tracked by this layer.

updates

variable_dtype

Alias of Layer.dtype, the dtype of the weights.

variables

Returns the list of all layer variables/weights.

weights

Returns the list of all layer variables/weights.
