deepof.models.get_TCN_decoder
- deepof.models.get_TCN_decoder(input_shape: tuple, latent_dim: int, conv_filters: int = 64, kernel_size: int = 4, conv_stacks: int = 1, conv_dilations: tuple = (8, 4, 2, 1), padding: str = 'causal', use_skip_connections: bool = True, dropout_rate: int = 0, activation: str = 'relu')
Return a Temporal Convolutional Network (TCN) decoder.
Builds a neural network that decodes a latent space into a sequence of motion tracking instances. Each layer is a residual block combining dilated causal convolutions with a skip connection. See the following paper for more details: https://arxiv.org/pdf/1803.01271.pdf.
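The core operation inside each residual block is a dilated causal convolution: the output at time t depends only on inputs at time t and earlier, with the dilation factor spacing out the taps. A minimal pure-Python sketch of that operation (an illustration only, not the library's implementation, which uses Keras layers):

```python
def causal_dilated_conv1d(x, kernel, dilation=1):
    """Apply a 1D convolution with 'causal' padding and the given dilation.

    The output at index t uses only x[t], x[t - dilation],
    x[t - 2 * dilation], ... so no future information leaks in.
    Inputs before the start of the sequence are treated as zero.
    """
    k = len(kernel)
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(kernel):
            # The last kernel tap aligns with the current time step t;
            # earlier taps reach back by multiples of the dilation.
            idx = t - (k - 1 - i) * dilation
            if idx >= 0:
                acc += w * x[idx]
        out.append(acc)
    return out


x = [1.0, 2.0, 3.0, 4.0, 5.0]
print(causal_dilated_conv1d(x, [1.0, 1.0], dilation=1))  # [1.0, 3.0, 5.0, 7.0, 9.0]
print(causal_dilated_conv1d(x, [1.0, 1.0], dilation=2))  # [1.0, 2.0, 4.0, 6.0, 8.0]
```

With dilation 2, each output sums the current sample and the one two steps back, which is how stacking increasing dilations lets a TCN cover long time spans with few layers.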
- Parameters:
input_shape – shape of the input data
latent_dim – dimensionality of the latent space
conv_filters – number of filters in the TCN layers
kernel_size – size of the convolutional kernels
conv_stacks – number of stacks of dilated convolutions (each stack applies every dilation in conv_dilations)
conv_dilations – tuple of dilation factors, one per convolutional layer in each stack
padding – padding mode for the TCN layers ('causal' prevents the model from attending to future time steps)
use_skip_connections – whether to use skip connections between TCN layers
dropout_rate – dropout rate for the TCN layers
activation – activation function for the TCN layers
- Returns:
a Keras model that can be trained to decode a latent space into a sequence of motion tracking instances using temporal convolutional networks.
- Return type:
keras.Model
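The parameters kernel_size, conv_stacks, and conv_dilations jointly determine how far back in time the decoder can "see". Assuming the common keras-tcn layout in which each residual block contains two dilated convolutions (an assumption about the internals, not stated above), the receptive field can be computed as:

```python
def tcn_receptive_field(kernel_size, conv_stacks, conv_dilations):
    """Receptive field (in time steps) of a TCN.

    Assumes two dilated convolutions per residual block, as in
    keras-tcn: each block with dilation d widens the receptive
    field by 2 * (kernel_size - 1) * d time steps.
    """
    return 1 + 2 * (kernel_size - 1) * conv_stacks * sum(conv_dilations)


# With the defaults documented above:
# kernel_size=4, conv_stacks=1, conv_dilations=(8, 4, 2, 1)
print(tcn_receptive_field(4, 1, (8, 4, 2, 1)))  # 91
```

So with the default settings each decoded time step can draw on roughly 91 preceding frames; doubling the dilations or adding a stack extends that window without deepening the kernels.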