deepof.data.TableDict

class deepof.data.TableDict(tabs: Dict, typ: str, arena: str | None = None, arena_dims: array | None = None, animal_ids: List = ('',), center: str | None = None, connectivity: Graph | None = None, polar: bool | None = None, exp_conditions: dict | None = None, propagate_labels: bool = False, propagate_annotations: Dict | bool = False)

Main class for storing a single dataset as a dictionary with individual experiments as keys and pandas.DataFrames as values.

Includes methods for generating training and testing datasets for the supervised and unsupervised models.

__init__(tabs: Dict, typ: str, arena: str | None = None, arena_dims: array | None = None, animal_ids: List = ('',), center: str | None = None, connectivity: Graph | None = None, polar: bool | None = None, exp_conditions: dict | None = None, propagate_labels: bool = False, propagate_annotations: Dict | bool = False)

Store a single dataset as a dictionary with individual experiments as keys and pandas.DataFrames as values.

Includes methods for generating training and testing datasets for the autoencoders.

Parameters:
  • tabs (Dict) – Dictionary of pandas.DataFrames with individual experiments as keys.

  • typ (str) – Type of the dataset. Examples are “coords”, “dists”, and “angles”. For logging purposes only.

  • arena (str) – Type of the arena. Must be one of “circular-autodetect”, “circular-manual”, or “polygon-manual”. Handled internally.

  • arena_dims (np.array) – Dimensions of the arena in mm.

  • animal_ids (list) – List of animal IDs.

  • center (str) – Type of the center. Handled internally.

  • polar (bool) – Whether the dataset is in polar coordinates. Handled internally.

  • exp_conditions (dict) – Dictionary with experiment IDs as keys and experimental conditions as values.

  • propagate_labels (bool) – Whether to propagate phenotypic labels from the original experiments to the transformed dataset.

  • propagate_annotations (Dict) – Dictionary of annotations to propagate. If provided, the supervised annotations of the individual experiments are propagated to the dataset.
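
A minimal usage sketch is given below. In practice, TableDict instances are normally obtained from the accessor methods of a deepof.data.Coordinates object rather than built by hand; the experiment names, column layout, and condition labels used here are purely illustrative.

    import numpy as np
    import pandas as pd
    import deepof.data

    # Two toy "experiments", each a (time x features) table of tracked coordinates.
    # Column names are illustrative only.
    tabs = {
        "experiment_1": pd.DataFrame(
            np.random.rand(100, 4),
            columns=["nose_x", "nose_y", "tail_base_x", "tail_base_y"],
        ),
        "experiment_2": pd.DataFrame(
            np.random.rand(100, 4),
            columns=["nose_x", "nose_y", "tail_base_x", "tail_base_y"],
        ),
    }

    coords = deepof.data.TableDict(
        tabs,
        typ="coords",  # used for logging purposes only
        exp_conditions={"experiment_1": "control", "experiment_2": "treatment"},
    )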

Methods

__init__(tabs, typ[, arena, arena_dims, ...])

Store a single dataset as a dictionary with individual experiments as keys and pandas.DataFrames as values.

clear()

copy()

filter_condition(exp_filters)

Return a subset of the original table_dict object, containing only videos belonging to the specified experimental condition.
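
A hedged sketch, assuming exp_filters is a dictionary mapping an experimental-condition name to the value to keep (both the condition name and the value below are illustrative), given a TableDict named coords:

    # Keep only experiments recorded under the "control" condition.
    control_only = coords.filter_condition({"group": "control"})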

filter_id([selected_id])

Filter a TableDict object to keep only those columns related to the selected id.
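
For example, given a multi-animal TableDict named coords (the animal ID is illustrative):

    # Keep only the columns belonging to animal "B".
    animal_b = coords.filter_id("B")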

filter_videos(keys)

Return a subset of the original table_dict object, containing only the specified keys.
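
For example, given a TableDict named coords (the experiment name is illustrative):

    # Keep only the listed experiments.
    subset = coords.filter_videos(["experiment_1"])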

fromkeys([value])

Create a new dictionary with keys from iterable and values set to value.

get(key[, default])

Return the value for key if key is in the dictionary, else default.

get_training_set(current_table_dict[, ...])

Generate training and test sets as numpy.array objects for model training.

items()

keys()

merge(*args[, ignore_index])

Take a number of table_dict objects and merge them into the current one.
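
A hedged sketch, assuming distances and angles are other TableDict objects indexed by the same experiments (for instance, as returned by the corresponding deepof.data.Coordinates accessors):

    # Concatenate features from several table_dict objects, experiment by experiment.
    merged = coords.merge(distances, angles)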

pca([n_components, kernel])

Return a training set generated from the 2D original data (time x features) and a PCA projection to an n_components space.
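
A hedged sketch; the exact structure of the returned value (the projected training set together with the fitted projector) may differ between deepof versions:

    # Project the flattened (time x features) data onto two principal components.
    pca_result = coords.pca(n_components=2)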

pop(k[,d])

Remove the specified key and return the corresponding value; if the key is not found, return d if given, otherwise raise a KeyError.

popitem()

Remove and return a (key, value) pair as a 2-tuple.

preprocess([automatic_changepoints, ...])

Preprocess the loaded dataset before feeding to unsupervised embedding models.
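
A hedged sketch; only automatic_changepoints is taken from the signature shown above, and the layout of the returned train/test data depends on the deepof version:

    # Build windowed training data for the unsupervised embedding models.
    preprocessed = coords.preprocess(automatic_changepoints=False)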

random_projection([n_components, kernel])

Return a training set generated from the 2D original data (time x features) and a random projection to an n_components space.
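
Analogous to pca above; kernel is left at its default in this sketch:

    # Randomly project the flattened (time x features) data onto two components.
    rproj_result = coords.random_projection(n_components=2)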

setdefault(key[, default])

Insert key with a value of default if key is not in the dictionary.

umap([n_components])

Return a training set generated from the 2D original data (time x features) and a UMAP projection to an n_components space.
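
A hedged sketch, analogous to pca and random_projection above:

    # UMAP embedding of the flattened (time x features) data.
    umap_result = coords.umap(n_components=2)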

update([E, ]**F)

If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].

values()
