deepof.annotation_utils.supervised_tagging

deepof.annotation_utils.supervised_tagging(coord_object: deepof.data.coordinates, raw_coords: deepof.data.table_dict, coords: deepof.data.table_dict, dists: deepof.data.table_dict, speeds: deepof.data.table_dict, full_features: dict, video: str, trained_model_path: str | None = None, center: str = 'Center', params: dict = {}) → DataFrame

Output a dataframe with the registered motifs per frame.

If specified, produces a labeled video displaying the information in real time.

Parameters:
  • coord_object (deepof.data.coordinates) – coordinates object containing the project information

  • raw_coords (deepof.data.table_dict) – table_dict with raw coordinates

  • coords (deepof.data.table_dict) – table_dict with already processed (centered and aligned) coordinates

  • dists (deepof.data.table_dict) – table_dict with already processed distances

  • speeds (deepof.data.table_dict) – table_dict with already processed speeds

  • full_features (dict) – dictionary with pre-computed features for each experiment

  • video (str) – string name of the experiment to tag

  • trained_model_path (str) – path indicating where all pretrained models are located

  • center (str) – Body part to center coordinates on. “Center” by default.

  • params (dict) – dictionary used to overwrite the default values of the parameters of the rule-based annotation functions (a sketch of such an override follows below). See the documentation for details.
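
The snippet below is a minimal, hypothetical sketch of how such an override dictionary might look. The key names ("huddle_speed", "close_contact_tol") are placeholders, not guaranteed to match the parameter names actually accepted by deepof's annotation functions, so consult the parameter documentation before use.

   # Hypothetical override of two rule-based annotation thresholds.
   # Key names are placeholders; check deepof's documentation for the
   # parameters actually accepted through `params`.
   custom_params = {
       "huddle_speed": 0.05,       # placeholder speed threshold
       "close_contact_tol": 20.0,  # placeholder distance tolerance
   }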

Returns:

table with traits as columns and frames as rows. Each value is a boolean indicating whether the trait was detected in the given frame.

Return type:

tag_df (pandas.DataFrame)
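
As a usage illustration, the following sketch calls supervised_tagging directly and inspects the returned boolean table. All input objects (my_coordinates, my_raw_coords, my_coords, my_dists, my_speeds, my_full_features) and the video name are assumptions, standing in for objects prepared beforehand from a deepof project; in typical workflows these inputs are assembled internally by deepof's higher-level supervised annotation routines rather than by hand, so treat this purely as an illustration of the signature and of the returned DataFrame.

   import deepof.annotation_utils

   # Direct call; every `my_*` variable is assumed to exist already and the
   # video name is hypothetical.
   tag_df = deepof.annotation_utils.supervised_tagging(
       coord_object=my_coordinates,     # deepof.data.coordinates for the project
       raw_coords=my_raw_coords,        # table_dict with raw coordinates
       coords=my_coords,                # table_dict with centered/aligned coordinates
       dists=my_dists,                  # table_dict with processed distances
       speeds=my_speeds,                # table_dict with processed speeds
       full_features=my_full_features,  # dict of pre-computed features
       video="Test_video_1",            # hypothetical experiment name
       trained_model_path=None,         # default pretrained model location
       center="Center",
       params=custom_params,            # optional overrides (see sketch above)
   )

   # Columns are traits, rows are frames, values are booleans, so summing
   # each column gives the number of frames in which a trait was detected.
   print(tag_df.sum(axis=0))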