codes.tune package#

Submodules#

codes.tune.evaluate_study module#

codes.tune.evaluate_study.load_model_test_losses(model_path)#

Load the test losses from the model checkpoint.

Parameters:

model_path (str) – Path to the model checkpoint.

Returns:

Test losses.

Return type:

np.ndarray

codes.tune.evaluate_study.load_study_config(study_name)#

Load the YAML config used by the study (optuna_config.yaml).

Return type:

dict

codes.tune.evaluate_study.main()#

Main function to evaluate an Optuna study and its top models. Usually, viewing the study database with Optuna Dashboard is more informative.

codes.tune.evaluate_study.moving_average(data, window_size)#

Compute the moving average of a 1D array.

Parameters:
  • data (np.ndarray) – 1D array over which to compute the moving average.

  • window_size (int) – Size of the window for the moving average.

Returns:

Moving average of the input data.

Return type:

np.ndarray

Raises:

ValueError – If the window size is not a positive integer.
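
A minimal sketch of the behavior described above, assuming a simple convolution-based implementation (the packaged function may handle edges differently):

```python
import numpy as np

def moving_average_sketch(data: np.ndarray, window_size: int) -> np.ndarray:
    """Illustrative stand-in for codes.tune.evaluate_study.moving_average."""
    if not isinstance(window_size, int) or window_size <= 0:
        raise ValueError("window_size must be a positive integer")
    kernel = np.ones(window_size) / window_size
    # "valid" keeps only positions where the window fully overlaps the data.
    return np.convolve(data, kernel, mode="valid")

smoothed = moving_average_sketch(np.random.rand(100), window_size=5)
```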

codes.tune.evaluate_study.parse_arguments()#

Parse command-line arguments.

codes.tune.evaluate_study.plot_test_losses(test_losses, labels, study_name, window_size=5)#

Plot the test losses of the top models.

Parameters:
  • test_losses (list[np.ndarray]) – List of test losses.

  • labels (list[str]) – List of labels for the test losses.

  • study_name (str) – Name of the study.

  • window_size (int, optional) – Size of the window for the moving average. Defaults to 5.

Return type:

None
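
A usage sketch combining the functions above; the checkpoint paths and labels are hypothetical and follow the tuned/<study_name>/models layout described under evaluate_tuning:

```python
from codes.tune.evaluate_study import load_model_test_losses, plot_test_losses

# Hypothetical checkpoint paths -- adjust to your study's actual layout.
model_paths = [
    "tuned/my_study/models/FullyConnected/trial_1.pth",
    "tuned/my_study/models/FullyConnected/trial_2.pth",
]
test_losses = [load_model_test_losses(path) for path in model_paths]
labels = [f"trial {i}" for i in (1, 2)]

plot_test_losses(test_losses, labels, study_name="my_study", window_size=5)
```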

codes.tune.evaluate_tuning module#

codes.tune.evaluate_tuning.evaluate_tuning(study_name)#

Evaluate the tuning step by generating loss plots for each surrogate model.

This function looks for folders in “tuned/<study_name>/models”. Each folder should correspond to a surrogate model (e.g., “FullyConnected” or “LatentPoly”). It then loads all .pth files within each folder, extracts the loss trajectory (test_loss) from each file and the trial number from its filename, and generates a loss plot.

Parameters:

study_name (str) – Name of the study (e.g., “primordialtest”).

Return type:

None
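
A usage sketch, with the expected directory layout shown as comments (the exact checkpoint filenames are an assumption):

```python
from codes.tune.evaluate_tuning import evaluate_tuning

# Expected layout (filenames are assumptions; the trial number is parsed
# from each filename):
#
#   tuned/primordialtest/models/
#       FullyConnected/
#           trial_0.pth, trial_1.pth, ...
#       LatentPoly/
#           trial_0.pth, ...
#
# Generates one loss plot per surrogate folder.
evaluate_tuning("primordialtest")
```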

codes.tune.evaluate_tuning.load_loss_history(model_path)#

Load loss histories from a saved model file.

The saved file is expected to follow the project’s custom checkpoint format, in which the loss histories and other attributes are stored under the “attributes” key.

Parameters:

model_path (str) – Path to the .pth file.

Returns:

(train_loss, test_loss, n_epochs)

Return type:

tuple
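
A usage sketch; the path is hypothetical, and the checkpoint must follow the custom format described above:

```python
from codes.tune.evaluate_tuning import load_loss_history

# Hypothetical path to a checkpoint saved in the project's custom format
# (loss histories stored under the "attributes" key).
train_loss, test_loss, n_epochs = load_loss_history(
    "tuned/my_study/models/FullyConnected/trial_3.pth"
)
print(f"{n_epochs} epochs, final test loss: {test_loss[-1]:.3e}")
```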

codes.tune.evaluate_tuning.main()#

Main function to evaluate tuning.

Reads the study name from command-line arguments, processes each surrogate folder in tuned/<study_name>/models, and generates loss plots saved to tuned/<study_name>/.

codes.tune.evaluate_tuning.parse_args()#

Parse command-line arguments.

Returns:

Parsed arguments containing study_name.

Return type:

argparse.Namespace

codes.tune.evaluate_tuning.plot_losses(loss_histories, epochs, labels, title='Losses', save=False, conf=None, surr_name=None, mode='main', percentage=2.0, show_title=True)#

Plot the loss trajectories for multiple models using their actual lengths.

Each loss trajectory is plotted over its own length (i.e. trial-specific number of epochs), rather than forcing all trajectories to the length of the shortest one. The global y-axis limits are determined from the valid (nonzero) portions of each trajectory after excluding the initial percentage of epochs.

Parameters:
  • loss_histories (tuple[np.ndarray, ...]) – Tuple of loss history arrays.

  • epochs (int) – Total number of training epochs (used for labeling only).

  • labels (tuple[str, ...]) – Labels for each loss history.

  • title (str) – Title for the plot.

  • save (bool) – Whether to save the plot as an image file.

  • conf (dict | None) – Configuration dictionary (used for naming output files).

  • surr_name (str | None) – Surrogate model name.

  • mode (str) – Mode for labeling (e.g., “main” or surrogate name).

  • percentage (float) – Percentage of initial epochs to exclude from min/max y-value calculation.

  • show_title (bool) – Whether to display the title.

Return type:

None
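
A usage sketch for a single surrogate; the checkpoint paths are hypothetical, and the trajectories may have different lengths, which plot_losses handles as described above:

```python
import numpy as np
from codes.tune.evaluate_tuning import load_loss_history, plot_losses

# Hypothetical checkpoints for one surrogate.
paths = [
    "tuned/my_study/models/LatentPoly/trial_0.pth",
    "tuned/my_study/models/LatentPoly/trial_7.pth",
]
histories, labels = [], []
for path in paths:
    _, test_loss, _ = load_loss_history(path)
    histories.append(np.asarray(test_loss))
    labels.append(path.rsplit("/", 1)[-1])

plot_losses(
    tuple(histories),
    epochs=max(len(h) for h in histories),  # used for labeling only
    labels=tuple(labels),
    title="LatentPoly test losses",
    surr_name="LatentPoly",
    mode="LatentPoly",
)
```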

codes.tune.optuna_fcts module#

codes.tune.optuna_fcts.create_objective(config, study_name, device_queue)#

Create the objective function for Optuna.

Parameters:
  • config (dict) – Configuration dictionary.

  • study_name (str) – Name of the study.

  • device_queue (queue.Queue) – Queue of available devices.

Returns:

Objective function for Optuna.

Return type:

function
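
A single-objective usage sketch; the config path, study name, and device names are assumptions, and a multi-objective study would instead need directions for both loss and inference time:

```python
import queue

import optuna

from codes.tune.optuna_fcts import create_objective, load_yaml_config

config = load_yaml_config("optuna_config.yaml")  # assumed location
study_name = "primordialtest"

# One queue entry per device that trials may run on.
device_queue = queue.Queue()
for device in ("cuda:0", "cuda:1"):
    device_queue.put(device)

study = optuna.create_study(study_name=study_name, direction="minimize")
study.optimize(create_objective(config, study_name, device_queue), n_trials=10)
```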

codes.tune.optuna_fcts.get_activation_function(name)#

Get the activation function module from its name. Required because Optuna suggests hyperparameters as primitive values (e.g., strings), so activation functions are selected by name and resolved to modules here.

Parameters:

name (str) – Name of the activation function.

Returns:

Activation function module.

Return type:

nn.Module
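
An illustrative sketch of what such a lookup might do; the actual set of supported names (and whether instances or classes are returned) is not specified here:

```python
import torch.nn as nn

# Illustrative only -- the real get_activation_function may support
# different names or return values.
_ACTIVATIONS: dict[str, nn.Module] = {
    "relu": nn.ReLU(),
    "tanh": nn.Tanh(),
    "elu": nn.ELU(),
    "gelu": nn.GELU(),
}

def get_activation_function_sketch(name: str) -> nn.Module:
    try:
        return _ACTIVATIONS[name.lower()]
    except KeyError as err:
        raise ValueError(f"Unknown activation function: {name}") from err
```

An Optuna trial can then suggest the activation by name, e.g. trial.suggest_categorical("activation", ["relu", "tanh", "gelu"]), and the chosen name is resolved to a module by this lookup.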

codes.tune.optuna_fcts.load_yaml_config(config_path)#

Load a YAML configuration file.

Parameters:

config_path (str) – Path to the YAML configuration file.

Returns:

Configuration dictionary.

Return type:

dict
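
A minimal equivalent, assuming a plain PyYAML load (the packaged version may add validation or path handling):

```python
import yaml

def load_yaml_config_sketch(config_path: str) -> dict:
    """Minimal stand-in for codes.tune.optuna_fcts.load_yaml_config."""
    with open(config_path, "r") as f:
        return yaml.safe_load(f)

config = load_yaml_config_sketch("optuna_config.yaml")  # assumed filename
```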

codes.tune.optuna_fcts.make_optuna_params(trial, optuna_params)#

Make Optuna suggested parameters from the optuna_config.yaml file.

Parameters:
  • trial (optuna.Trial) – Optuna trial object.

  • optuna_params (dict) – Optuna parameters dictionary.

Returns:

Suggested parameters.

Return type:

dict
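
A sketch of how such a mapping from config entries to Optuna suggestions might look; the schema shown for optuna_config.yaml is hypothetical:

```python
import optuna

# Hypothetical schema -- the real optuna_config.yaml layout may differ.
optuna_params = {
    "learning_rate": {"type": "float", "low": 1e-5, "high": 1e-2, "log": True},
    "num_layers": {"type": "int", "low": 2, "high": 8},
    "activation": {"type": "categorical", "choices": ["relu", "tanh", "gelu"]},
}

def make_optuna_params_sketch(trial: optuna.Trial, optuna_params: dict) -> dict:
    suggested = {}
    for name, spec in optuna_params.items():
        if spec["type"] == "float":
            suggested[name] = trial.suggest_float(
                name, spec["low"], spec["high"], log=spec.get("log", False)
            )
        elif spec["type"] == "int":
            suggested[name] = trial.suggest_int(name, spec["low"], spec["high"])
        elif spec["type"] == "categorical":
            suggested[name] = trial.suggest_categorical(name, spec["choices"])
    return suggested
```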

codes.tune.optuna_fcts.training_run(trial, device, config, study_name)#

Run the training for a single Optuna trial and return the loss. In multi-objective mode, the mean inference time is also returned.

Parameters:
  • trial (optuna.Trial) – Optuna trial object.

  • device (str) – Device to run the training on.

  • config (dict) – Configuration dictionary.

  • study_name (str) – Name of the study.

Returns:

Loss value in single-objective mode, or (loss, mean_inference_time) in multi-objective mode.

Return type:

float | tuple[float, float]

Module contents#

codes.tune.create_objective(config, study_name, device_queue)#

Create the objective function for Optuna.

Parameters:
  • config (dict) – Configuration dictionary.

  • study_name (str) – Name of the study.

  • device_queue (queue.Queue) – Queue of available devices.

Returns:

Objective function for Optuna.

Return type:

function

codes.tune.get_activation_function(name)#

Get the activation function module from its name. Required because Optuna suggests hyperparameters as primitive values (e.g., strings), so activation functions are selected by name and resolved to modules here.

Parameters:

name (str) – Name of the activation function.

Returns:

Activation function module.

Return type:

nn.Module

codes.tune.load_model_test_losses(model_path)#

Load the test losses from the model checkpoint.

Parameters:

model_path (str) – Path to the model checkpoint.

Returns:

Test losses.

Return type:

np.ndarray

codes.tune.load_study_config(study_name)#

Load the YAML config used by the study (optuna_config.yaml).

Return type:

dict

codes.tune.load_yaml_config(config_path)#

Load a YAML configuration file.

Parameters:

config_path (str) – Path to the YAML configuration file.

Returns:

Configuration dictionary.

Return type:

dict

codes.tune.make_optuna_params(trial, optuna_params)#

Build the Optuna-suggested parameters from the optuna_config.yaml file.

Parameters:
  • trial (optuna.Trial) – Optuna trial object.

  • optuna_params (dict) – Optuna parameters dictionary.

Returns:

Suggested parameters.

Return type:

dict

codes.tune.moving_average(data, window_size)#

Compute the moving average of a 1D array.

Parameters:
  • data (np.ndarray) – 1D array over which to compute the moving average.

  • window_size (int) – Size of the window for the moving average.

Returns:

Moving average of the input data.

Return type:

np.ndarray

Raises:

ValueError – If the window size is not a positive integer.

codes.tune.plot_test_losses(test_losses, labels, study_name, window_size=5)#

Plot the test losses of the top models.

Parameters:
  • test_losses (list[np.ndarray]) – List of test losses.

  • labels (list[str]) – List of labels for the test losses.

  • study_name (str) – Name of the study.

  • window_size (int, optional) – Size of the window for the moving average. Defaults to 5.

Return type:

None

codes.tune.training_run(trial, device, config, study_name)#

Run the training for a single Optuna trial and return the loss. In multi-objective mode, the mean inference time is also returned.

Parameters:
  • trial (optuna.Trial) – Optuna trial object.

  • device (str) – Device to run the training on.

  • config (dict) – Configuration dictionary.

  • study_name (str) – Name of the study.

Returns:

Loss value in single-objective mode, or (loss, mean_inference_time) in multi-objective mode.

Return type:

float | tuple[float, float]