codes.benchmark.bench_utils

Functions

check_benchmark(conf)

Check whether there are any configuration issues with the benchmark.

check_surrogate(surrogate, conf)

Check whether the required models for the benchmark are present in the expected directories.

clean_metrics(metrics, conf)

Clean the metrics dictionary to remove problematic entries.

convert_dict_to_scientific_notation(d[, ...])

Convert all numerical values in a dictionary to scientific notation.
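A minimal sketch of what this conversion could look like, assuming plain Python numbers as values; the precision argument stands in for whatever optional arguments the real function takes:

    def to_scientific(d, precision=6):
        # Format every numeric value (but not booleans) as a scientific-notation string.
        out = {}
        for key, value in d.items():
            if isinstance(value, dict):
                out[key] = to_scientific(value, precision)
            elif isinstance(value, (int, float)) and not isinstance(value, bool):
                out[key] = f"{value:.{precision}e}"
            else:
                out[key] = value
        return out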

convert_to_standard_types(data)

Recursively convert data to standard types that can be serialized to YAML.
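A hedged sketch of one way to do this, assuming NumPy scalars and arrays are the main non-serializable types involved:

    import numpy as np

    def to_standard_types(data):
        # Recursively turn NumPy scalars/arrays into plain Python types so YAML can dump them.
        if isinstance(data, dict):
            return {k: to_standard_types(v) for k, v in data.items()}
        if isinstance(data, (list, tuple)):
            return [to_standard_types(v) for v in data]
        if isinstance(data, np.generic):
            return data.item()
        if isinstance(data, np.ndarray):
            return data.tolist()
        return data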

count_trainable_parameters(model)

Count the number of trainable parameters in the model.
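For a PyTorch model this is usually the standard one-liner, sketched here under that assumption:

    def count_trainable_parameters(model):
        # Sum the element counts of all parameters that require gradients.
        return sum(p.numel() for p in model.parameters() if p.requires_grad)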

discard_numpy_entries(d)

Recursively remove dictionary entries that contain NumPy arrays.
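A minimal sketch of the idea, assuming an entry is dropped when its value is a NumPy array:

    import numpy as np

    def discard_numpy_entries(d):
        # Drop array-valued entries and recurse into nested dictionaries.
        cleaned = {}
        for key, value in d.items():
            if isinstance(value, np.ndarray):
                continue
            cleaned[key] = discard_numpy_entries(value) if isinstance(value, dict) else value
        return cleaned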

flatten_dict(d[, parent_key, sep])

Flatten a nested dictionary.
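A sketch of the usual flattening pattern; the default separator "." is an assumption, not the documented default:

    def flatten_dict(d, parent_key="", sep="."):
        # Join nested keys with `sep`, e.g. {"a": {"b": 1}} becomes {"a.b": 1}.
        items = {}
        for key, value in d.items():
            new_key = f"{parent_key}{sep}{key}" if parent_key else key
            if isinstance(value, dict):
                items.update(flatten_dict(value, new_key, sep))
            else:
                items[new_key] = value
        return items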

format_seconds(seconds)

Format a duration given in seconds as hh:mm:ss.
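A minimal sketch of the hh:mm:ss formatting:

    def format_seconds(seconds):
        # Convert a duration in seconds to an hh:mm:ss string.
        hours, remainder = divmod(int(round(seconds)), 3600)
        minutes, secs = divmod(remainder, 60)
        return f"{hours:02d}:{minutes:02d}:{secs:02d}"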

format_time(mean_time, std_time)

Format mean and std time consistently in ns, µs, ms, or s.
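A sketch of how such unit scaling could work, assuming both times are given in seconds (the real function may expect a different unit):

    def format_time(mean_time, std_time):
        # Pick a shared unit from the mean, then format mean ± std in that unit.
        for factor, unit in ((1e-9, "ns"), (1e-6, "µs"), (1e-3, "ms"), (1.0, "s")):
            if mean_time < factor * 1e3 or unit == "s":
                return f"{mean_time / factor:.2f} ± {std_time / factor:.2f} {unit}"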

get_model_config(surr_name, config)

Get the model configuration for a specific surrogate model from the dataset folder.

get_required_models_list(surrogate, conf)

Generate a list of required models based on the configuration settings.

get_surrogate(surrogate_name)

Check if the surrogate model exists.

load_model(model, training_id, surr_name, ...)

Load a trained surrogate model.

make_comparison_csv(metrics, config)

Generate a CSV file comparing metrics for different surrogate models.
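A hedged sketch of the layout such a comparison file could take, with surrogates as columns and flattened metric keys as rows; the output path and the shape of metrics are assumptions here, since the real function derives them from config:

    import csv

    def write_comparison_csv(metrics, path="metrics_comparison.csv"):
        # `metrics` is assumed to map surrogate name -> flat {metric_key: value} dict.
        surrogates = list(metrics)
        all_keys = sorted({k for m in metrics.values() for k in m})
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["metric", *surrogates])
            for key in all_keys:
                writer.writerow([key, *(metrics[s].get(key, "") for s in surrogates)])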

measure_memory_footprint(model, inputs)

Measure the memory footprint of the model during the forward and backward passes.
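A rough sketch of one way to measure this for a PyTorch model on a CUDA device; the actual strategy (CPU models, repeated runs, etc.) may differ:

    import torch

    def measure_memory_footprint(model, inputs):
        # Track peak CUDA memory across one forward and one backward pass.
        torch.cuda.reset_peak_memory_stats()
        before = torch.cuda.memory_allocated()
        outputs = model(inputs)
        forward_peak = torch.cuda.max_memory_allocated()
        outputs.sum().backward()
        backward_peak = torch.cuda.max_memory_allocated()
        return {"before": before, "forward_peak": forward_peak, "backward_peak": backward_peak}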

read_yaml_config(config_path)

Read the YAML configuration file.
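In essence this is a safe YAML load; a minimal sketch:

    import yaml

    def read_yaml_config(config_path):
        # Parse the YAML benchmark configuration into a plain dictionary.
        with open(config_path, "r") as f:
            return yaml.safe_load(f)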

write_metrics_to_yaml(surr_name, conf, metrics)

Write the benchmark metrics to a YAML file.
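A hedged sketch of the final step; the output directory and file name are assumptions (the real function derives them from conf), and the metrics are expected to be YAML-safe already (see convert_to_standard_types above):

    import os
    import yaml

    def write_metrics_yaml(surr_name, output_dir, metrics):
        # Dump the metrics dict to <output_dir>/<surrogate>_metrics.yaml (names assumed).
        path = os.path.join(output_dir, f"{surr_name.lower()}_metrics.yaml")
        with open(path, "w") as f:
            yaml.safe_dump(metrics, f, sort_keys=False)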