codes.surrogates package#

Submodules#

codes.surrogates.surrogate_classes module#

codes.surrogates.surrogates module#

Module contents#

class codes.surrogates.AbstractSurrogateModel(device=None, n_quantities=29, n_timesteps=100, config=None)#

Bases: ABC, Module

Abstract base class for surrogate models. This class implements the basic structure of a surrogate model and defines the methods that need to be implemented by the subclasses for it to be compatible with the benchmarking framework. For more information, see https://codes-docs.web.app/documentation.html#add_model.

Parameters:
  • device (str, optional) – The device to run the model on. Defaults to None.

  • n_quantities (int, optional) – The number of quantities. Defaults to 29.

  • n_timesteps (int, optional) – The number of timesteps. Defaults to 100.

  • config (dict, optional) – The configuration dictionary. Defaults to {}.

train_loss#

The training loss.

Type:

float

test_loss#

The test loss.

Type:

float

MAE#

The mean absolute error.

Type:

float

normalisation#

The normalisation parameters.

Type:

dict

train_duration#

The training duration.

Type:

float

device#

The device to run the model on.

Type:

str

n_quantities#

The number of quantities.

Type:

int

n_timesteps#

The number of timesteps.

Type:

int

L1#

The L1 loss function.

Type:

nn.L1Loss

config#

The configuration dictionary.

Type:

dict

forward(inputs: Any) -> tuple[Tensor, Tensor]: Forward pass of the model.

prepare_data(dataset_train: np.ndarray, dataset_test: np.ndarray | None, dataset_val: np.ndarray | None, timesteps: np.ndarray, batch_size: int, shuffle: bool) -> tuple[DataLoader, DataLoader, DataLoader]: Gets the data loaders for training, testing, and validation.

fit(train_loader: DataLoader, test_loader: DataLoader, epochs: int | None, position: int, description: str) -> None: Trains the model on the training data. Sets the train_loss and test_loss attributes.

predict(data_loader: DataLoader) -> tuple[Tensor, Tensor]: Evaluates the model on the given data loader.

save(model_name: str, subfolder: str, training_id: str, data_params: dict) -> None: Saves the model to disk.

load(training_id: str, surr_name: str, model_identifier: str) -> None: Loads a trained surrogate model.

setup_progress_bar(epochs: int, position: int, description: str) -> tqdm: Helper function to set up a progress bar for training.

denormalize(data: Tensor) -> Tensor: Denormalizes the data back to the original scale.

denormalize(data)#

Denormalize the data.

Parameters:

data (np.ndarray) – The data to denormalize.

Returns:

The denormalized data.

Return type:

np.ndarray
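A minimal usage sketch (the model instance and data loader names are hypothetical; any trained AbstractSurrogateModel subclass exposes the same interface):

    # `model` is a trained surrogate, `val_loader` comes from prepare_data (both assumed here).
    preds, targets = model.predict(val_loader)

    # Map the normalised outputs back to the original scale.
    preds_denorm = model.denormalize(preds)
    targets_denorm = model.denormalize(targets)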

abstract fit(train_loader, test_loader, epochs, position, description)#

Perform the training of the model. Sets the train_loss and test_loss attributes.

Parameters:
  • train_loader (DataLoader) – The DataLoader object containing the training data.

  • test_loader (DataLoader) – The DataLoader object containing the testing data.

  • epochs (int) – The number of epochs to train the model for.

  • position (int) – The position of the progress bar.

  • description (str) – The description of the progress bar.

Return type:

None

abstract forward(inputs)#

Forward pass of the model.

Parameters:

inputs (Any) – The input data as received from the dataloader.

Returns:

The model predictions and the targets.

Return type:

tuple[Tensor, Tensor]

classmethod get_registered_classes()#

Returns the list of registered surrogate model classes.

Return type:

list[type[AbstractSurrogateModel]]

load(training_id, surr_name, model_identifier, model_dir=None)#

Load a trained surrogate model.

Parameters:
  • training_id (str) – The training identifier.

  • surr_name (str) – The name of the surrogate model.

  • model_identifier (str) – The identifier of the model (e.g., ‘main’).

Return type:

None

Returns:

None. The model is loaded in place.

predict(data_loader)#

Evaluate the model on the given dataloader.

Parameters:

data_loader (DataLoader) – The DataLoader object containing the data the model is evaluated on.

Returns:

The predictions and targets.

Return type:

tuple[Tensor, Tensor]

abstract prepare_data(dataset_train, dataset_test, dataset_val, timesteps, batch_size, shuffle, dummy_timesteps=True)#

Prepare the data for training, testing, and validation. This method should return the DataLoader objects for the training, testing, and validation data.

Parameters:
  • dataset_train (np.ndarray) – The training dataset.

  • dataset_test (np.ndarray) – The testing dataset.

  • dataset_val (np.ndarray) – The validation dataset.

  • timesteps (np.ndarray) – The timesteps.

  • batch_size (int) – The batch size.

  • shuffle (bool) – Whether to shuffle the data.

  • dummy_timesteps (bool) – Whether to use dummy timesteps. Defaults to True.

Returns:

The DataLoader objects for the training, testing, and validation data.

Return type:

tuple[DataLoader, DataLoader, DataLoader]

classmethod register(surrogate)#

Registers a surrogate model class into the registry.
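A minimal subclassing and registration sketch (the class name, placeholder layer, and stubbed method bodies are illustrative assumptions; see https://codes-docs.web.app/documentation.html#add_model for the full requirements):

    import torch.nn as nn
    from codes.surrogates import AbstractSurrogateModel

    class MySurrogate(AbstractSurrogateModel):
        """Hypothetical surrogate; forward, prepare_data, and fit must be implemented."""

        def __init__(self, device=None, n_quantities=29, n_timesteps=100, config=None):
            super().__init__(device, n_quantities, n_timesteps, config)
            self.net = nn.Linear(n_quantities, n_quantities)  # placeholder network

        def forward(self, inputs):
            x, targets = inputs
            return self.net(x), targets

        def prepare_data(self, dataset_train, dataset_test, dataset_val,
                         timesteps, batch_size, shuffle, dummy_timesteps=True):
            ...  # build and return (train_loader, test_loader, val_loader)

        def fit(self, train_loader, test_loader, epochs, position=0, description=""):
            ...  # training loop; must set self.train_loss and self.test_loss

    # Make the new model visible to the benchmarking framework.
    AbstractSurrogateModel.register(MySurrogate)
    print(AbstractSurrogateModel.get_registered_classes())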

save(model_name, base_dir, training_id)#

Save the model to disk.

Parameters:
  • model_name (str) – The name of the model.

  • base_dir (str) – The base directory to save the model in.

  • training_id (str) – The training identifier.

Return type:

None

setup_progress_bar(epochs, position, description)#

Helper function to set up a progress bar for training.

Parameters:
  • epochs (int) – The number of epochs.

  • position (int) – The position of the progress bar.

  • description (str) – The description of the progress bar.

Returns:

The progress bar.

Return type:

tqdm

class codes.surrogates.BranchNet(input_size, hidden_size, output_size, num_hidden_layers, activation=ReLU())#

Bases: Module

Class that defines the branch network for the MultiONet model.

Parameters:
  • input_size (int) – The input size for the network.

  • hidden_size (int) – The number of hidden units in each layer.

  • output_size (int) – The number of output units.

  • num_hidden_layers (int) – The number of hidden layers.

  • activation (nn.Module, optional) – The activation function. Defaults to ReLU().

forward(x)#

Forward pass for the branch network.

Parameters:

x (torch.Tensor) – The input tensor.

Return type:

Tensor

class codes.surrogates.ChemDataset(raw_data, timesteps, device)#

Bases: Dataset

Dataset class for the latent neural ODE model. The data is a 3D tensor with dimensions (batch, timesteps, species). The dataset also returns the timesteps for the data, as they are required for the integration.
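A minimal construction sketch, assuming the (batch, timesteps, species) layout described above (array sizes are illustrative):

    import numpy as np
    from codes.surrogates import ChemDataset

    raw_data = np.random.rand(64, 100, 29)   # (batch, timesteps, species)
    timesteps = np.linspace(0.0, 1.0, 100)   # needed later for the ODE integration

    dataset = ChemDataset(raw_data, timesteps, device="cpu")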

class codes.surrogates.Decoder(out_features, latent_features=5, coder_layers=3, coder_width=32, activation=ReLU())#

Bases: Module

Fully connected decoder network that maps the latent representation back to the original output space.

The network mirrors the encoder structure, using a specified number of hidden layers (coder_layers) with uniform width (coder_width) and ends with a linear mapping to the output features followed by Tanh.

Parameters:
  • out_features (int) – Number of output features.

  • latent_features (int) – Dimension of the latent representation.

  • coder_layers (int) – Number of hidden layers.

  • coder_width (int) – Number of neurons in each hidden layer.

  • activation (nn.Module) – Activation function.

forward(x)#

Forward pass to decode the latent representation into output features.

Return type:

Tensor

class codes.surrogates.Encoder(in_features, latent_features=5, coder_layers=3, coder_width=32, activation=ReLU())#

Bases: Module

Fully connected encoder network that maps input features to a lower-dimensional latent space.

The architecture consists of a specified number of hidden layers (coder_layers) with uniform width (coder_width) and ends with a linear mapping to the latent space followed by a Tanh activation.

Parameters:
  • in_features (int) – Number of input features.

  • latent_features (int) – Dimension of the latent representation.

  • coder_layers (int) – Number of hidden layers.

  • coder_width (int) – Number of neurons in each hidden layer.

  • activation (nn.Module) – Activation function.

forward(x)#

Forward pass to encode the input into the latent space.

Return type:

Tensor
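A round-trip sketch showing how the encoder and decoder mirror each other (all sizes are illustrative):

    import torch
    import torch.nn as nn
    from codes.surrogates import Encoder, Decoder

    enc = Encoder(in_features=29, latent_features=5, coder_layers=3, coder_width=32, activation=nn.ReLU())
    dec = Decoder(out_features=29, latent_features=5, coder_layers=3, coder_width=32, activation=nn.ReLU())

    x = torch.randn(8, 29)   # batch of 8 states with 29 quantities
    z = enc(x)               # latent representation, shape (8, 5)
    x_hat = dec(z)           # reconstruction, shape (8, 29)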

class codes.surrogates.FullyConnected(device=None, n_quantities=29, n_timesteps=100, config=None)#

Bases: AbstractSurrogateModel

create_dataloader(dataset, timesteps, batch_size, shuffle=False)#

Create a DataLoader with optimized memory-safe shuffling and batching.

Parameters:
  • dataset (np.ndarray) – The data to load. Shape: (n_samples, n_timesteps, n_quantities).

  • timesteps (np.ndarray) – The timesteps. Shape: (n_timesteps,).

  • batch_size (int) – The batch size.

  • shuffle (bool, optional) – Whether to shuffle the data. Defaults to False.

Returns:

A DataLoader with precomputed batches.

Return type:

DataLoader

epoch(data_loader, criterion, optimizer)#

Perform one training epoch.

Return type:

float

fit(train_loader, test_loader, epochs, position=0, description='Training FullyConnected', multi_objective=False)#

Train the FullyConnected model.

Parameters:
  • train_loader (DataLoader) – The DataLoader object containing the training data.

  • test_loader (DataLoader) – The DataLoader object containing the test data.

  • epochs (int, optional) – The number of epochs to train the model.

  • position (int) – The position of the progress bar.

  • description (str) – The description for the progress bar.

  • multi_objective (bool) – Whether multi-objective optimization is used. If True, trial.report is not used (not supported by Optuna).

Return type:

None

Returns:

None. The training loss, test loss, and MAE are stored in the model.

forward(inputs)#

Forward pass for the FullyConnected model.

Parameters:

inputs (tuple[torch.Tensor, torch.Tensor]) – (x, targets); ‘targets’ is included for a consistent interface.

Return type:

tuple[Tensor, Tensor]

Returns:

(outputs, targets)

prepare_data(dataset_train, dataset_test, dataset_val, timesteps, batch_size, shuffle=True, dummy_timesteps=True)#

Prepare the data for the predict or fit methods.

Parameters:
  • dataset_train (np.ndarray) – Training data.

  • dataset_test (np.ndarray | None) – Test data (optional).

  • dataset_val (np.ndarray | None) – Validation data (optional).

  • timesteps (np.ndarray) – Timesteps.

  • batch_size (int) – Batch size.

  • shuffle (bool, optional) – Whether to shuffle the data. Defaults to True.

  • dummy_timesteps (bool, optional) – Whether to use dummy timesteps. Defaults to True.

Returns:

DataLoader for training, test, and validation data.

Return type:

tuple[DataLoader, DataLoader | None, DataLoader | None]

setup_optimizer_and_scheduler()#

Utility function to set up the optimizer and (optionally) scheduler.

Return type:

Optimizer
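A minimal end-to-end sketch of the shared surrogate interface (array shapes, the save location, and the omitted model config are simplifying assumptions; a real run typically passes a config dict with the network hyperparameters):

    import numpy as np
    from codes.surrogates import FullyConnected

    model = FullyConnected(device="cpu", n_quantities=29, n_timesteps=100)

    train = np.random.rand(128, 100, 29)   # (n_samples, n_timesteps, n_quantities)
    test = np.random.rand(32, 100, 29)
    timesteps = np.linspace(0.0, 1.0, 100)

    train_loader, test_loader, _ = model.prepare_data(
        train, test, None, timesteps, batch_size=32, shuffle=True
    )
    model.fit(train_loader, test_loader, epochs=10)
    preds, targets = model.predict(test_loader)
    model.save("fully_connected", "trained", training_id="example_run")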

class codes.surrogates.FullyConnectedNet(input_size, hidden_size, output_size, num_hidden_layers, activation=ReLU())#

Bases: Module

forward(inputs)#

Define the computation performed at every call.

Should be overridden by all subclasses.

Return type:

Tensor

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class codes.surrogates.LatentNeuralODE(device=None, n_quantities=29, n_timesteps=100, model_config=None)#

Bases: AbstractSurrogateModel

LatentNeuralODE is a class that represents a latent neural ordinary differential equation model. It includes an encoder, decoder, and neural ODE. The integrator is implemented by the torchode framework.

model#

The neural network model wrapped in a ModelWrapper object.

Type:

ModelWrapper

config#

The configuration for the model.

Type:

LatentNeuralODEBaseConfig

forward(inputs)#

Takes whatever the dataloader outputs, performs a forward pass through the model and returns the predictions with the respective targets.

prepare_data(dataset_train, dataset_test, dataset_val, timesteps, batch_size, shuffle)#

Prepares the data for training by creating a DataLoader object.

fit(train_loader, test_loader, epochs, position, description)#

Fits the model to the training data. Sets the train_loss and test_loss attributes.

fit(train_loader, test_loader, epochs, position=0, description='Training LatentNeuralODE', multi_objective=False)#

Fits the model to the training data. Sets the train_loss and test_loss attributes. After 10 epochs, the loss weights are renormalized to scale the individual loss terms.

Parameters:
  • train_loader (DataLoader) – The data loader for the training data.

  • test_loader (DataLoader) – The data loader for the test data.

  • epochs (int | None) – The number of epochs to train the model. If None, uses the value from the config.

  • position (int) – The position of the progress bar.

  • description (str) – The description for the progress bar.

  • multi_objective (bool) – Whether multi-objective optimization is used. If True, trial.report is not used (not supported by Optuna).

Return type:

None

fit_profile(train_loader, test_loader, epochs, position=0, description='Training LatentNeuralODE with Profiling', profile_enabled=True, profile_save_path='chrome_trace_profile.json', profile_batches=2, profile_epoch=2)#

Fits the model to the training data with optional profiling for a limited scope. Only used if this method is renamed to fit in the main code (and the original fit is renamed to something else).

Parameters:
  • train_loader (DataLoader) – The data loader for the training data.

  • test_loader (DataLoader | None) – The data loader for the test data.

  • epochs (int) – The number of epochs to train the model.

  • position (int) – The position of the progress bar.

  • description (str) – The description for the progress bar.

  • profile_enabled (bool) – Whether to enable PyTorch profiling.

  • profile_save_path (str) – Path to save the profiling data.

  • profile_batches (int) – Number of batches to profile in the specified epoch.

  • profile_epoch (int) – The epoch at which profiling is performed.

Return type:

None

Returns:

None. The training loss, test loss, and MAE are stored in the model.

forward(inputs)#

Takes whatever the dataloader outputs, performs a forward pass through the model and returns the predictions with the respective targets.

Parameters:

inputs (Any) – the data from the dataloader

Returns:

predictions and targets

Return type:

tuple[torch.Tensor, torch.Tensor]

prepare_data(dataset_train, dataset_test, dataset_val, timesteps, batch_size=128, shuffle=True, dummy_timesteps=True)#

Prepares the data for training by creating DataLoader objects.

Parameters:
  • dataset_train (np.ndarray) – The training dataset.

  • dataset_test (np.ndarray) – The test dataset.

  • dataset_val (np.ndarray) – The validation dataset.

  • timesteps (np.ndarray) – The array of timesteps.

  • batch_size (int) – The batch size for the DataLoader.

  • shuffle (bool) – Whether to shuffle the training data.

Returns:

  • DataLoader for training data.

  • DataLoader for test data (None if no test data provided).

  • DataLoader for validation data (None if no validation data provided).

Return type:

tuple[DataLoader, DataLoader | None, DataLoader | None]

class codes.surrogates.LatentPoly(device=None, n_quantities=29, n_timesteps=100, model_config=None)#

Bases: AbstractSurrogateModel

LatentPoly class for training a polynomial model on latent space trajectories.

This model includes an encoder, decoder, and a learnable polynomial applied on the latent space. The architecture is chosen based on the version flag in the configuration.

config#

The configuration for the model.

Type:

LatentPolynomialBaseConfig

model#

The wrapped model (encoder, decoder, polynomial).

Type:

PolynomialModelWrapper

device#

Device for training.

Type:

str

fit(train_loader, test_loader, epochs, position=0, description='Training LatentPoly', multi_objective=False)#

Fit the model to the training data.

Parameters:
  • train_loader (DataLoader) – The data loader for the training data.

  • test_loader (DataLoader) – The data loader for the test data.

  • epochs (int | None) – The number of epochs to train the model. If None, uses the value from the config.

  • position (int) – The position of the progress bar.

  • description (str) – The description for the progress bar.

  • multi_objective (bool) – Whether multi-objective optimization is used. If True, trial.report is not used (not supported by Optuna).

Return type:

None

forward(inputs)#

Perform a forward pass through the model.

Parameters:

inputs (tuple) – Tuple containing the input tensor and timesteps.

Returns:

(Predictions, Targets)

Return type:

tuple[torch.Tensor, torch.Tensor]

prepare_data(dataset_train, dataset_test, dataset_val, timesteps, batch_size=128, shuffle=True, dummy_timesteps=True)#

Prepare DataLoaders for training, testing, and validation.

Parameters:
  • dataset_train (np.ndarray) – Training dataset.

  • dataset_test (np.ndarray | None) – Test dataset.

  • dataset_val (np.ndarray | None) – Validation dataset.

  • timesteps (np.ndarray) – Array of timesteps.

  • batch_size (int) – Batch size.

  • shuffle (bool) – Whether to shuffle training data.

  • dummy_timesteps (bool) – Whether to use dummy timesteps.

Returns:

DataLoaders for training, test, and validation datasets.

Return type:

tuple

class codes.surrogates.ModelWrapper(config, n_quantities)#

Bases: Module

Wraps the encoder, decoder, and neural ODE into a single model. Chooses architecture based on the config.model_version flag.

static deriv(x)#

Calculate the numerical derivative.

Parameters:

x (torch.Tensor) – The input tensor.

Returns:

The numerical derivative.

Return type:

torch.Tensor

classmethod deriv2(x)#

Calculate the numerical second derivative.

Parameters:

x (torch.Tensor) – The input tensor.

Returns:

The numerical second derivative.

Return type:

torch.Tensor

classmethod deriv2_loss(x_true, x_pred)#

Difference between the curvature of the predicted and true trajectories.

Parameters:
  • x_true (torch.Tensor) – The true trajectory.

  • x_pred (torch.Tensor) – The predicted trajectory.

Returns:

The second derivative loss.

Return type:

torch.Tensor

classmethod deriv_loss(x_true, x_pred)#

Difference between the slopes of the predicted and true trajectories.

Parameters:
  • x_true (torch.Tensor) – The true trajectory.

  • x_pred (torch.Tensor) – The predicted trajectory.

Returns:

The derivative loss.

Return type:

torch.Tensor
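The derivative losses compare the slopes (and, for deriv2_loss, the curvatures) of the true and predicted trajectories. A minimal sketch of the idea, assuming forward differences along the time axis and a mean-squared comparison (the actual implementation may differ):

    import torch

    def finite_difference(x: torch.Tensor) -> torch.Tensor:
        # Numerical slope along the time dimension (assumed to be dim=1).
        return torch.diff(x, dim=1)

    def sketch_deriv_loss(x_true: torch.Tensor, x_pred: torch.Tensor) -> torch.Tensor:
        return torch.mean((finite_difference(x_true) - finite_difference(x_pred)) ** 2)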

forward(x, t_range)#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

identity_loss(x)#

Calculate the identity loss (Encoder -> Decoder).

Parameters:

x (torch.Tensor) – The input tensor.

Returns:

The identity loss.

Return type:

torch.Tensor

static l2_loss(x_true, x_pred)#

Calculate the L2 loss.

Parameters:
  • x_true (torch.Tensor) – The true trajectory.

  • x_pred (torch.Tensor) – The predicted trajectory.

Returns:

The L2 loss.

Return type:

torch.Tensor

renormalize_loss_weights(x_true, x_pred)#

Renormalize the loss weights based on the current loss values so that each term contributes according to its configured weight. To be used once after a short burn-in phase.

Parameters:
  • x_true (torch.Tensor) – The true trajectory.

  • x_pred (torch.Tensor) – The predicted trajectory.

total_loss(x_true, x_pred)#

Calculate the total loss based on the loss weights.

Parameters:
  • x_true (torch.Tensor) – The true trajectory.

  • x_pred (torch.Tensor) – The predicted trajectory.

Returns:

The total loss.

Return type:

torch.Tensor
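A sketch of how weighted loss terms can be combined and how a one-off renormalisation after the burn-in phase keeps each term's contribution proportional to its configured weight (the term names, weight values, and the exact rescaling rule are assumptions):

    import torch

    weights = {"l2": 1.0, "deriv": 0.1, "identity": 0.1}   # configured relative importance
    scales = {k: 1.0 for k in weights}                      # updated once after burn-in

    def sketch_total_loss(terms: dict) -> torch.Tensor:
        return sum(weights[k] * scales[k] * terms[k] for k in terms)

    def sketch_renormalize(terms: dict) -> None:
        # Divide by the current magnitude so each weighted term starts near weights[k].
        for k, value in terms.items():
            scales[k] = 1.0 / float(value.detach().clamp_min(1e-12))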

class codes.surrogates.MultiONet(device=None, n_quantities=29, n_timesteps=100, config=None)#

Bases: OperatorNetwork

Class that implements the MultiONet model. It differs from a standard DeepONet in that it has multiple outputs, which are obtained by splitting the outputs of branch and trunk networks and calculating the scalar product of the splits.

Parameters:
  • device (str, optional) – The device to use for training (e.g., ‘cpu’, ‘cuda:0’).

  • n_quantities (int, optional) – The number of quantities.

  • n_timesteps (int, optional) – The number of timesteps.

  • config (dict, optional) – The configuration for the model. The configuration must provide the following information (an example dictionary is sketched at the end of this class entry):

  • trunk_input_size – The input size for the trunk network.

  • hidden_size – The number of hidden units in each layer of the branch and trunk networks.

  • branch_hidden_layers – The number of hidden layers in the branch network.

  • trunk_hidden_layers – The number of hidden layers in the trunk network.

  • output_factor – The factor by which the number of outputs is multiplied.

  • learning_rate – The learning rate for the optimizer.

  • schedule – Whether to use a learning rate schedule.

  • regularization_factor – The regularization factor for the optimizer.

  • masses – The masses for mass conservation loss.

  • massloss_factor – The factor for the mass conservation loss.

Raises:

TypeError – Invalid configuration for MultiONet model.
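The defining feature described above, splitting the branch and trunk outputs and taking a scalar product per split, can be sketched as follows (tensor shapes and the exact splitting rule are illustrative assumptions, not the verbatim implementation):

    import torch

    def sketch_multionet_outputs(branch_out: torch.Tensor,
                                 trunk_out: torch.Tensor,
                                 n_outputs: int) -> torch.Tensor:
        # branch_out and trunk_out have shape (batch, n_outputs * split_size).
        batch = branch_out.shape[0]
        b = branch_out.view(batch, n_outputs, -1)   # split into n_outputs chunks
        t = trunk_out.view(batch, n_outputs, -1)
        return (b * t).sum(dim=-1)                  # scalar product per chunk -> (batch, n_outputs)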

create_dataloader(data, timesteps, batch_size, shuffle=False)#

Create a DataLoader with optimized memory-safe shuffling using pre-allocated buffers and direct slicing.

Parameters:
  • data (np.ndarray) – The data to load. Must have shape (n_samples, n_timesteps, n_quantities).

  • timesteps (np.ndarray) – The timesteps. Shape: (n_timesteps,).

  • batch_size (int) – The batch size.

  • shuffle (bool, optional) – Whether to shuffle the data. Defaults to False.

Returns:

A DataLoader with precomputed batches.

Return type:

DataLoader

create_dataloader_n(data, timesteps, batch_size, shuffle=False)#

Create a DataLoader for the given data.

Parameters:
  • data (np.ndarray) – The data to load. Must have shape (n_samples, n_timesteps, n_quantities).

  • timesteps (np.ndarray) – The timesteps.

  • batch_size (int, optional) – The batch size.

  • shuffle (bool, optional) – Whether to shuffle the data.

epoch(data_loader, criterion, optimizer)#

Perform one training epoch.

Parameters:
  • data_loader (DataLoader) – The DataLoader object containing the training data.

  • criterion (nn.Module) – The loss function.

  • optimizer (torch.optim.Optimizer) – The optimizer.

Returns:

The total loss for the training step.

Return type:

float

epoch_profile(data_loader, criterion, optimizer, profiler=None, profile_batches=0)#

Perform one training epoch, with optional profiling for a limited number of batches.

Parameters:
  • data_loader (DataLoader) – The DataLoader object containing the training data.

  • criterion (nn.Module) – The loss function.

  • optimizer (torch.optim.Optimizer) – The optimizer.

  • profiler (torch.profiler.profile, optional) – The profiler to use for profiling.

  • profile_batches (int, optional) – Number of batches to profile in this epoch.

Returns:

The total loss for the training step.

Return type:

float

fit(train_loader, test_loader, epochs, position=0, description='Training DeepONet', multi_objective=False)#

Train the MultiONet model.

Parameters:
  • train_loader (DataLoader) – The DataLoader object containing the training data.

  • test_loader (DataLoader) – The DataLoader object containing the test data.

  • epochs (int, optional) – The number of epochs to train the model.

  • position (int) – The position of the progress bar.

  • description (str) – The description for the progress bar.

  • multi_objective (bool) – Whether multi-objective optimization is used. If True, trial.report is not used (not supported by Optuna).

Return type:

None

Returns:

None. The training loss, test loss, and MAE are stored in the model.

fit_profile(train_loader, test_loader, epochs, position=0, description='Training DeepONet', profile_enabled=True, profile_save_path='chrome_trace_profile.json', profile_batches=10)#

Train the MultiONet model with optional profiling for a limited scope.

Parameters:
  • train_loader (DataLoader) – The DataLoader object containing the training data.

  • test_loader (DataLoader) – The DataLoader object containing the test data.

  • epochs (int) – The number of epochs to train the model.

  • position (int) – The position of the progress bar.

  • description (str) – The description for the progress bar.

  • profile_enabled (bool) – Whether to enable PyTorch profiling.

  • profile_save_path (str) – Path to save the profiling data.

  • profile_batches (int) – Number of batches to profile in the second epoch.

Return type:

None

Returns:

None. The training loss, test loss, and MAE are stored in the model.

forward(inputs)#

Forward pass for the MultiONet model.

Parameters:

inputs (tuple) – The input tuple containing branch_input, trunk_input, and targets.

Returns:

The model outputs and the targets.

Return type:

tuple

prepare_data(dataset_train, dataset_test, dataset_val, timesteps, batch_size, shuffle=True, dummy_timesteps=True)#

Prepare the data for the predict or fit methods. Note: All datasets must have shape (n_samples, n_timesteps, n_quantities).

Parameters:
  • dataset_train (np.ndarray) – The training data.

  • dataset_test (np.ndarray) – The test data.

  • dataset_val (np.ndarray, optional) – The validation data.

  • timesteps (np.ndarray) – The timesteps.

  • batch_size (int, optional) – The batch size.

  • shuffle (bool, optional) – Whether to shuffle the data.

  • dummy_timesteps (bool, optional) – Whether to create a dummy timestep array.

Returns:

The training, test, and validation DataLoaders.

Return type:

tuple

setup_criterion()#

Utility function to set up the loss function for training.

Returns:

The loss function.

Return type:

callable

setup_optimizer_and_scheduler()#

Utility function to set up the optimizer and scheduler for training.

Parameters:

epochs (int) – The number of epochs to train the model.

Returns:

The optimizer and scheduler.

Return type:

tuple (torch.optim.Optimizer, torch.optim.lr_scheduler._LRScheduler)
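A hedged example of a configuration dictionary containing the keys listed in the class parameters above (all values are illustrative; the model may accept additional keys):

    multionet_config = {
        "trunk_input_size": 1,          # the trunk typically receives the timestep
        "hidden_size": 100,
        "branch_hidden_layers": 4,
        "trunk_hidden_layers": 4,
        "output_factor": 10,
        "learning_rate": 1e-4,
        "schedule": False,
        "regularization_factor": 0.0,
        "masses": None,                 # per-quantity masses enable the mass conservation loss
        "massloss_factor": 0.0,
    }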

class codes.surrogates.ODE(input_shape, output_shape, activation, ode_layers, ode_width, tanh_reg)#

Bases: Module

Neural ODE module that defines the ODE function for latent dynamics.

The network is a feedforward network with a specified number of hidden layers (ode_layers) and uniform width (ode_width). Optionally applies a scaled tanh regularization.

Parameters:
  • input_shape (int) – Input dimension (should match latent_features).

  • output_shape (int) – Output dimension (should match latent_features).

  • activation (nn.Module) – Activation function.

  • ode_layers (int) – Number of hidden layers.

  • ode_width (int) – Number of neurons in each hidden layer.

  • tanh_reg (bool) – Whether to apply scaled tanh regularization.

forward(t, x)#

Forward pass for the ODE network.

Parameters:
  • t (torch.Tensor) – Time tensor (unused in this implementation).

  • x (torch.Tensor) – Input latent state.

Returns:

Output latent state.

Return type:

torch.Tensor
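A minimal instantiation sketch (sizes are illustrative; input_shape and output_shape should both match the latent dimension):

    import torch
    import torch.nn as nn
    from codes.surrogates import ODE

    ode_func = ODE(input_shape=5, output_shape=5, activation=nn.ReLU(),
                   ode_layers=3, ode_width=32, tanh_reg=True)

    t = torch.tensor(0.0)      # time argument (unused by this network)
    z = torch.randn(8, 5)      # batch of latent states
    dz_dt = ode_func(t, z)     # right-hand side of the latent ODE, shape (8, 5)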

class codes.surrogates.Polynomial(degree, dimension)#

Bases: Module

Learnable polynomial model.

degree#

Degree of the polynomial.

Type:

int

dimension#

Dimension of the in- and output.

Type:

int

coef#

Linear layer representing polynomial coefficients.

Type:

nn.Linear

t_matrix#

Time matrix for polynomial evaluation.

Type:

torch.Tensor

forward(t)#

Evaluate the polynomial at given timesteps.

Parameters:

t (torch.Tensor) – Time tensor.

Returns:

Evaluated polynomial.

Return type:

torch.Tensor
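A sketch of what evaluating such a learnable polynomial amounts to: a time matrix of powers of t multiplied by the coefficients stored in the linear layer (the exact construction of t_matrix and the handling of the constant term are assumptions):

    import torch

    def sketch_polynomial_eval(t: torch.Tensor, coef: torch.nn.Linear, degree: int) -> torch.Tensor:
        # t: (n_timesteps,); build powers t^1 ... t^degree as the time matrix.
        t_matrix = torch.stack([t ** (i + 1) for i in range(degree)], dim=-1)
        # coef maps degree -> dimension, so the result has shape (n_timesteps, dimension).
        return coef(t_matrix)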

class codes.surrogates.TrunkNet(input_size, hidden_size, output_size, num_hidden_layers, activation=ReLU())#

Bases: Module

Class that defines the trunk network for the MultiONet model.

Parameters:
  • input_size (int) – The input size for the network.

  • hidden_size (int) – The number of hidden units in each layer.

  • output_size (int) – The number of output units.

  • num_hidden_layers (int) – The number of hidden layers.

  • activation (nn.Module, optional) – The activation function. Defaults to ReLU().

forward(x)#

Forward pass for the trunk network.

Parameters:

x (torch.Tensor) – The input tensor.

Return type:

Tensor