corrct.param_tuning
Aided regularization parameter estimation.
@author: Nicola VIGANÒ, Computational Imaging group, CWI, The Netherlands, and ESRF - The European Synchrotron, Grenoble, France
Module Contents
Classes
- BaseParameterTuning: Base class for parameter tuning classes.
- LCurve: L-curve regularization parameter estimation helper.
- CrossValidation: Cross-validation hyper-parameter estimation class.
Functions
- create_random_test_mask: Create a random mask for cross-validation.
- get_lambda_range: Compute hyper-parameter values within an interval.
- fit_func_min: Parabolic fit of objective function costs for the different hyper-parameter values.
Data
API
- corrct.param_tuning.num_threads
‘round(…)’
- corrct.param_tuning.NDArrayFloat
None
- corrct.param_tuning.create_random_test_mask(data_shape: Sequence[int], test_fraction: float = 0.05, dtype: numpy.typing.DTypeLike = np.float32) corrct.param_tuning.NDArrayFloat [source]
Create a random mask for cross-validation.
Parameters
data_shape : Sequence[int]
    The shape of the data.
test_fraction : float, optional
    The fraction of pixels to mask. The default is 0.05.
dtype : DTypeLike, optional
    The data type of the mask. The default is np.float32.
Returns
data_test_mask : NDArrayFloat
    The pixel mask.
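A minimal NumPy sketch of what such a mask looks like. This is an illustration, not the library's exact implementation; `random_test_mask` is a hypothetical stand-in for `create_random_test_mask`:

```python
import numpy as np

def random_test_mask(data_shape, test_fraction=0.05, dtype=np.float32):
    """Hypothetical stand-in: flag a random ~test_fraction of pixels for leave-out."""
    rng = np.random.default_rng()
    # 1.0 where a pixel belongs to the test (leave-out) set, 0.0 elsewhere
    return (rng.random(data_shape) < test_fraction).astype(dtype)

mask = random_test_mask((64, 64), test_fraction=0.05)
```

The masked pixels are excluded from the fit and used only to score the reconstruction, as in the CrossValidation class below.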
- corrct.param_tuning.get_lambda_range(start: float, end: float, num_per_order: int = 4, aligned_order: bool = True) corrct.param_tuning.NDArrayFloat [source]
Compute hyper-parameter values within an interval.
Parameters
start : float
    First hyper-parameter value.
end : float
    Last hyper-parameter value.
num_per_order : int, optional
    Number of steps per order of magnitude. The default is 4.
aligned_order : bool, optional
    Whether to align the steps to the powers of ten, rather than to the given start value. The default is True.
Returns
NDArrayFloat
    List of hyper-parameter values.
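The returned values are log-spaced, with `num_per_order` steps per decade. A rough sketch of the idea in plain NumPy (the alignment handling of the real function is simplified away here; `lambda_range` is a hypothetical stand-in):

```python
import numpy as np

def lambda_range(start, end, num_per_order=4):
    # Log-spaced grid with num_per_order steps per order of magnitude
    log_start, log_end = np.log10(start), np.log10(end)
    num = int(round((log_end - log_start) * num_per_order)) + 1
    return np.logspace(log_start, log_end, num)

lams = lambda_range(1e-3, 1e1, num_per_order=4)  # 4 decades -> 17 values
```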
- corrct.param_tuning.fit_func_min(hp_vals: Union[numpy.typing.ArrayLike, corrct.param_tuning.NDArrayFloat], f_vals: corrct.param_tuning.NDArrayFloat, f_stds: Optional[corrct.param_tuning.NDArrayFloat] = None, scale: Literal['linear', 'log'] = 'log', verbose: bool = False, plot_result: bool = False) tuple[float, float] [source]
Parabolic fit of objective function costs for the different hyper-parameter values.
Parameters
hp_vals : Union[ArrayLike, NDArrayFloat]
    Hyper-parameter values.
f_vals : NDArrayFloat
    Objective function costs of each hyper-parameter value.
f_stds : NDArrayFloat, optional
    Objective function cost standard deviations of each hyper-parameter value. It is only used for plotting purposes. The default is None.
scale : str, optional
    Scale of the fit. Options are: "log" | "linear". The default is "log".
verbose : bool, optional
    Whether to produce verbose output. The default is False.
plot_result : bool, optional
    Whether to plot the result. The default is False.
Returns
min_hp_val : float
    Expected hyper-parameter value of the fitted minimum.
min_f_val : float
    Expected objective function cost of the fitted minimum.
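The core idea of the parabolic fit, sketched on synthetic costs with the default log scale (the library routine adds verbosity, plotting, and robustness on top of this):

```python
import numpy as np

hp_vals = np.logspace(-3.0, 0.0, 7)     # hyper-parameter grid
log_hp = np.log10(hp_vals)
f_vals = (log_hp + 1.5) ** 2 + 0.1      # synthetic costs, minimum at 10**-1.5

# Fit a parabola in log10(hyper-parameter) and take its vertex
a, b, c = np.polyfit(log_hp, f_vals, 2)
min_hp_val = 10.0 ** (-b / (2.0 * a))   # abscissa of the vertex, back to linear scale
min_f_val = c - b ** 2 / (4.0 * a)      # ordinate of the vertex
```

Because the synthetic costs are exactly parabolic in log10, the fitted minimum recovers 10**-1.5 and a cost of 0.1.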
- class corrct.param_tuning.BaseParameterTuning(dtype: numpy.typing.DTypeLike = np.float32, parallel_eval: bool = True, verbose: bool = False, plot_result: bool = False)[source]
Bases:
abc.ABC
Base class for parameter tuning classes.
Initialization
Initialize a base helper class.
Parameters
dtype : DTypeLike, optional
    Type of the data. The default is np.float32.
parallel_eval : bool, optional
    Whether to evaluate results in parallel. The default is True.
verbose : bool, optional
    Whether to produce verbose output. The default is False.
plot_result : bool, optional
    Whether to plot the results. The default is False.
- _solver_spawning_function: Optional[Callable]
None
- _solver_calling_function: Optional[Callable[[Any], tuple[corrct.param_tuning.NDArrayFloat, corrct.solvers.SolutionInfo]]]
None
- property solver_spawning_function: Callable
Return the locally stored solver spawning function.
- property solver_calling_function: Callable[[Any, ...], tuple[corrct.param_tuning.NDArrayFloat, corrct.solvers.SolutionInfo]]
Return the locally stored solver calling function.
- static get_lambda_range(start: float, end: float, num_per_order: int = 4) corrct.param_tuning.NDArrayFloat [source]
Compute regularization weights within an interval.
Parameters
start : float
    First regularization weight.
end : float
    Last regularization weight.
num_per_order : int, optional
    Number of steps per order of magnitude. The default is 4.
Returns
NDArrayFloat
    List of regularization weights.
- compute_reconstruction_and_loss(hp_val: float, *args: Any, **kwds: Any) tuple[float, corrct.param_tuning.NDArrayFloat] [source]
Compute objective function cost for the given hyper-parameter value.
Parameters
hp_val : float
    Hyper-parameter value.
*args : Any
    Optional positional arguments for the reconstruction.
**kwds : Any
    Optional keyword arguments for the reconstruction.
Returns
cost : float
    Cost of the given regularization weight.
rec : NDArray
    Reconstruction at the given weight.
- compute_reconstruction_error(hp_vals: Union[numpy.typing.ArrayLike, corrct.param_tuning.NDArrayFloat], gnd_truth: corrct.param_tuning.NDArrayFloat) tuple[corrct.param_tuning.NDArrayFloat, corrct.param_tuning.NDArrayFloat] [source]
Compute the reconstruction error for each hyper-parameter value against the ground truth.
Parameters
hp_vals : Union[ArrayLike, NDArrayFloat]
    List of hyper-parameter values.
gnd_truth : NDArrayFloat
    Expected reconstruction.
Returns
err_l1 : NDArrayFloat
    l1-norm errors for each reconstruction.
err_l2 : NDArrayFloat
    l2-norm errors for each reconstruction.
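In essence, this compares each reconstruction against the ground truth under the l1 and l2 norms. A minimal sketch of the error computation alone (the library method also runs the solver to produce the reconstructions, which is omitted here; `recs` is a hypothetical list of reconstructions):

```python
import numpy as np

gnd_truth = np.ones((8, 8))
recs = [gnd_truth + 0.1, gnd_truth.copy()]  # stand-ins for reconstructions

# One l1 and one l2 error per reconstruction
err_l1 = np.array([np.linalg.norm((r - gnd_truth).ravel(), 1) for r in recs])
err_l2 = np.array([np.linalg.norm((r - gnd_truth).ravel(), 2) for r in recs])
```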
- abstract compute_loss_values(hp_vals: Union[numpy.typing.ArrayLike, corrct.param_tuning.NDArrayFloat]) corrct.param_tuning.NDArrayFloat [source]
Compute the objective function costs for a list of hyper-parameter values.
Parameters
hp_vals : Union[ArrayLike, NDArrayFloat]
    List of hyper-parameter values.
Returns
NDArrayFloat
    Objective function cost for each hyper-parameter value.
- class corrct.param_tuning.LCurve(loss_function: Callable, data_dtype: numpy.typing.DTypeLike = np.float32, parallel_eval: bool = True, verbose: bool = False, plot_result: bool = False)[source]
Bases:
corrct.param_tuning.BaseParameterTuning
L-curve regularization parameter estimation helper.
Initialization
Create an L-curve regularization parameter estimation helper.
Parameters
loss_function : Callable
    The loss function for the computation of the L-curve values.
data_dtype : DTypeLike, optional
    Type of the input data. The default is np.float32.
parallel_eval : bool, optional
    Compute loss and error values in parallel. The default is True.
verbose : bool, optional
    Print verbose output. The default is False.
plot_result : bool, optional
    Plot results. The default is False.
Raises
ValueError
    In case `loss_function` is not callable or does not expose at least one argument.
- compute_loss_values(hp_vals: Union[numpy.typing.ArrayLike, corrct.param_tuning.NDArrayFloat]) corrct.param_tuning.NDArrayFloat [source]
Compute objective function values for all hyper-parameter values.
Parameters
hp_vals : Union[ArrayLike, NDArrayFloat]
    Hyper-parameter values to use for computing the different objective function values.
Returns
f_vals : NDArrayFloat
    Objective function cost for each hyper-parameter value.
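To give a flavor of the loss values an L-curve scan produces, here is a self-contained Tikhonov example that does not use corrct: the solver call is a direct normal-equations solve, and the loss function is the solution norm, one common L-curve axis. All names here are illustrative, not the library's API:

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.normal(size=(20, 10))            # toy forward operator
x_true = rng.normal(size=10)
b = A @ x_true + 0.01 * rng.normal(size=20)

hp_vals = np.logspace(-4, 1, 11)
f_vals = []
for lam in hp_vals:
    # Tikhonov-regularized solution for this hyper-parameter value
    x = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ b)
    f_vals.append(np.linalg.norm(x))     # loss_function: solution norm
```

As the regularization weight grows, the solution norm shrinks monotonically; the L-curve criterion balances it against the data misfit.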
- class corrct.param_tuning.CrossValidation(data_shape: Sequence[int], dtype: numpy.typing.DTypeLike = np.float32, cv_fraction: float = 0.1, num_averages: int = 7, parallel_eval: bool = True, verbose: bool = False, plot_result: bool = False)[source]
Bases:
corrct.param_tuning.BaseParameterTuning
Cross-validation hyper-parameter estimation class.
Initialization
Create a cross-validation hyper-parameter estimation helper.
Parameters
data_shape : Sequence[int]
    Shape of the projection data.
dtype : DTypeLike, optional
    Type of the input data. The default is np.float32.
cv_fraction : float, optional
    Fraction of detector points to use for the leave-out set. The default is 0.1.
num_averages : int, optional
    Number of random leave-out sets to average over. The default is 7.
parallel_eval : bool, optional
    Compute loss and error values in parallel. The default is True.
verbose : bool, optional
    Print verbose output. The default is False.
plot_result : bool, optional
    Plot results. The default is False.
- compute_loss_values(hp_vals: Union[numpy.typing.ArrayLike, corrct.param_tuning.NDArrayFloat]) tuple[corrct.param_tuning.NDArrayFloat, corrct.param_tuning.NDArrayFloat, corrct.param_tuning.NDArrayFloat] [source]
Compute objective function values for all requested hyper-parameter values.
Parameters
hp_vals : Union[ArrayLike, NDArrayFloat]
    Hyper-parameter values (e.g. regularization weight) to evaluate.
Returns
f_avgs : NDArrayFloat
    Average objective function costs for each regularization weight.
f_stds : NDArrayFloat
    Standard deviation of objective function costs for each regularization weight.
f_vals : NDArrayFloat
    Objective function costs for each regularization weight.
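A schematic of the averaging over random leave-out sets. The solver call is replaced by a hypothetical placeholder `reconstruct`; only the bookkeeping mirrors the method:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(32, 32))
hp_vals = np.logspace(-2, 0, 5)
num_averages = 7

def reconstruct(masked_data, lam):
    # Hypothetical placeholder for a regularized solver run with weight lam
    return masked_data / (1.0 + lam)

f_vals = np.empty((num_averages, hp_vals.size))
for ii in range(num_averages):
    test_mask = rng.random(data.shape) < 0.1     # cv_fraction of leave-out pixels
    for jj, lam in enumerate(hp_vals):
        rec = reconstruct(data * ~test_mask, lam)
        # Cost: data misfit evaluated on the leave-out pixels only
        f_vals[ii, jj] = np.linalg.norm((rec - data)[test_mask])

f_avgs = f_vals.mean(axis=0)
f_stds = f_vals.std(axis=0)
```

The averages and standard deviations can then be passed to fit_loss_min below to locate the expected minimum.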
- fit_loss_min(hp_vals: Union[numpy.typing.ArrayLike, corrct.param_tuning.NDArrayFloat], f_vals: corrct.param_tuning.NDArrayFloat, f_stds: Optional[corrct.param_tuning.NDArrayFloat] = None, scale: Literal['linear', 'log'] = 'log') tuple[float, float] [source]
Parabolic fit of objective function costs for the different hyper-parameter values.
Parameters
hp_vals : Union[ArrayLike, NDArrayFloat]
    Hyper-parameter values.
f_vals : NDArrayFloat
    Objective function costs of each hyper-parameter value.
f_stds : NDArrayFloat, optional
    Objective function cost standard deviations of each hyper-parameter value. It is only used for plotting purposes. The default is None.
scale : str, optional
    Scale of the fit. Options are: "log" | "linear". The default is "log".
Returns
min_hp_val : float
    Expected hyper-parameter value of the fitted minimum.
min_f_val : float
    Expected objective function cost of the fitted minimum.