corrct package
Subpackages
- corrct.alignment package
- corrct.physics package
- corrct.processing package
Submodules
corrct.attenuation module
Incident beam and emitted radiation attenuation support.
@author: Nicola VIGANÒ, Computational Imaging group, CWI, The Netherlands, and ESRF - The European Synchrotron, Grenoble, France
- class corrct.attenuation.AttenuationVolume(incident_local: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy.floating]] | None, emitted_local: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy.floating]] | None, angles_rot_rad: ~numpy._typing._array_like._SupportsArray[~numpy.dtype[~typing.Any]] | ~numpy._typing._nested_sequence._NestedSequence[~numpy._typing._array_like._SupportsArray[~numpy.dtype[~typing.Any]]] | bool | int | float | complex | str | bytes | ~numpy._typing._nested_sequence._NestedSequence[bool | int | float | complex | str | bytes], angles_det_rad: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy.floating]] | ~numpy._typing._array_like._SupportsArray[~numpy.dtype[~typing.Any]] | ~numpy._typing._nested_sequence._NestedSequence[~numpy._typing._array_like._SupportsArray[~numpy.dtype[~typing.Any]]] | bool | int | float | complex | str | bytes | ~numpy._typing._nested_sequence._NestedSequence[bool | int | float | complex | str | bytes] = 1.5707963267948966, dtype: ~numpy.dtype[~typing.Any] | None | type[~typing.Any] | ~numpy._typing._dtype_like._SupportsDType[~numpy.dtype[~typing.Any]] | str | tuple[~typing.Any, int] | tuple[~typing.Any, ~typing.SupportsIndex | ~collections.abc.Sequence[~typing.SupportsIndex]] | list[~typing.Any] | ~numpy._typing._dtype_like._DTypeDict | tuple[~typing.Any, ~typing.Any] = <class 'numpy.float32'>)[source]
Bases:
object
Attenuation volume computation class.
- angles_det_rad: ndarray[Any, dtype[floating]]
- angles_rot_rad: ndarray[Any, dtype[floating]]
- compute_maps(use_multithreading: bool = True, verbose: bool = True) None [source]
Compute the correction maps for each angle.
- Parameters:
use_multithreading (bool, optional) – Use multi-threading for computing the attenuation maps. The default is True.
verbose (bool, optional) – Show verbose output. The default is True.
- dtype: dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any]
- emitted_local: ndarray[Any, dtype[floating]] | None
- get_maps(roi: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, rot_ind: int | slice | Sequence[int] | ndarray[Any, dtype[integer]] | None = None, det_ind: int | slice | Sequence[int] | ndarray[Any, dtype[integer]] | None = None) ndarray[Any, dtype[_ScalarType_co]] [source]
Return the attenuation maps.
- Parameters:
roi (ArrayLike, optional) – The region-of-interest to select. The default is None.
rot_ind (int, optional) – A specific rotation index, if only one is to be desired. The default is None.
det_ind (int, optional) – A specific detector index, if only one is to be desired. The default is None.
- Returns:
The attenuation maps.
- Return type:
NDArray
- get_projector_args(roi: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, rot_ind: int | slice | Sequence[int] | ndarray[Any, dtype[integer]] | None = None, det_ind: int | slice | Sequence[int] | ndarray[Any, dtype[integer]] | None = None) dict[str, ndarray[Any, dtype[_ScalarType_co]]] [source]
Return the projector arguments.
- Parameters:
roi (ArrayLike, optional) – The region-of-interest to select. The default is None.
rot_ind (int, optional) – A specific rotation index, if only one is to be desired. The default is None.
det_ind (int, optional) – A specific detector index, if only one is to be desired. The default is None.
- Returns:
A dictionary containing the attenuation maps and the detector angle.
- Return type:
dict[str, NDArray]
- incident_local: ndarray[Any, dtype[floating]] | None
- maps: ndarray[Any, dtype[_ScalarType_co]]
- plot_map(ax: Axes, rot_ind: int, det_ind: int = 0, slice_ind: int | None = None, axes: Sequence[int] | ndarray[Any, dtype[integer]] = (-2, -1)) Sequence[float] [source]
Plot the requested attenuation map.
- Parameters:
ax (matplotlib axes) – The axes where to plot.
rot_ind (int) – Rotation angle index.
det_ind (int, optional) – Detector angle index. The default is 0.
slice_ind (Optional[int], optional) – Volume slice index (for 3D volumes). The default is None.
axes (Sequence[int] | NDArray, optional) – Axes of the slice. The default is (-2, -1).
- Returns:
The extent of the axes plot (min-max coords).
- Return type:
Sequence[float]
- Raises:
ValueError – In case a slice index is not passed for a 3D volume.
- vol_shape_zyx: ndarray[Any, dtype[_ScalarType_co]]
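A minimal usage sketch (not part of the generated documentation above): hypothetical, uniform incident/emitted attenuation coefficients are assumed for a small 2D volume, the per-angle correction maps are computed, and the arguments expected by the attenuation-aware projectors are extracted. The coefficient values and volume size are made up for illustration.
```python
import numpy as np
from corrct.attenuation import AttenuationVolume

# Hypothetical, uniform linear attenuation coefficients (1/pixel) for a 64x64 volume.
vol_shape = (64, 64)
att_in = np.full(vol_shape, 1e-2, dtype=np.float32)   # incident beam attenuation
att_out = np.full(vol_shape, 2e-2, dtype=np.float32)  # emitted radiation attenuation

angles_rot_rad = np.deg2rad(np.arange(0, 180.0))

att_vol = AttenuationVolume(
    incident_local=att_in,
    emitted_local=att_out,
    angles_rot_rad=angles_rot_rad,
    angles_det_rad=np.pi / 2,  # detector at 90 degrees from the incident beam
)
att_vol.compute_maps(verbose=False)

# Keyword arguments for an attenuation-aware projector, and the raw maps themselves.
prj_args = att_vol.get_projector_args()
maps = att_vol.get_maps()
print(sorted(prj_args.keys()), maps.shape)
```
The dictionary returned by get_projector_args() can then be unpacked (e.g. **prj_args) into the constructor of an attenuation-corrected projector such as corrct.projectors.ProjectorAttenuationXRF.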
corrct.data_terms module
Data fidelity classes.
@author: Nicola VIGANÒ, Computational Imaging group, CWI, The Netherlands, and ESRF - The European Synchrotron, Grenoble, France
- class corrct.data_terms.DataFidelityBase(background: float | ndarray[Any, dtype[floating]] | None = None)[source]
Bases:
ABC
Define the DataFidelity classes interface.
- abstract apply_proximal(dual: ndarray[Any, dtype[floating]]) None [source]
Apply the proximal in the dual domain.
- Parameters:
dual (NDArrayFloat) – The dual solution
- assign_data(data: float | ndarray[Any, dtype[floating]] | None = None, sigma: float | ndarray[Any, dtype[floating]] = 1.0) None [source]
Initialize the data bias, and sigma of the data term.
- Parameters:
data (Union[float, NDArrayFloat, None], optional) – The data bias, by default None
sigma (Union[float, NDArrayFloat], optional) – The sigma, by default 1.0
- background: ndarray[Any, dtype[floating]] | None
- compute_data_dual_dot(dual: ndarray[Any, dtype[floating]], mask: ndarray[Any, dtype[floating]] | None = None) float [source]
Compute the dot product of the data bias and the dual solution.
- Parameters:
dual (NDArrayFloat) – The dual solution.
mask (Union[NDArrayFloat, None], optional) – Mask of the dual domain, by default None
- Returns:
The dot product between the data bias and the dual solution
- Return type:
float
- abstract compute_primal_dual_gap(proj_primal: ndarray[Any, dtype[floating]], dual: ndarray[Any, dtype[floating]], mask: ndarray[Any, dtype[floating]] | None = None) float [source]
Compute the primal-dual gap of the current solution.
- Parameters:
proj_primal (NDArrayFloat) – The projected primal solution (in the dual domain)
dual (NDArrayFloat) – The dual solution
mask (Union[NDArrayFloat, None], optional) – Mask in the dual domain, by default None
- Returns:
The primal-dual gap
- Return type:
float
- compute_residual(proj_primal: ndarray[Any, dtype[floating]], mask: ndarray[Any, dtype[floating]] | None = None) ndarray[Any, dtype[floating]] [source]
Compute the residual in the dual domain.
- Parameters:
proj_primal (NDArrayFloat) – Projection of the primal solution
mask (Union[NDArrayFloat, None], optional) – Mask of the dual domain, by default None
- Returns:
The residual
- Return type:
NDArrayFloat
- abstract compute_residual_norm(dual: ndarray[Any, dtype[floating]]) float [source]
Compute the norm of the residual.
- Parameters:
dual (NDArrayFloat) – The residual in the dual domain.
- Returns:
The residual norm.
- Return type:
float
- data: ndarray[Any, dtype[floating]] | None
- info() str [source]
Return the data-fidelity info.
- Returns:
Data fidelity info string.
- Return type:
str
- initialize_dual() ndarray[Any, dtype[floating]] [source]
Initialize the dual domain solution.
- Returns:
A zero array with the dimensions of the dual domain.
- Return type:
NDArrayFloat
- lower() str [source]
Return the lower case name of the data-fidelity.
- Returns:
Lower case string name of the data-fidelity.
- Return type:
str
- sigma: float | ndarray[Any, dtype[floating]]
- sigma_data: ndarray[Any, dtype[floating]] | None
- class corrct.data_terms.DataFidelity_Huber(local_error, background=None, l2_axis=None)[source]
Bases:
DataFidelityBase
Huber-norm data-fidelity class. Given a parameter a: l2-norm for |x| < a, and l1-norm for |x| > a.
- apply_proximal(dual)[source]
Apply the proximal in the dual domain.
- Parameters:
dual (NDArrayFloat) – The dual solution
- assign_data(data, sigma=1.0)[source]
Initialize the data bias, and sigma of the data term.
- Parameters:
data (Union[float, NDArrayFloat, None], optional) – The data bias, by default None
sigma (Union[float, NDArrayFloat], optional) – The sigma, by default 1.0
- compute_primal_dual_gap(proj_primal, dual, mask=None)[source]
Compute the primal-dual gap of the current solution.
- Parameters:
proj_primal (NDArrayFloat) – The projected primal solution (in the dual domain)
dual (NDArrayFloat) – The dual solution
mask (Union[NDArrayFloat, None], optional) – Mask in the dual domain, by default None
- Returns:
The primal-dual gap
- Return type:
float
- class corrct.data_terms.DataFidelity_KL(background: float | ndarray[Any, dtype[floating]] | None = None)[source]
Bases:
DataFidelityBase
Kullback-Leibler data-fidelity class.
- apply_proximal(dual)[source]
Apply the proximal in the dual domain.
- Parameters:
dual (NDArrayFloat) – The dual solution
- compute_primal_dual_gap(proj_primal, dual, mask=None)[source]
Compute the primal-dual gap of the current solution.
- Parameters:
proj_primal (NDArrayFloat) – The projected primal solution (in the dual domain)
dual (NDArrayFloat) – The dual solution
mask (Union[NDArrayFloat, None], optional) – Mask in the dual domain, by default None
- Returns:
The primal-dual gap
- Return type:
float
- class corrct.data_terms.DataFidelity_l1(background=None)[source]
Bases:
DataFidelityBase
l1-norm data-fidelity class.
- apply_proximal(dual, weight=1.0)[source]
Apply the proximal in the dual domain.
- Parameters:
dual (NDArrayFloat) – The dual solution
- compute_primal_dual_gap(proj_primal, dual, mask=None)[source]
Compute the primal-dual gap of the current solution.
- Parameters:
proj_primal (NDArrayFloat) – The projected primal solution (in the dual domain)
dual (NDArrayFloat) – The dual solution
mask (Union[NDArrayFloat, None], optional) – Mask in the dual domain, by default None
- Returns:
The primal-dual gap
- Return type:
float
- class corrct.data_terms.DataFidelity_l12(background=None, l2_axis=0)[source]
Bases:
DataFidelity_l1
l12-norm data-fidelity class.
- class corrct.data_terms.DataFidelity_l1b(local_error, background=None)[source]
Bases:
DataFidelity_l1
l1-norm ball data-fidelity class.
- class corrct.data_terms.DataFidelity_l2(background: float | ndarray[Any, dtype[floating]] | None = None)[source]
Bases:
DataFidelityBase
l2-norm data-fidelity class.
- apply_proximal(dual: ndarray[Any, dtype[floating]]) None [source]
Apply the proximal in the dual domain.
- Parameters:
dual (NDArrayFloat) – The dual solution
- assign_data(data: float | ndarray[Any, dtype[floating]] | None = None, sigma: float | ndarray[Any, dtype[floating]] = 1.0) None [source]
Initialize the data bias, and sigma of the data term.
- Parameters:
data (Union[float, NDArrayFloat, None], optional) – The data bias, by default None
sigma (Union[float, NDArrayFloat], optional) – The sigma, by default 1.0
- compute_primal_dual_gap(proj_primal: ndarray[Any, dtype[floating]], dual: ndarray[Any, dtype[floating]], mask: ndarray[Any, dtype[floating]] | None = None) float [source]
Compute the primal-dual gap of the current solution.
- Parameters:
proj_primal (NDArrayFloat) – The projected primal solution (in the dual domain)
dual (NDArrayFloat) – The dual solution
mask (Union[NDArrayFloat, None], optional) – Mask in the dual domain, by default None
- Returns:
The primal-dual gap
- Return type:
float
- class corrct.data_terms.DataFidelity_l2b(local_error: float | ndarray[Any, dtype[floating]], background: float | ndarray[Any, dtype[floating]] | None = None)[source]
Bases:
DataFidelity_l2
l2-norm ball data-fidelity class.
- apply_proximal(dual: ndarray[Any, dtype[floating]]) None [source]
Apply the proximal in the dual domain.
- Parameters:
dual (NDArrayFloat) – The dual solution
- assign_data(data: float | ndarray[Any, dtype[floating]] | None, sigma: float | ndarray[Any, dtype[floating]] = 1.0)[source]
Initialize the data bias, and sigma of the data term.
- Parameters:
data (Union[float, NDArrayFloat, None], optional) – The data bias, by default None
sigma (Union[float, NDArrayFloat], optional) – The sigma, by default 1.0
- compute_primal_dual_gap(proj_primal: ndarray[Any, dtype[floating]], dual: ndarray[Any, dtype[floating]], mask: ndarray[Any, dtype[floating]] | None = None) float [source]
Compute the primal-dual gap of the current solution.
- Parameters:
proj_primal (NDArrayFloat) – The projected primal solution (in the dual domain)
dual (NDArrayFloat) – The dual solution
mask (Union[NDArrayFloat, None], optional) – Mask in the dual domain, by default None
- Returns:
The primal-dual gap
- Return type:
float
- compute_residual(proj_primal: ndarray[Any, dtype[floating]], mask: ndarray[Any, dtype[floating]] | None = None) ndarray[Any, dtype[floating]] [source]
Compute the residual in the dual domain.
- Parameters:
proj_primal (NDArrayFloat) – Projection of the primal solution
mask (Union[NDArrayFloat, None], optional) – Mask of the dual domain, by default None
- Returns:
The residual
- Return type:
NDArrayFloat
- class corrct.data_terms.DataFidelity_ln(background=None, ln_axes: ~typing.Sequence[int] = (1, -1), spectral_norm: ~corrct.data_terms.DataFidelityBase = <corrct.data_terms.DataFidelity_l1 object>)[source]
Bases:
DataFidelityBase
Nuclear-norm data-fidelity class.
- apply_proximal(dual)[source]
Apply the proximal in the dual domain.
- Parameters:
dual (NDArrayFloat) – The dual solution
- compute_primal_dual_gap(proj_primal, dual, mask=None)[source]
Compute the primal-dual gap of the current solution.
- Parameters:
proj_primal (NDArrayFloat) – The projected primal solution (in the dual domain)
dual (NDArrayFloat) – The dual solution
mask (Union[NDArrayFloat, None], optional) – Mask in the dual domain, by default None
- Returns:
The primal-dual gap
- Return type:
float
- class corrct.data_terms.DataFidelity_wl2(weights: float | ndarray[Any, dtype[floating]], background: float | ndarray[Any, dtype[floating]] | None = None)[source]
Bases:
DataFidelity_l2
Weighted l2-norm data-fidelity class.
- assign_data(data: float | ndarray[Any, dtype[floating]] | None, sigma: float | ndarray[Any, dtype[floating]] = 1.0)[source]
Initialize the data bias, and sigma of the data term.
- Parameters:
data (Union[float, NDArrayFloat, None], optional) – The data bias, by default None
sigma (Union[float, NDArrayFloat], optional) – The sigma, by default 1.0
- compute_residual(proj_primal, mask: float | ndarray[Any, dtype[floating]] | None = None)[source]
Compute the residual in the dual domain.
- Parameters:
proj_primal (NDArrayFloat) – Projection of the primal solution
mask (Union[NDArrayFloat, None], optional) – Mask of the dual domain, by default None
- Returns:
The residual
- Return type:
NDArrayFloat
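A minimal sketch of how a data-fidelity term is typically exercised (the synthetic data and the call order are assumptions, not taken from the documentation above): assign the measured data, initialize the dual variable, compute a residual, and apply the proximal mapping in place.
```python
import numpy as np
from corrct import data_terms

rng = np.random.default_rng(0)
data = rng.normal(size=(16, 32)).astype(np.float32)

fid = data_terms.DataFidelity_l2()
fid.assign_data(data, sigma=1.0)

dual = fid.initialize_dual()              # zeros, with the shape of the dual domain
dual += fid.compute_residual(data + 0.1)  # residual of a slightly biased "projection"
fid.apply_proximal(dual)                  # proximal mapping, applied in place

print(fid.info(), fid.lower(), fid.compute_residual_norm(dual))
```
The other data-fidelity classes (KL, l1, Huber, weighted l2, etc.) expose the same interface, so they can be swapped in without changing the surrounding solver code.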
corrct.denoisers module
Advanced denoising methods.
@author: Nicola VIGANÒ, Computational Imaging group, CWI, The Netherlands, and ESRF - The European Synchrotron, Grenoble, France
- corrct.denoisers.denoise_image(img: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]], reg_weight: float | ~numpy._typing._array_like._SupportsArray[~numpy.dtype[~typing.Any]] | ~numpy._typing._nested_sequence._NestedSequence[~numpy._typing._array_like._SupportsArray[~numpy.dtype[~typing.Any]]] | bool | int | complex | str | bytes | ~numpy._typing._nested_sequence._NestedSequence[bool | int | float | complex | str | bytes] | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] = 0.01, psf: ~numpy._typing._array_like._SupportsArray[~numpy.dtype[~typing.Any]] | ~numpy._typing._nested_sequence._NestedSequence[~numpy._typing._array_like._SupportsArray[~numpy.dtype[~typing.Any]]] | bool | int | float | complex | str | bytes | ~numpy._typing._nested_sequence._NestedSequence[bool | int | float | complex | str | bytes] | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, pix_weights: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, iterations: int = 250, regularizer: ~typing.Callable = <function <lambda>>, lower_limit: float | None = None, verbose: bool = False) ndarray[Any, dtype[_ScalarType_co]] [source]
Denoise an image.
Image denoiser based on (flat or weighted) least-squares, with wavelet minimization regularization. The weighted least-squares requires the local pixel-wise weights. It can be used to denoise sinograms and projections.
- Parameters:
img (NDArray) – The image or sinogram to denoise.
reg_weight (Union[float, ArrayLike, NDArray], optional) – Weight of the regularization term. The default is 1e-2. If a sequence / array is passed, all the different values will be tested. The one minimizing the error over the cross-validation set will be chosen and returned.
psf (Union[ArrayLike, NDArray, None], optional) – The point spread function (PSF) to take into account in the denoising. The default is None.
pix_weights (Union[ArrayLike, NDArray, None], optional) – The local weights of the pixels, for a weighted least-squares minimization. If None, a standard least-squares minimization is performed. The default is None.
iterations (int, optional) – Number of iterations. The default is 250.
regularizer (Callable, optional) – The one-argument constructor of a regularizer. The default is the DWL regularizer.
lower_limit (Optional[float], optional) – Lower clipping limit of the image. The default is None.
verbose (bool, optional) – Turn verbosity on. The default is False.
- Returns:
Denoised image or sinogram.
- Return type:
NDArray
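A minimal usage sketch with a synthetic test image: a short list of regularization weights is passed, so the best-performing one is selected via the built-in cross-validation. The image, noise level, and candidate weights are illustrative only.
```python
import numpy as np
from corrct import denoisers

rng = np.random.default_rng(1)
img_clean = np.zeros((64, 64), dtype=np.float32)
img_clean[16:48, 16:48] = 1.0
img_noisy = img_clean + rng.normal(scale=0.2, size=img_clean.shape).astype(np.float32)

img_den = denoisers.denoise_image(
    img_noisy,
    reg_weight=[1e-3, 1e-2, 1e-1],  # all values are tested, the best one is used
    iterations=50,
    lower_limit=0.0,
    verbose=False,
)
print(img_den.shape, img_den.min())
```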
corrct.filters module
Filtered back-projection filters.
@author: Nicola VIGANÒ, Computational Imaging group, CWI, The Netherlands, and ESRF - The European Synchrotron, Grenoble, France
- class corrct.filters.BasisOptions[source]
Bases:
ABC, Mapping
Options for the different types of bases.
- class corrct.filters.BasisOptionsBlocks(binning_start: int | None = 2, binning_type: str = 'exponential', order: int = 1, normalized: bool = True)[source]
Bases:
BasisOptions
Options for the block bases.
- binning_start: int | None = 2
- binning_type: str = 'exponential'
- normalized: bool = True
- order: int = 1
- class corrct.filters.BasisOptionsWavelets(wavelet: str = 'bior2.2', level: int = 5, norm: float = 1.0)[source]
Bases:
BasisOptions
Options for the wavelet bases.
- level: int = 5
- norm: float = 1.0
- wavelet: str = 'bior2.2'
- class corrct.filters.Filter(fbp_filter: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | ndarray[Any, dtype[floating]] | None, pad_mode: str, use_rfft: bool, dtype: dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any])[source]
Bases:
ABC
Base FBP filter.
- apply_filter(data_wu: ndarray[Any, dtype[_ScalarType_co]], fbp_filter: ndarray[Any, dtype[_ScalarType_co]] | None = None) ndarray[Any, dtype[_ScalarType_co]] [source]
Apply the filter to the data_wu.
- Parameters:
data_wu (NDArray) – The sinogram.
fbp_filter (NDArray, optional) – The filter to use. The default is None
- Returns:
The filtered sinogram.
- Return type:
NDArray
- abstract compute_filter(data_wu: ndarray[Any, dtype[_ScalarType_co]]) None [source]
Compute the FBP filter for the given data.
- Parameters:
data_wu (NDArray) – The reference sinogram / projection data.
- dtype: dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any]
- fbp_filter: ndarray[Any, dtype[floating]]
- property filter_fourier: ndarray[Any, dtype[floating]]
Fourier representation of the filter.
- Returns:
The filter in Fourier.
- Return type:
NDArray[np.floating]
- property filter_real: ndarray[Any, dtype[floating]]
Real-space representation of the filter.
- Returns:
The filter in real-space.
- Return type:
NDArray[np.floating]
- get_padding_size(data_wu_shape: Sequence[int]) int [source]
Compute the projection padding size for a linear convolution.
- Parameters:
data_wu_shape (Sequence[int]) – The shape of the data
- Returns:
The padding size of the last dimension.
- Return type:
int
- property num_filters: int
- pad_mode: str
- to_fourier(data_wu: ndarray[Any, dtype[_ScalarType_co]]) ndarray[Any, dtype[_ScalarType_co]] [source]
- use_rfft: bool
- class corrct.filters.FilterCustom(fbp_filter: ~numpy._typing._array_like._SupportsArray[~numpy.dtype[~typing.Any]] | ~numpy._typing._nested_sequence._NestedSequence[~numpy._typing._array_like._SupportsArray[~numpy.dtype[~typing.Any]]] | bool | int | float | complex | str | bytes | ~numpy._typing._nested_sequence._NestedSequence[bool | int | float | complex | str | bytes] | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy.floating]] | None, pad_mode: str = 'constant', use_rfft: bool = True, dtype: ~numpy.dtype[~typing.Any] | None | type[~typing.Any] | ~numpy._typing._dtype_like._SupportsDType[~numpy.dtype[~typing.Any]] | str | tuple[~typing.Any, int] | tuple[~typing.Any, ~typing.SupportsIndex | ~collections.abc.Sequence[~typing.SupportsIndex]] | list[~typing.Any] | ~numpy._typing._dtype_like._DTypeDict | tuple[~typing.Any, ~typing.Any] = <class 'numpy.float32'>)[source]
Bases:
Filter
Custom FBP filter.
- class corrct.filters.FilterFBP(filter_name: str = 'ramp', pad_mode: str = 'constant', use_rfft: bool = True, dtype: ~numpy.dtype[~typing.Any] | None | type[~typing.Any] | ~numpy._typing._dtype_like._SupportsDType[~numpy.dtype[~typing.Any]] | str | tuple[~typing.Any, int] | tuple[~typing.Any, ~typing.SupportsIndex | ~collections.abc.Sequence[~typing.SupportsIndex]] | list[~typing.Any] | ~numpy._typing._dtype_like._DTypeDict | tuple[~typing.Any, ~typing.Any] = <class 'numpy.float32'>)[source]
Bases:
Filter
Traditional FBP filter.
- FILTERS = ('ramp', 'shepp-logan', 'cosine', 'hamming', 'hann')
- compute_filter(data_wu: ndarray[Any, dtype[_ScalarType_co]]) None [source]
Compute the traditional FBP filter for the given data.
- Parameters:
data_wu (NDArray) – The reference sinogram / projection data.
- filter_name: str
- class corrct.filters.FilterMR(projector: ~corrct.operators.BaseTransform, binning_type: str = 'exponential', binning_start: int | None = 2, lambda_smooth: float | None = None, pad_mode: str = 'constant', use_rfft: bool = True, dtype: ~numpy.dtype[~typing.Any] | None | type[~typing.Any] | ~numpy._typing._dtype_like._SupportsDType[~numpy.dtype[~typing.Any]] | str | tuple[~typing.Any, int] | tuple[~typing.Any, ~typing.SupportsIndex | ~collections.abc.Sequence[~typing.SupportsIndex]] | list[~typing.Any] | ~numpy._typing._dtype_like._DTypeDict | tuple[~typing.Any, ~typing.Any] = <class 'numpy.float32'>)[source]
Bases:
Filter
Data dependent FBP filter.
This is a simplified implementation from:
[1] Pelt, D. M., & Batenburg, K. J. (2014). Improving filtered backprojection reconstruction by data-dependent filtering. Image Processing, IEEE Transactions on, 23(11), 4750-4762.
Code inspired by: https://github.com/dmpelt/pymrfbp
- binning_start: int | None
- binning_type: str
- compute_filter(data_wu: ndarray[Any, dtype[_ScalarType_co]]) None [source]
Compute the filter.
- Parameters:
data_wu (NDArray) – The sinogram.
projector (ProjectorOperator) – The projector used in the FBP.
- initialize(data_wu_shape: Sequence[int]) None [source]
Initialize filter.
- Parameters:
data_wu_shape (Sequence[int]) – Shape of the data.
- is_initialized: bool
- lambda_smooth: float | None
- projector: BaseTransform
- corrct.filters.create_basis(num_pixels: int, binning_start: int | None = 2, binning_type: str = 'exponential', normalized: bool = False, order: int = 1, dtype: ~numpy.dtype[~typing.Any] | None | type[~typing.Any] | ~numpy._typing._dtype_like._SupportsDType[~numpy.dtype[~typing.Any]] | str | tuple[~typing.Any, int] | tuple[~typing.Any, ~typing.SupportsIndex | ~collections.abc.Sequence[~typing.SupportsIndex]] | list[~typing.Any] | ~numpy._typing._dtype_like._DTypeDict | tuple[~typing.Any, ~typing.Any] = <class 'numpy.float32'>) ndarray[Any, dtype[_ScalarType_co]] [source]
Compute filter basis matrix.
- Parameters:
num_pixels (int) – Number of filter pixels.
binning_start (Optional[int], optional) – Starting displacement of the binning, by default 2.
binning_type (str, optional) – Type of pixel binning, by default “exponential”.
normalized (bool, optional) – Whether to normalize the bins by the window size, by default False.
order (int, optional) – Order of the basis functions. Only 0 and 1 supported, by default 1.
dtype (DTypeLike, optional) – Data type, by default np.float32.
- Returns:
The filter basis.
- Return type:
NDArray
- corrct.filters.create_basis_wavelet(num_pixels: int, wavelet: str = 'bior2.2', level: int = 5, norm: float = 1.0, dtype: ~numpy.dtype[~typing.Any] | None | type[~typing.Any] | ~numpy._typing._dtype_like._SupportsDType[~numpy.dtype[~typing.Any]] | str | tuple[~typing.Any, int] | tuple[~typing.Any, ~typing.SupportsIndex | ~collections.abc.Sequence[~typing.SupportsIndex]] | list[~typing.Any] | ~numpy._typing._dtype_like._DTypeDict | tuple[~typing.Any, ~typing.Any] = <class 'numpy.float32'>) ndarray[Any, dtype[_ScalarType_co]] [source]
Compute filter basis matrix.
- Parameters:
num_pixels (int) – Number of wavelet filters.
wavelet (str, optional) – The wavelet to use, by default “bior2.2”.
level (int, optional) – The decomposition level to reach, by default 5.
norm (float, optional) – The norm to use, by default 1.0.
dtype (DTypeLike, optional) – Data type, by default np.float32.
- Returns:
The filter basis.
- Return type:
NDArray
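A minimal sketch of the filters above (the explicit compute_filter call before apply_filter is an assumption about the intended call order): build a Hann FBP filter, compute it for a synthetic sinogram of shape [angles, detector pixels], and apply it.
```python
import numpy as np
from corrct import filters

sino = np.random.default_rng(2).normal(size=(180, 128)).astype(np.float32)

flt = filters.FilterFBP(filter_name="hann")
flt.compute_filter(sino)            # compute the traditional FBP filter for this data
sino_filt = flt.apply_filter(sino)  # filtered sinogram, same shape as the input

print(sino_filt.shape, flt.get_padding_size(sino.shape))
```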
corrct.models module
Define all the models used throughout the code.
@author: Nicola VIGANÒ, Computational Imaging group, CWI, The Netherlands, and ESRF - The European Synchrotron, Grenoble, France
- class corrct.models.ProjectionGeometry(geom_type: str, src_pos_xyz: ndarray[Any, dtype[_ScalarType_co]], det_pos_xyz: ndarray[Any, dtype[_ScalarType_co]], det_u_xyz: ndarray[Any, dtype[_ScalarType_co]], det_v_xyz: ndarray[Any, dtype[_ScalarType_co]], rot_dir_xyz: ndarray[Any, dtype[_ScalarType_co]], pix2vox_ratio: float = 1, det_shape_vu: ndarray[Any, dtype[_ScalarType_co]] | None = None)[source]
Bases:
Geometry
Store the projection geometry.
- copy() ProjectionGeometry [source]
Deepcopy an existing geometry.
- Returns:
The new instance of ProjectionGeometry
- Return type:
ProjectionGeometry
- det_pos_xyz: ndarray[Any, dtype[_ScalarType_co]]
- det_shape_vu: ndarray[Any, dtype[_ScalarType_co]] | None = None
- det_u_xyz: ndarray[Any, dtype[_ScalarType_co]]
- det_v_xyz: ndarray[Any, dtype[_ScalarType_co]]
- geom_type: str
- get_3d() ProjectionGeometry [source]
Return the 3D version of the geometry.
- Returns:
The new geometry.
- Return type:
ProjectionGeometry
- static get_default_parallel(*, geom_type: str = '3d', rot_axis_shift_pix: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, rot_axis_dir: str | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | bytes | _NestedSequence[bool | int | float | complex | str | bytes] = 'clockwise') ProjectionGeometry [source]
Generate the default geometry for parallel beam.
- Parameters:
geom_type (str, optional) – The geometry type. The default is “3d”.
rot_axis_shift_pix (Optional[ArrayLike], optional) – Rotation axis shift in pixels. The default is None.
rot_axis_dir (Union[str, ArrayLike], optional) – Rotation axis direction. It can be either a string or a direction. The default is “clockwise”.
- Returns:
The default parallel-beam geometry.
- Return type:
ProjectionGeometry
- get_field_scaled(field_name: str) ndarray[Any, dtype[_ScalarType_co]] [source]
Return the content of a field, scaled by the pix2vox ratio.
- Parameters:
field_name (str) – Name of the field to access.
- Returns:
The scaled field.
- Return type:
NDArray
- get_pre_weights(det_shape_vu: Sequence[int] | ndarray[Any, dtype[_ScalarType_co]] | None = None) ndarray[Any, dtype[_ScalarType_co]] | None [source]
Compute the pre-weights of the projector geometry (notably for cone-beam geometries).
- Parameters:
det_shape_vu (Sequence[int] | NDArray | None, optional) – Shape of the detector in [V]U coordinates, by default None
- Returns:
The computed detector weights
- Return type:
NDArray | None
- property ndim: int
Return the number of dimensions of the geometry.
- Returns:
The number of dimensions.
- Return type:
int
- pix2vox_ratio: float = 1
- project_displacement_to_detector(disp_zyx: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes]) ndarray[Any, dtype[_ScalarType_co]] [source]
Project a given displacement vector, expressed in volume coordinates, onto the detector.
- Parameters:
disp_zyx (ArrayLike) – The displacement vector in volume coordinates.
- Returns:
The projection on u (and if applicable v) coordinates.
- Return type:
NDArray
- Raises:
ValueError – When the projection geometry and vector dimensions do not match.
- rot_dir_xyz: ndarray[Any, dtype[_ScalarType_co]]
- rotate(angles_w_rad: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes], patch_astra_2d: bool = False) ProjectionGeometry [source]
Rotate the geometry by the given angle(s).
- Parameters:
angles_w_rad (ArrayLike) – Rotation angle(s) in radians.
- Returns:
The rotated geometry.
- Return type:
ProjectionGeometry
- set_detector_shape_vu(vu: int | Sequence[int] | ndarray[Any, dtype[_ScalarType_co]]) None [source]
Set the detector VU shape.
- Parameters:
vu (int | Sequence[int] | NDArray) – The VU shape of the projection data.
- set_detector_shifts_vu(det_pos_vu: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | ndarray[Any, dtype[_ScalarType_co]] | None = None, cor_pos_u: float | None = None, det_dist_y: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] = 0.0) None [source]
Set the detector position in XZ, from VU (vertical, horizontal) coordinates.
- Parameters:
det_pos_vu (ArrayLike | NDArray | None) – Detector vertical and horizontal positions. Vertical is optional.
cor_pos_u (float | None) – Center of rotation position along U.
det_dist_y (ArrayLike, optional) – Detector distance from origin along Y. The default is 0.0.
- set_detector_tilt(angles_t_rad: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | ndarray[Any, dtype[_ScalarType_co]], tilt_axis: Sequence[float] | ndarray[Any, dtype[_ScalarType_co]] = (0, 1, 0), tilt_source: bool = False) None [source]
Rotate the detector by the given angle(s) and axis(axes).
- Parameters:
angles_t_rad (ArrayLike | NDArray) – Rotation angle(s) in radians.
tilt_axis (Sequence[float] | NDArray, optional) – The tilt axis or axes. The default is (0, 1, 0)
tilt_source (bool, optional) – Whether to also tilt the source. The default is False.
Notes
When multiple axes are given, the rotations are applied sequentially, in the given order. This means that they are not independent of each other.
- set_source_shifts_vu(src_pos_vu: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | ndarray[Any, dtype[_ScalarType_co]] | None = None) None [source]
Set the source position in XZ, from VU (vertical, horizontal) coordinates.
- Parameters:
src_pos_vu (ArrayLike | NDArray | None) – Source vertical and horizontal positions. Vertical is optional.
- src_pos_xyz: ndarray[Any, dtype[_ScalarType_co]]
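A minimal sketch: create the default parallel-beam geometry with a half-pixel rotation-axis shift, then rotate it to a set of scan angles (one geometry entry per angle). The shift and the angle range are arbitrary example values.
```python
import numpy as np
from corrct.models import ProjectionGeometry

prj_geom = ProjectionGeometry.get_default_parallel(rot_axis_shift_pix=0.5)
print(prj_geom.geom_type, prj_geom.ndim)

angles_rad = np.deg2rad(np.arange(0, 180.0))
prj_geom_rot = prj_geom.rotate(angles_rad)  # geometry rotated to every scan angle
```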
- class corrct.models.VolumeGeometry(_vol_shape_xyz: ndarray[Any, dtype[_ScalarType_co]], vox_size: float = 1.0)[source]
Bases:
Geometry
Store the volume geometry.
- property extent: Sequence[float]
Return extent of the volume.
- Returns:
The extent of the volume [-x, +x, -y, +y, [-z, +z]].
- Return type:
Sequence[float]
- get_3d() VolumeGeometry [source]
Return the 3D version of the geometry.
- Returns:
The new geometry.
- Return type:
VolumeGeometry
- static get_default_from_data(data: ndarray[Any, dtype[_ScalarType_co]], data_format: str = 'dvwu') VolumeGeometry [source]
Generate a default volume geometry from the data shape.
- Parameters:
data (NDArray) – The data.
data_format (str, optional) – The ordering and meaning of the dimensions in the data. The default is “dvwu”.
- Returns:
The default volume geometry.
- Return type:
VolumeGeometry
- static get_default_from_volume(volume: ndarray[Any, dtype[_ScalarType_co]]) VolumeGeometry [source]
Generate a default volume geometry from the given volume.
- Parameters:
volume (NDArray) – The volume.
- Returns:
The default volume geometry.
- Return type:
VolumeGeometry
- is_3D() bool [source]
Tell whether this is a 3D geometry.
- Returns:
Whether this is a 3D geometry or not.
- Return type:
bool
- is_square() bool [source]
Compute whether the volume is square in XY.
- Returns:
True if the volume is square in XY.
- Return type:
bool
- property mask_shape: ndarray[Any, dtype[_ScalarType_co]]
Return the XY volume shape for circular masks.
- Returns:
Shape of the XY volume.
- Return type:
NDArray
- property shape_xyz: ndarray[Any, dtype[_ScalarType_co]]
Return the volume shape (XYZ).
- Returns:
Shape of the volume (XYZ).
- Return type:
NDArray
- property shape_zxy: ndarray[Any, dtype[_ScalarType_co]]
Return the volume shape (ZXY).
The swap between X and Y is imposed by the astra-toolbox.
- Returns:
Shape of the volume (ZXY).
- Return type:
NDArray
- vox_size: float = 1.0
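A minimal sketch: derive the volume geometry from an existing (here, empty) reconstruction volume and query a few of its derived properties.
```python
import numpy as np
from corrct.models import VolumeGeometry

volume = np.zeros((64, 64), dtype=np.float32)
vol_geom = VolumeGeometry.get_default_from_volume(volume)

print(vol_geom.shape_xyz, vol_geom.is_3D(), vol_geom.is_square(), vol_geom.extent)
```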
- corrct.models.combine_shifts_vu(shifts_v: ndarray[Any, dtype[_ScalarType_co]], shifts_u: ndarray[Any, dtype[_ScalarType_co]]) ndarray[Any, dtype[_ScalarType_co]] [source]
Combine vertical and horizontal shifts.
- Parameters:
shifts_v (NDArray) – The vertical shifts
shifts_u (NDArray) – The horizontal shifts
- Returns:
The combined shifts
- Return type:
NDArray
- corrct.models.get_prj_geom_cone(*, src_to_sam_dist: float, rot_axis_shift_pix: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | ndarray[Any, dtype[_ScalarType_co]] | None = None, rot_axis_dir: str | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | ndarray[Any, dtype[_ScalarType_co]] = 'clockwise', data_shape: Sequence[int] | ndarray[Any, dtype[_ScalarType_co]] | None = None, data_format: str = 'dvwu') ProjectionGeometry [source]
Generate the default geometry for cone beam.
- Parameters:
src_to_sam_dist (float) – The source to sample distance.
rot_axis_shift_pix (ArrayLike | NDArray | None, optional) – Rotation axis shift in pixels. The default is None.
rot_axis_dir (str | ArrayLike | NDArray, optional) – Rotation axis direction. It can be either a string or a direction. The default is “clockwise”.
- Returns:
The default cone-beam geometry.
- Return type:
ProjectionGeometry
- corrct.models.get_prj_geom_parallel(*, geom_type: str = '3d', rot_axis_shift_pix: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | ndarray[Any, dtype[_ScalarType_co]] | None = None, rot_axis_dir: str | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | ndarray[Any, dtype[_ScalarType_co]] = 'clockwise', data_shape: Sequence[int] | ndarray[Any, dtype[_ScalarType_co]] | None = None, data_format: str = 'dvwu') ProjectionGeometry [source]
Generate the default geometry for parallel beam.
- Parameters:
geom_type (str, optional) – The geometry type. The default is “3d”.
rot_axis_shift_pix (ArrayLike | NDArray | None, optional) – Rotation axis shift in pixels. The default is None.
rot_axis_dir (str | ArrayLike | NDArray, optional) – Rotation axis direction. It can be either a string or a direction. The default is “clockwise”.
- Returns:
The default parallel-beam geometry.
- Return type:
ProjectionGeometry
- corrct.models.get_rot_axis_dir(rot_axis_dir: str | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | ndarray[Any, dtype[_ScalarType_co]] = 'clockwise') ndarray[Any, dtype[_ScalarType_co]] [source]
Process the requested rotation axis direction and return a meaningful value.
- Parameters:
rot_axis_dir (Union[str, ArrayLike, NDArray], optional) – The requested direction, by default “clockwise”
- Returns:
The vector corresponding to the rotation direction.
- Return type:
NDArray
- Raises:
ValueError – In case of malformed direction.
- corrct.models.get_vol_geom_from_data(data: ndarray[Any, dtype[_ScalarType_co]], padding_u: int | Sequence[int] | ndarray[Any, dtype[_ScalarType_co]] = 0, data_format: str = 'dvwu') VolumeGeometry [source]
Generate a default volume geometry from the data shape.
- Parameters:
data (NDArray) – The data.
padding_u (int | Sequence[int], optional) – Padding of the data along the U coordinate. The default is 0.
data_format (str, optional) – The ordering and meaning of the dimensions in the data. The default is “dvwu”.
- Returns:
The default volume geometry.
- Return type:
VolumeGeometry
- corrct.models.get_vol_geom_from_volume(volume: ndarray[Any, dtype[_ScalarType_co]]) VolumeGeometry [source]
Generate a default volume geometry from the given volume.
- Parameters:
volume (NDArray) – The volume.
- Returns:
The default volume geometry.
- Return type:
VolumeGeometry
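A minimal sketch of the module-level helpers, using a 2D sinogram of shape [angles, detector pixels] (the handling of 2D data by the default “dvwu” format is an assumption here):
```python
import numpy as np
from corrct import models

sino = np.zeros((180, 128), dtype=np.float32)

prj_geom = models.get_prj_geom_parallel(rot_axis_shift_pix=1.5)
vol_geom = models.get_vol_geom_from_data(sino, padding_u=8)

# Per-angle vertical and horizontal shifts, combined into the layout expected downstream.
shifts_vu = models.combine_shifts_vu(np.zeros(180), np.linspace(-1.0, 1.0, 180))

print(prj_geom.ndim, vol_geom.shape_xyz, shifts_vu.shape)
```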
corrct.operators module
Operators module.
@author: Nicola VIGANÒ, Computational Imaging group, CWI, The Netherlands, and ESRF - The European Synchrotron, Grenoble, France
- class corrct.operators.BaseTransform(*args, **kwargs)[source]
Bases:
LinearOperator, ABC
Base operator class.
It implements the linear operator behavior, so that it can be used with the solvers in the .solvers module and with the solvers in scipy.sparse.linalg.
- Parameters:
dir_shape (NDArrayInt) – Shape of the direct space.
adj_shape (NDArrayInt) – Shape of the adjoint space
- is_dir_operator
Flag indicating if the operator is a direct operator.
- Type:
bool
Notes
It assumes that the fields dir_shape and adj_shape have been set during the initialization of the derived classes.
- absolute() BaseTransform [source]
Return the absolute value of the operator.
- Returns:
The absolute value operator.
- Return type:
BaseTransform
- adj_shape: ndarray[Any, dtype[integer]]
- dir_shape: ndarray[Any, dtype[integer]]
- explicit() ndarray[Any, dtype[_ScalarType_co]] [source]
Return the explicit transformation matrix associated with the operator.
- Returns:
The explicit transformation matrix, as a NumPy array.
- Return type:
NDArray
- rmatvec(x: ndarray[Any, dtype[_ScalarType_co]]) ndarray[Any, dtype[_ScalarType_co]] [source]
Implement the direct operator for row vectors from the left.
- Parameters:
x (NDArray) – Either row from the left or column from the right on transpose.
- Returns:
Result of applying the direct operator for row vectors.
- Return type:
NDArray
- class corrct.operators.BaseWaveletTransform(*args, **kwargs)[source]
Bases:
BaseTransform, ABC
Base Wavelet transform.
- axes: ndarray[Any, dtype[integer]]
- labels: list[str]
- wavelet: str
- class corrct.operators.ProjectorOperator(*args, **kwargs)[source]
Bases:
BaseTransform
Base projector class that fixes the projection interface.
- abstract bp(x: ndarray[Any, dtype[_ScalarType_co]]) ndarray[Any, dtype[_ScalarType_co]] [source]
Define the interface for the back-projection.
- Parameters:
x (NDArray) – Input projection data.
- Returns:
The back-projected volume.
- Return type:
NDArray
- abstract fp(x: ndarray[Any, dtype[_ScalarType_co]]) ndarray[Any, dtype[_ScalarType_co]] [source]
Define the interface of the forward-projection.
- Parameters:
x (NDArray) – Input volume.
- Returns:
The projection data.
- Return type:
NDArray
- get_pre_weights() ndarray[Any, dtype[_ScalarType_co]] | None [source]
Compute the pre-weights of the projector geometry (notably for cone-beam geometries).
- Returns:
The computed detector weights
- Return type:
Union[NDArray, None]
- property prj_shape: ndarray[Any, dtype[integer]]
Expose the adjoint space shape as projection shape.
- Returns:
The projection shape.
- Return type:
NDArray
- property vol_shape: ndarray[Any, dtype[integer]]
Expose the direct space shape as volume shape.
- Returns:
The volume shape.
- Return type:
NDArray
- class corrct.operators.TransformConvolution(*args, **kwargs)[source]
Bases:
BaseTransform
Convolution operator.
- Parameters:
x_shape (ArrayLike) – Shape of the direct space.
kernel (ArrayLike) – The convolution kernel.
pad_mode (str, optional) – The padding mode to use for the linear convolution. The default is “edge”.
is_symm (bool, optional) – Whether the operator is symmetric or not. The default is True.
flip_adjoint (bool, optional) – Whether the adjoint kernel should be flipped. The default is False. This is useful when the kernel is not symmetric.
- absolute() TransformConvolution [source]
Return the convolution operator using the absolute value of the kernel coefficients.
- Returns:
The absolute value of the convolution operator.
- Return type:
TransformConvolution
- flip_adjoint: bool
- is_symm: bool
- kernel: ndarray[Any, dtype[_ScalarType_co]]
- pad_mode: str
- class corrct.operators.TransformDecimatedWavelet(*args, **kwargs)[source]
Bases:
BaseWaveletTransform
Decimated wavelet Transform operator.
- class corrct.operators.TransformDiagonalScaling(*args, **kwargs)[source]
Bases:
BaseTransform
Diagonal scaling operator.
- absolute() TransformDiagonalScaling [source]
Return the scaling operator using the absolute value of the scaling coefficients.
- Returns:
The absolute value operator
- Return type:
TransformDiagonalScaling
- scale: ndarray[Any, dtype[_ScalarType_co]]
- class corrct.operators.TransformFourier(*args, **kwargs)[source]
Bases:
BaseTransform
Fourier transform operator.
- class corrct.operators.TransformFunctions(*args, **kwargs)[source]
Bases:
BaseTransform
Transform class that uses callables.
- absolute() TransformFunctions [source]
Compute the absolute value of the operator. Raises an error, because this operation is not supported.
- Raises:
AttributeError – Not supported operation.
- class corrct.operators.TransformGradient(*args, **kwargs)[source]
Bases:
BaseTransform
Gradient operator.
- Parameters:
x_shape (ArrayLike) – Shape of the data to be transformed.
axes (Optional[ArrayLike], optional) – Axes along which to do the gradient. The default is None.
pad_mode (str, optional) – Padding mode of the gradient. The default is “edge”.
- class corrct.operators.TransformIdentity(*args, **kwargs)[source]
Bases:
BaseTransform
Identity operator.
- class corrct.operators.TransformLaplacian(*args, **kwargs)[source]
Bases:
BaseTransform
Laplacian transform operator.
- Parameters:
x_shape (ArrayLike) – Shape of the data to be transformed.
axes (ArrayLike, optional) – Axes along which to do the Laplacian. The default is None.
pad_mode (str, optional) – Padding mode of the Laplacian. The default is “edge”.
- class corrct.operators.TransformSVD(*args, **kwargs)[source]
Bases:
BaseTransform
Singular value decomposition based decomposition operator.
- U: ndarray[Any, dtype[_ScalarType_co]] | None
- Vt: ndarray[Any, dtype[_ScalarType_co]] | None
- class corrct.operators.TransformStationaryWavelet(*args, **kwargs)[source]
Bases:
BaseWaveletTransform
Stationary wavelet Transform operator.
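A minimal sketch of the operators above (the constructor arguments are assumed to be the data shape, as documented for TransformGradient): instantiate two transforms on a small 2D shape and inspect their explicit matrix representations.
```python
import numpy as np
from corrct import operators

x_shape = (8, 8)

op_id = operators.TransformIdentity(x_shape)
op_grad = operators.TransformGradient(x_shape)

M_id = op_id.explicit()      # dense matrix of the identity operator
M_grad = op_grad.explicit()  # dense matrix of the 2D gradient (both axes stacked)

print(M_id.shape, M_grad.shape)
```
Because these operators are scipy LinearOperator instances, they can also be handed directly to the solvers in scipy.sparse.linalg.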
corrct.param_tuning module
Aided regularization parameter estimation.
@author: Nicola VIGANÒ, Computational Imaging group, CWI, The Netherlands, and ESRF - The European Synchrotron, Grenoble, France
- class corrct.param_tuning.BaseRegularizationEstimation(dtype: ~numpy.dtype[~typing.Any] | None | type[~typing.Any] | ~numpy._typing._dtype_like._SupportsDType[~numpy.dtype[~typing.Any]] | str | tuple[~typing.Any, int] | tuple[~typing.Any, ~typing.SupportsIndex | ~collections.abc.Sequence[~typing.SupportsIndex]] | list[~typing.Any] | ~numpy._typing._dtype_like._DTypeDict | tuple[~typing.Any, ~typing.Any] = <class 'numpy.float32'>, parallel_eval: bool = True, verbose: bool = False, plot_result: bool = False)[source]
Bases:
ABC
Base class for regularization parameter estimation.
- abstract compute_loss_values(hp_vals: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | ndarray[Any, dtype[floating]]) ndarray[Any, dtype[floating]] [source]
Compute the objective function costs for a list of hyper-parameter values.
- Parameters:
hp_vals (Union[ArrayLike, NDArrayFloat]) – List of hyper-parameter values.
- Returns:
Objective function cost for each hyper-parameter value.
- Return type:
NDArrayFloat
- compute_reconstruction_and_loss(hp_val: float, *args: Any, **kwds: Any) tuple[float, ndarray[Any, dtype[floating]]] [source]
Compute objective function cost for the given hyper-parameter value.
- Parameters:
hp_val (float) – hyper-parameter value.
*args (Any) – Optional positional arguments for the reconstruction.
**kwds (Any) – Optional keyword arguments for the reconstruction.
- Returns:
cost (float) – Cost of the given regularization weight.
rec (NDArray) – Reconstruction at the given weight.
- compute_reconstruction_error(hp_vals: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | ndarray[Any, dtype[floating]], gnd_truth: ndarray[Any, dtype[floating]]) tuple[ndarray[Any, dtype[floating]], ndarray[Any, dtype[floating]]] [source]
Compute the reconstruction errors for each hyper-parameter values against the ground truth.
- Parameters:
hp_vals (Union[ArrayLike, NDArrayFloat]) – List of hyper-parameter values.
gnd_truth (NDArrayFloat) – Expected reconstruction.
- Returns:
err_l1 (NDArrayFloat) – l1-norm errors for each reconstruction.
err_l2 (NDArrayFloat) – l2-norm errors for each reconstruction.
- static get_lambda_range(start: float, end: float, num_per_order: int = 4) ndarray[Any, dtype[floating]] [source]
Compute regularization weights within an interval.
- Parameters:
start (float) – First regularization weight.
end (float) – Last regularization weight.
num_per_order (int, optional) – Number of steps per order of magnitude. The default is 4.
- Returns:
List of regularization weights.
- Return type:
NDArrayFloat
- property solver_calling_function: Callable[[Any], tuple[ndarray[Any, dtype[floating]], SolutionInfo]]
Return the locally stored solver calling function.
- property solver_spawning_function: Callable
Return the locally stored solver spawning function.
- class corrct.param_tuning.CrossValidation(data_shape: ~typing.Sequence[int], dtype: ~numpy.dtype[~typing.Any] | None | type[~typing.Any] | ~numpy._typing._dtype_like._SupportsDType[~numpy.dtype[~typing.Any]] | str | tuple[~typing.Any, int] | tuple[~typing.Any, ~typing.SupportsIndex | ~collections.abc.Sequence[~typing.SupportsIndex]] | list[~typing.Any] | ~numpy._typing._dtype_like._DTypeDict | tuple[~typing.Any, ~typing.Any] = <class 'numpy.float32'>, cv_fraction: float = 0.1, num_averages: int = 7, parallel_eval: bool = True, verbose: bool = False, plot_result: bool = False)[source]
Bases:
BaseRegularizationEstimation
Cross-validation hyper-parameter estimation class.
- compute_loss_values(hp_vals: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | ndarray[Any, dtype[floating]]) tuple[ndarray[Any, dtype[floating]], ndarray[Any, dtype[floating]], ndarray[Any, dtype[floating]]] [source]
Compute objective function values for all requested hyper-parameter values.
- Parameters:
hp_vals (Union[ArrayLike, NDArrayFloat]) – Hyper-parameter values (e.g. regularization weight) to evaluate.
- Returns:
f_avgs (NDArrayFloat) – Average objective function costs for each regularization weight.
f_stds (NDArrayFloat) – Standard deviation of objective function costs for each regularization weight.
f_vals (NDArrayFloat) – Objective function costs for each regularization weight.
- fit_loss_min(hp_vals: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | ndarray[Any, dtype[floating]], f_vals: ndarray[Any, dtype[floating]], f_stds: ndarray[Any, dtype[floating]] | None = None, scale: Literal['linear', 'log'] = 'log') tuple[float, float] [source]
Parabolic fit of objective function costs for the different hyper-parameter values.
- Parameters:
hp_vals (Union[ArrayLike, NDArrayFloat]) – Hyper-parameter values.
f_vals (NDArrayFloat) – Objective function costs of each hyper-parameter value.
f_stds (NDArrayFloat, optional) – Objective function cost standard deviations of each hyper-parameter value. It is only used for plotting purposes. The default is None.
scale (str, optional) – Scale of the fit. Options are: “log” | “linear”. The default is “log”.
- Returns:
min_hp_val (float) – Expected hyper-parameter value of the fitted minimum.
min_f_val (float) – Expected objective function cost of the fitted minimum.
- class corrct.param_tuning.LCurve(loss_function: ~typing.Callable, data_dtype: ~numpy.dtype[~typing.Any] | None | type[~typing.Any] | ~numpy._typing._dtype_like._SupportsDType[~numpy.dtype[~typing.Any]] | str | tuple[~typing.Any, int] | tuple[~typing.Any, ~typing.SupportsIndex | ~collections.abc.Sequence[~typing.SupportsIndex]] | list[~typing.Any] | ~numpy._typing._dtype_like._DTypeDict | tuple[~typing.Any, ~typing.Any] = <class 'numpy.float32'>, parallel_eval: bool = True, verbose: bool = False, plot_result: bool = False)[source]
Bases:
BaseRegularizationEstimation
L-curve regularization parameter estimation helper.
- compute_loss_values(hp_vals: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | ndarray[Any, dtype[floating]]) ndarray[Any, dtype[floating]] [source]
Compute objective function values for all hyper-parameter values.
- Parameters:
hp_vals (Union[ArrayLike, NDArrayFloat]) – Hyper-parameter values to use for computing the different objective function values.
- Returns:
f_vals – Objective function cost for each hyper-parameter value.
- Return type:
NDArrayFloat
- corrct.param_tuning.create_random_test_mask(data_shape: ~typing.Sequence[int], test_fraction: float = 0.05, dtype: ~numpy.dtype[~typing.Any] | None | type[~typing.Any] | ~numpy._typing._dtype_like._SupportsDType[~numpy.dtype[~typing.Any]] | str | tuple[~typing.Any, int] | tuple[~typing.Any, ~typing.SupportsIndex | ~collections.abc.Sequence[~typing.SupportsIndex]] | list[~typing.Any] | ~numpy._typing._dtype_like._DTypeDict | tuple[~typing.Any, ~typing.Any] = <class 'numpy.float32'>) ndarray[Any, dtype[floating]] [source]
Create a random mask for cross-validation.
- Parameters:
data_shape (Sequence[int]) – The shape of the data.
test_fraction (float, optional) – The fraction of pixels to mask. The default is 0.05.
dtype (DTypeLike, optional) – The data type of the mask. The default is np.float32.
- Returns:
data_test_mask – The pixel mask.
- Return type:
NDArrayFloat
- corrct.param_tuning.fit_func_min(hp_vals: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | ndarray[Any, dtype[floating]], f_vals: ndarray[Any, dtype[floating]], f_stds: ndarray[Any, dtype[floating]] | None = None, scale: Literal['linear', 'log'] = 'log', verbose: bool = False, plot_result: bool = False) tuple[float, float] [source]
Parabolic fit of objective function costs for the different hyper-parameter values.
- Parameters:
hp_vals (Union[ArrayLike, NDArrayFloat]) – Hyper-parameter values.
f_vals (NDArrayFloat) – Objective function costs of each hyper-parameter value.
f_stds (NDArrayFloat, optional) – Objective function cost standard deviations of each hyper-parameter value. It is only used for plotting purposes. The default is None.
scale (str, optional) – Scale of the fit. Options are: “log” | “linear”. The default is “log”.
verbose (bool, optional) – Whether to produce verbose output, by default False
plot_result (bool, optional) – Whether to plot the result, by default False
- Returns:
min_hp_val (float) – Expected hyper-parameter value of the fitted minimum.
min_f_val (float) – Expected objective function cost of the fitted minimum.
- corrct.param_tuning.get_lambda_range(start: float, end: float, num_per_order: int = 4, aligned_order: bool = True) ndarray[Any, dtype[floating]] [source]
Compute hyper-parameter values within an interval.
- Parameters:
start (float) – First hyper-parameter value.
end (float) – Last hyper-parameter value.
num_per_order (int, optional) – Number of steps per order of magnitude. The default is 4.
aligned_order (bool, optional) – Whether to align the values to the orders of magnitude (powers of 10), rather than to the given start value. The default is True.
- Returns:
List of hyper-parameter values.
- Return type:
NDArrayFloat
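A minimal sketch of the standalone helpers: a log-spaced grid of hyper-parameter values, a parabolic fit of synthetic loss values (chosen quadratic in log10 of the weight, only to exercise the fitting routine), and a random cross-validation mask.
```python
import numpy as np
from corrct import param_tuning

hp_vals = param_tuning.get_lambda_range(1e-4, 1e-1, num_per_order=4)

# Synthetic losses with a minimum inside the range, only to exercise the fit.
f_vals = (np.log10(hp_vals) + 2.5) ** 2 + 0.1
min_hp_val, min_f_val = param_tuning.fit_func_min(hp_vals, f_vals, scale="log")

mask = param_tuning.create_random_test_mask((180, 128), test_fraction=0.05)
print(hp_vals.size, min_hp_val, float(mask.mean()))
```
In a real tuning run, the loss values would come from CrossValidation.compute_loss_values or LCurve.compute_loss_values rather than from a synthetic parabola.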
corrct.projectors module
Tomographic projectors.
@author: Nicola VIGANÒ, Computational Imaging group, CWI, The Netherlands, and ESRF - The European Synchrotron, Grenoble, France
- class corrct.projectors.ProjectorAttenuationXRF(*args, **kwargs)[source]
Bases:
ProjectorUncorrected
Attenuation corrected projection class for XRF, with multi-detector support.
It includes the computation of the attenuation volumes.
- att_vol_angles: ndarray[Any, dtype[floating]]
- bp(sino: ndarray[Any, dtype[_ScalarType_co]]) ndarray[Any, dtype[_ScalarType_co]] [source]
Back-projection of the sinogram to the volume.
- Parameters:
sino (NDArray) – The sinogram to back-project.
- Returns:
The back-projected volume.
- Return type:
NDArray
- bp_angle(sino: ndarray[Any, dtype[_ScalarType_co]], angle_ind: int, single_line: bool = False) ndarray[Any, dtype[_ScalarType_co]] [source]
Back-project a single sinogram line to the volume.
It only applies the attenuation corrections if the projector is symmetric.
- Parameters:
sino (NDArray) – The sinogram to back-project or a single line.
angle_ind (int) – The angle index to forward project.
single_line (bool, optional) – Whether the input is a single sinogram line. The default is False.
- Returns:
The back-projected volume.
- Return type:
NDArray
- executor: ThreadPoolExecutor | None
- fp(vol: ndarray[Any, dtype[_ScalarType_co]]) ndarray[Any, dtype[_ScalarType_co]] [source]
Forward-project the volume to the sinogram.
It applies the attenuation corrections.
- Parameters:
vol (NDArray) – The volume to forward-project.
- Returns:
The forward-projected sinogram.
- Return type:
NDArray
- fp_angle(vol: ndarray[Any, dtype[_ScalarType_co]], angle_ind: int) ndarray[Any, dtype[_ScalarType_co]] [source]
Forward-project the volume to a single sinogram line.
It applies the attenuation corrections.
- Parameters:
vol (NDArray) – The volume to forward-project.
angle_ind (int) – The angle index to forward project.
- Returns:
sino_line – The forward-projected sinogram line.
- Return type:
NDArray
- class corrct.projectors.ProjectorMatrix(*args, **kwargs)[source]
Bases:
ProjectorOperator
Projector that uses an explicit projection matrix.
- A: ndarray[Any, dtype[_ScalarType_co]] | spmatrix
- absolute() ProjectorOperator [source]
Return the projection operator using the absolute value of the projection coefficients.
- Returns:
The absolute value operator.
- Return type:
ProjectorOperator
- class corrct.projectors.ProjectorUncorrected(*args, **kwargs)[source]
Bases:
ProjectorOperator
Base projection class.
- property angles_rot_rad: ndarray[Any, dtype[_ScalarType_co]]
Simplify access to the rotation angles (in radians).
- Returns:
The rotation angles (in radians).
- Return type:
NDArray
- bp(prj_vwu: ndarray[Any, dtype[_ScalarType_co]]) ndarray[Any, dtype[_ScalarType_co]] [source]
Back-projection of the projection data to the volume.
- Parameters:
prj_vwu (NDArray) – The projection data to back-project.
- Returns:
The back-projected volume.
- Return type:
NDArray
- bp_angle(prj_vu: ndarray[Any, dtype[_ScalarType_co]], angle_ind: int) ndarray[Any, dtype[_ScalarType_co]] [source]
Back-project a single sinogram line to the volume.
- Parameters:
prj_vu (NDArray) – The sinogram to back-project or a single line.
angle_ind (int) – The angle index to back-project.
- Returns:
The back-projected volume.
- Return type:
NDArray
- fp(vol: ndarray[Any, dtype[_ScalarType_co]]) ndarray[Any, dtype[_ScalarType_co]] [source]
Forward-projection of the volume to the projection data.
- Parameters:
vol (NDArray) – The volume to forward-project.
- Returns:
The forward-projected projection data.
- Return type:
NDArray
- fp_angle(vol: ndarray[Any, dtype[_ScalarType_co]], angle_ind: int) ndarray[Any, dtype[_ScalarType_co]] [source]
Forward-project a volume to a single sinogram line.
- Parameters:
vol (NDArray) – The volume to forward-project.
angle_ind (int) – The angle index to forward project.
- Returns:
x – The forward-projected sinogram line.
- Return type:
NDArray
- get_pre_weights() ndarray[Any, dtype[_ScalarType_co]] | None [source]
Compute the pre-weights of the projector geometry (notably for cone-beam geometries).
- Returns:
The computed detector weights
- Return type:
Union[NDArray, None]
- prj_intensities: ndarray[Any, dtype[floating]] | None
- projector_backend: ProjectorBackend
- psf: ndarray[Any, dtype[floating]] | float | None
- vol_geom: VolumeGeometry
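A minimal sketch of the fp/bp round trip documented above. The constructor arguments used here (a volume shape and the rotation angles in radians) are an assumption, since the signature on this page only shows *args and **kwargs:

```python
import numpy as np
import corrct as cct

# Toy 2D volume and a half-turn scan.
vol = np.zeros((128, 128), dtype=np.float32)
vol[32:96, 32:96] = 1.0
angles_rot_rad = np.deg2rad(np.arange(0.0, 180.0, 1.0))

# Assumed constructor: volume shape (or VolumeGeometry) and rotation angles.
with cct.projectors.ProjectorUncorrected(vol.shape, angles_rot_rad) as prj:
    sino = prj.fp(vol)      # forward-projection: volume -> projection data
    vol_bp = prj.bp(sino)   # back-projection: projection data -> volume
```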
corrct.regularizers module
Regularizers module.
@author: Nicola VIGANÒ, Computational Imaging group, CWI, The Netherlands, and ESRF - The European Synchrotron, Grenoble, France
- class corrct.regularizers.BaseRegularizer(weight: float | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]], norm: ~corrct.data_terms.DataFidelityBase, upd_mask: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, dtype: ~numpy.dtype[~typing.Any] | None | type[~typing.Any] | ~numpy._typing._dtype_like._SupportsDType[~numpy.dtype[~typing.Any]] | str | tuple[~typing.Any, int] | tuple[~typing.Any, ~typing.SupportsIndex | ~collections.abc.Sequence[~typing.SupportsIndex]] | list[~typing.Any] | ~numpy._typing._dtype_like._DTypeDict | tuple[~typing.Any, ~typing.Any] = <class 'numpy.float32'>)[source]
Bases:
ABC
Initialize a base regularizer class that defines the Regularizer object interface.
- Parameters:
weight (Union[float, NDArray]) – The weight of the regularizer.
norm (DataFidelityBase) – The norm of the regularizer minimization.
- apply_proximal(dual: ndarray[Any, dtype[_ScalarType_co]]) None [source]
Apply the proximal operator to the dual in-place.
- Parameters:
dual (NDArray) – The dual onto which the proximal operator is applied.
- compute_update_primal(dual: ndarray[Any, dtype[_ScalarType_co]]) ndarray[Any, dtype[_ScalarType_co]] [source]
Compute the partial update of a primal term, from this regularizer.
- Parameters:
dual (NDArray) – The dual associated to this regularizer.
- Returns:
upd – The update to the primal.
- Return type:
NDArray
- dtype: dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any]
- info() str [source]
Return the regularizer info.
- Returns:
Regularizer info string.
- Return type:
str
- initialize_dual() ndarray[Any, dtype[_ScalarType_co]] [source]
Return the initialized dual.
- Returns:
Initialized (zero) dual.
- Return type:
NDArray
- abstract initialize_sigma_tau(primal: ndarray[Any, dtype[_ScalarType_co]]) float | ndarray[Any, dtype[_ScalarType_co]] [source]
Initialize the internal state, operator, and sigma. It then returns the tau.
- Parameters:
primal (NDArray) – The primal vector.
- Returns:
The tau to be used in the SIRT or PDHG algorithm.
- Return type:
Union[float, NDArray]
- lower() str [source]
Return the lower case name of the regularizer.
- Returns:
Lower case string name of the regularizer.
- Return type:
str
- op: BaseTransform | None
- sigma: float | ndarray[Any, dtype[_ScalarType_co]]
- upd_mask: ndarray[Any, dtype[_ScalarType_co]] | None
- update_dual(dual: ndarray[Any, dtype[_ScalarType_co]], primal: ndarray[Any, dtype[_ScalarType_co]]) None [source]
Update the dual in-place.
- Parameters:
dual (NDArray) – Current state of the dual.
primal (NDArray) – Primal or over-relaxation of the primal.
- upper() str [source]
Return the upper case name of the regularizer.
- Returns:
Upper case string name of the regularizer.
- Return type:
str
- weight: ndarray[Any, dtype[_ScalarType_co]]
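A minimal sketch of how a solver is expected to drive this interface, using the Regularizer_l1 class documented further down on this page; the array values are illustrative, and the standalone usage (outside a solver) is shown only to make the call order explicit:

```python
import numpy as np
from corrct import regularizers

primal = np.random.rand(64, 64).astype(np.float32)

reg = regularizers.Regularizer_l1(weight=1e-2)
tau = reg.initialize_sigma_tau(primal)   # sets up the internal operator and sigma, returns tau
dual = reg.initialize_dual()             # zero-initialized dual variable

# One regularizer step, as it would appear inside a PDHG/SIRT iteration:
reg.update_dual(dual, primal)            # in-place dual update
reg.apply_proximal(dual)                 # in-place proximal mapping on the dual
upd_primal = reg.compute_update_primal(dual)  # contribution to the primal update
```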
- class corrct.regularizers.BaseRegularizer_med(weight: float | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]], filt_size: int = 3, upd_mask: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, norm: ~corrct.data_terms.DataFidelityBase = <corrct.data_terms.DataFidelity_l1 object>)[source]
Bases:
BaseRegularizer
Median filter regularizer base class. It can be used to promote filtered reconstructions.
- info() str [source]
Return the regularizer info.
- Returns:
Regularizer info string.
- Return type:
str
- initialize_sigma_tau(primal: ndarray[Any, dtype[_ScalarType_co]]) float | ndarray[Any, dtype[_ScalarType_co]] [source]
Initialize the internal state, operator, and sigma. It then returns the tau.
- Parameters:
primal (NDArray) – The primal vector.
- Returns:
The tau to be used in the SIRT or PDHG algorithm.
- Return type:
Union[float, NDArray]
- class corrct.regularizers.Constraint_LowerLimit(limit: float | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]], upd_mask: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, norm: ~corrct.data_terms.DataFidelityBase = <corrct.data_terms.DataFidelity_l2 object>)[source]
Bases:
BaseRegularizer
Lower limit constraint. It can be used to promote reconstructions in certain regions of solution space.
- apply_proximal(dual: ndarray[Any, dtype[_ScalarType_co]]) None [source]
Apply the proximal operator to the dual in-place.
- Parameters:
dual (NDArray) – The dual onto which the proximal operator is applied.
- info() str [source]
Return the regularizer info.
- Returns:
Regularizer info string.
- Return type:
str
- initialize_sigma_tau(primal: ndarray[Any, dtype[_ScalarType_co]]) float | ndarray[Any, dtype[_ScalarType_co]] [source]
Initialize the internal state, operator, and sigma. It then returns the tau.
- Parameters:
primal (NDArray) – The primal vector.
- Returns:
The tau to be used in the SIRT or PDHG algorithm.
- Return type:
Union[float, NDArray]
- class corrct.regularizers.Constraint_UpperLimit(limit: float | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]], upd_mask: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, norm: ~corrct.data_terms.DataFidelityBase = <corrct.data_terms.DataFidelity_l2 object>)[source]
Bases:
BaseRegularizer
Upper limit constraint. It can be used to promote reconstructions in certain regions of solution space.
- apply_proximal(dual: ndarray[Any, dtype[_ScalarType_co]]) None [source]
Apply the proximal operator to the dual in-place.
- Parameters:
dual (NDArray) – The dual onto which the proximal operator is applied.
- info() str [source]
Return the regularizer info.
- Returns:
Regularizer info string.
- Return type:
str
- initialize_sigma_tau(primal: ndarray[Any, dtype[_ScalarType_co]]) float | ndarray[Any, dtype[_ScalarType_co]] [source]
Initialize the internal state, operator, and sigma. It then returns the tau.
- Parameters:
primal (NDArray) – The primal vector.
- Returns:
The tau to be used in the SIRT or PDHG algorithm.
- Return type:
Union[float, NDArray]
- class corrct.regularizers.Regularizer_Grad(weight: float | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]], ndims: int = 2, axes: ~collections.abc.Sequence[int] | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, pad_mode: str = 'edge', upd_mask: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, norm: ~corrct.data_terms.DataFidelityBase = <corrct.data_terms.DataFidelity_l12 object>)[source]
Bases:
BaseRegularizer
Gradient regularizer.
When used with l1-norms, it promotes piece-wise constant reconstructions. When used with l2-norm, it promotes smooth reconstructions.
- Parameters:
weight (Union[float, NDArray]) – The weight of the regularizer.
ndims (int, optional) – The number of dimensions. The default is 2.
axes (Sequence, optional) – The axes over which it computes the gradient. If None, it uses the last 2. The default is None.
pad_mode (str, optional) – The padding mode to use for the linear convolution. The default is “edge”.
norm (DataFidelityBase, optional) – The norm of the regularizer minimization. The default is DataFidelity_l12().
- initialize_sigma_tau(primal: ndarray[Any, dtype[_ScalarType_co]]) float | ndarray[Any, dtype[_ScalarType_co]] [source]
Initialize the internal state, operator, and sigma. It then returns the tau.
- Parameters:
primal (NDArray) – The primal vector.
- Returns:
The tau to be used in the SIRT or PDHG algorithm.
- Return type:
Union[float, NDArray]
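A minimal sketch of the two behaviours described above (piece-wise constant with the default l12 norm, smooth with an l2 norm); Regularizer_TV2D and Regularizer_smooth2D listed below are the pre-configured equivalents of these two choices:

```python
from corrct import regularizers, data_terms

# TV-like behaviour: l12 norm on the gradient (the default), promoting
# piece-wise constant reconstructions.
reg_tv = regularizers.Regularizer_Grad(weight=1e-2, ndims=2)

# Smoothness: l2 norm on the gradient.
reg_smooth = regularizers.Regularizer_Grad(
    weight=1e-2, ndims=2, norm=data_terms.DataFidelity_l2()
)
```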
- class corrct.regularizers.Regularizer_HubTV2D(weight: float | ndarray[Any, dtype[_ScalarType_co]], huber_size: float, axes: Sequence[int] | ndarray[Any, dtype[_ScalarType_co]] | None = None, pad_mode: str = 'edge', upd_mask: ndarray[Any, dtype[_ScalarType_co]] | None = None)[source]
Bases:
Regularizer_Grad
Total Variation (TV) regularizer in 2D. It can be used to promote piece-wise constant reconstructions.
- class corrct.regularizers.Regularizer_HubTV3D(weight: float | ndarray[Any, dtype[_ScalarType_co]], huber_size: float, axes: Sequence[int] | ndarray[Any, dtype[_ScalarType_co]] | None = None, pad_mode: str = 'edge', upd_mask: ndarray[Any, dtype[_ScalarType_co]] | None = None)[source]
Bases:
Regularizer_Grad
Total Variation (TV) regularizer in 3D. It can be used to promote piece-wise constant reconstructions.
- class corrct.regularizers.Regularizer_Hub_dwl(weight: float | ndarray[Any, dtype[_ScalarType_co]], wavelet: str, level: int, ndims: int = 2, axes: Sequence[int] | ndarray[Any, dtype[_ScalarType_co]] | None = None, pad_on_demand: str = 'constant', upd_mask: ndarray[Any, dtype[_ScalarType_co]] | None = None, huber_size: int | None = None)[source]
Bases:
Regularizer_dwl
l1-norm decimated wavelet regularizer. It can be used to promote sparse reconstructions.
- class corrct.regularizers.Regularizer_Hub_swl(weight: float | ndarray[Any, dtype[_ScalarType_co]], wavelet: str, level: int, ndims: int = 2, axes: Sequence[int] | ndarray[Any, dtype[_ScalarType_co]] | None = None, pad_on_demand: str = 'constant', upd_mask: ndarray[Any, dtype[_ScalarType_co]] | None = None, normalized: bool = False, min_approx: bool = True, huber_size: int | None = None)[source]
Bases:
Regularizer_swl
l1-norm Wavelet regularizer. It can be used to promote sparse reconstructions in the wavelet domain.
- class corrct.regularizers.Regularizer_TNV(weight: float | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]], ndims: int = 2, axes: ~collections.abc.Sequence[int] | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, pad_mode: str = 'edge', upd_mask: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, spectral_norm: ~corrct.data_terms.DataFidelityBase = <corrct.data_terms.DataFidelity_l1 object>, x_ref: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None)[source]
Bases:
Regularizer_Grad
Total Nuclear Variation (TNV) regularizer.
It can be used to promote piece-wise constant reconstructions, for multi-channel volumes.
- initialize_sigma_tau(primal: ndarray[Any, dtype[_ScalarType_co]]) float | ndarray[Any, dtype[_ScalarType_co]] [source]
Initialize the internal state, operator, and sigma. It then returns the tau.
- Parameters:
primal (NDArray) – The primal vector.
- Returns:
The tau to be used in the SIRT or PDHG algorithm.
- Return type:
Union[float, NDArray]
- class corrct.regularizers.Regularizer_TV1D(weight: float | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]], axes: ~collections.abc.Sequence[int] | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, pad_mode: str = 'edge', upd_mask: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, norm: ~corrct.data_terms.DataFidelityBase = <corrct.data_terms.DataFidelity_l12 object>)[source]
Bases:
Regularizer_Grad
Total Variation (TV) regularizer in 1D. It can be used to promote piece-wise constant reconstructions.
- class corrct.regularizers.Regularizer_TV2D(weight: float | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]], axes: ~collections.abc.Sequence[int] | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, pad_mode: str = 'edge', upd_mask: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, norm: ~corrct.data_terms.DataFidelityBase = <corrct.data_terms.DataFidelity_l12 object>)[source]
Bases:
Regularizer_Grad
Total Variation (TV) regularizer in 2D. It can be used to promote piece-wise constant reconstructions.
- class corrct.regularizers.Regularizer_TV3D(weight: float | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]], axes: ~collections.abc.Sequence[int] | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, pad_mode: str = 'edge', upd_mask: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, norm: ~corrct.data_terms.DataFidelityBase = <corrct.data_terms.DataFidelity_l12 object>)[source]
Bases:
Regularizer_Grad
Total Variation (TV) regularizer in 3D. It can be used to promote piece-wise constant reconstructions.
- class corrct.regularizers.Regularizer_VTV(weight: float | ndarray[Any, dtype[_ScalarType_co]], ndims: int = 2, pwise_der_norm: int | float = 2, pwise_chan_norm: int | float = inf, x_ref: ndarray[Any, dtype[_ScalarType_co]] | None = None, upd_mask: ndarray[Any, dtype[_ScalarType_co]] | None = None)[source]
Bases:
Regularizer_Grad
Vectorial Total Variation (VTV) regularizer.
It can be used to promote piece-wise constant reconstructions, for multi-channel volumes.
- class corrct.regularizers.Regularizer_dwl(weight: float | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]], wavelet: str, level: int, ndims: int = 2, axes: ~collections.abc.Sequence[int] | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, pad_on_demand: str = 'constant', upd_mask: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, min_approx: bool = True, norm: ~corrct.data_terms.DataFidelityBase = <corrct.data_terms.DataFidelity_l1 object>)[source]
Bases:
BaseRegularizer
Base decimated wavelet regularizer. It can be used to promote sparse reconstructions in the wavelet domain.
- apply_proximal(dual: ndarray[Any, dtype[_ScalarType_co]]) None [source]
Apply the proximal operator to the dual in-place.
- Parameters:
dual (NDArray) – The dual onto which the proximal operator is applied.
- info() str [source]
Return the regularizer info.
- Returns:
Regularizer info string.
- Return type:
str
- initialize_sigma_tau(primal: ndarray[Any, dtype[_ScalarType_co]]) float | ndarray[Any, dtype[_ScalarType_co]] [source]
Initialize the internal state, operator, and sigma. It then returns the tau.
- Parameters:
primal (NDArray) – The primal vector.
- Returns:
The tau to be used in the SIRT or PDHG algorithm.
- Return type:
Union[float, NDArray]
- class corrct.regularizers.Regularizer_fft(weight: float | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]], ndims: int = 2, axes: ~collections.abc.Sequence[int] | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, fft_filter: str = 'exp', upd_mask: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, norm: ~corrct.data_terms.DataFidelityBase = <corrct.data_terms.DataFidelity_l12 object>)[source]
Bases:
BaseRegularizer
Fourier regularizer. It can be used to promote sparse reconstructions in the Fourier domain.
- initialize_sigma_tau(primal: ndarray[Any, dtype[_ScalarType_co]]) float | ndarray[Any, dtype[_ScalarType_co]] [source]
Initialize the internal state, operator, and sigma. It then returns the tau.
- Parameters:
primal (NDArray) – The primal vector.
- Returns:
The tau to be used in the SIRT or PDHG algorithm.
- Return type:
Union[float, NDArray]
- class corrct.regularizers.Regularizer_l1(weight: float | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]], upd_mask: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, norm: ~corrct.data_terms.DataFidelityBase = <corrct.data_terms.DataFidelity_l1 object>)[source]
Bases:
BaseRegularizer
l1-norm regularizer. It can be used to promote sparse reconstructions.
- initialize_sigma_tau(primal: ndarray[Any, dtype[_ScalarType_co]]) float | ndarray[Any, dtype[_ScalarType_co]] [source]
Initialize the internal state, operator, and sigma. It then returns the tau.
- Parameters:
primal (NDArray) – The primal vector.
- Returns:
The tau to be used in the SIRT or PDHG algorithm.
- Return type:
Union[float, NDArray]
- class corrct.regularizers.Regularizer_l12dwl(weight: float | ndarray[Any, dtype[_ScalarType_co]], wavelet: str, level: int, ndims: int = 2, axes: Sequence[int] | ndarray[Any, dtype[_ScalarType_co]] | None = None, pad_on_demand: str = 'constant', upd_mask: ndarray[Any, dtype[_ScalarType_co]] | None = None)[source]
Bases:
Regularizer_dwl
l1-norm decimated wavelet regularizer. It can be used to promote sparse reconstructions.
- class corrct.regularizers.Regularizer_l12swl(weight: float | ndarray[Any, dtype[_ScalarType_co]], wavelet: str, level: int, ndims: int = 2, axes: Sequence[int] | ndarray[Any, dtype[_ScalarType_co]] | None = None, pad_on_demand: str = 'constant', upd_mask: ndarray[Any, dtype[_ScalarType_co]] | None = None, normalized: bool = False, min_approx: bool = True)[source]
Bases:
Regularizer_swl
l1-norm Wavelet regularizer. It can be used to promote sparse reconstructions in the wavelet domain.
- class corrct.regularizers.Regularizer_l1dwl(weight: float | ndarray[Any, dtype[_ScalarType_co]], wavelet: str, level: int, ndims: int = 2, axes: Sequence[int] | ndarray[Any, dtype[_ScalarType_co]] | None = None, pad_on_demand: str = 'constant', upd_mask: ndarray[Any, dtype[_ScalarType_co]] | None = None)[source]
Bases:
Regularizer_dwl
l1-norm decimated wavelet regularizer. It can be used to promote sparse reconstructions.
- class corrct.regularizers.Regularizer_l1med(weight: float | ndarray[Any, dtype[_ScalarType_co]], filt_size: int = 3)[source]
Bases:
BaseRegularizer_med
l1-norm median filter regularizer. It can be used to promote filtered reconstructions.
- class corrct.regularizers.Regularizer_l1swl(weight: float | ndarray[Any, dtype[_ScalarType_co]], wavelet: str, level: int, ndims: int = 2, axes: Sequence[int] | ndarray[Any, dtype[_ScalarType_co]] | None = None, pad_on_demand: str = 'constant', upd_mask: ndarray[Any, dtype[_ScalarType_co]] | None = None, normalized: bool = False, min_approx: bool = True)[source]
Bases:
Regularizer_swl
l1-norm Wavelet regularizer. It can be used to promote sparse reconstructions in the wavelet domain.
- class corrct.regularizers.Regularizer_l2med(weight: float | ndarray[Any, dtype[_ScalarType_co]], filt_size: int = 3)[source]
Bases:
BaseRegularizer_med
l2-norm median filter regularizer. It can be used to promote filtered reconstructions.
- class corrct.regularizers.Regularizer_lap(weight: float | ndarray[Any, dtype[_ScalarType_co]], ndims: int = 2, axes: Sequence[int] | ndarray[Any, dtype[_ScalarType_co]] | None = None, pad_mode: str = 'edge', upd_mask: ndarray[Any, dtype[_ScalarType_co]] | None = None)[source]
Bases:
BaseRegularizer
Laplacian regularizer. It can be used to promote smooth reconstructions.
- initialize_sigma_tau(primal: ndarray[Any, dtype[_ScalarType_co]]) float | ndarray[Any, dtype[_ScalarType_co]] [source]
Initialize the internal state, operator, and sigma. It then returns the tau.
- Parameters:
primal (NDArray) – The primal vector.
- Returns:
The tau to be used in the SIRT or PDHG algorithm.
- Return type:
Union[float, NDArray]
- class corrct.regularizers.Regularizer_lap1D(weight, axes: Sequence[int] | ndarray[Any, dtype[_ScalarType_co]] | None = None, pad_mode: str = 'edge', upd_mask: ndarray[Any, dtype[_ScalarType_co]] | None = None)[source]
Bases:
Regularizer_lap
Laplacian regularizer in 1D. It can be used to promote smooth reconstructions.
- class corrct.regularizers.Regularizer_lap2D(weight, axes: Sequence[int] | ndarray[Any, dtype[_ScalarType_co]] | None = None, pad_mode: str = 'edge', upd_mask: ndarray[Any, dtype[_ScalarType_co]] | None = None)[source]
Bases:
Regularizer_lap
Laplacian regularizer in 2D. It can be used to promote smooth reconstructions.
- class corrct.regularizers.Regularizer_lap3D(weight, axes: Sequence[int] | ndarray[Any, dtype[_ScalarType_co]] | None = None, pad_mode: str = 'edge', upd_mask: ndarray[Any, dtype[_ScalarType_co]] | None = None)[source]
Bases:
Regularizer_lap
Laplacian regularizer in 3D. It can be used to promote smooth reconstructions.
- class corrct.regularizers.Regularizer_lnswl(weight: float | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]], wavelet: str, level: int, ndims: int = 2, axes: ~collections.abc.Sequence[int] | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, pad_on_demand: str = 'constant', upd_mask: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, normalized: bool = False, min_approx: bool = True, spectral_norm: ~corrct.data_terms.DataFidelityBase = <corrct.data_terms.DataFidelity_l1 object>, x_ref: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None)[source]
Bases:
Regularizer_l1swl
Nuclear-norm Wavelet regularizer.
It can be used to promote compressed multi-channel reconstructions.
- initialize_sigma_tau(primal: ndarray[Any, dtype[_ScalarType_co]]) float | ndarray[Any, dtype[_ScalarType_co]] [source]
Initialize the internal state, operator, and sigma. It then returns the tau.
- Parameters:
primal (NDArray) – The primal vector.
- Returns:
The tau to be used in the SIRT or PDHG algorithm.
- Return type:
Union[float, NDArray]
- class corrct.regularizers.Regularizer_smooth1D(weight: float | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]], axes: ~collections.abc.Sequence[int] | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, pad_mode: str = 'edge', upd_mask: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, norm: ~corrct.data_terms.DataFidelityBase = <corrct.data_terms.DataFidelity_l2 object>)[source]
Bases:
Regularizer_Grad
It can be used to promote smooth reconstructions.
- class corrct.regularizers.Regularizer_smooth2D(weight: float | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]], axes: ~collections.abc.Sequence[int] | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, pad_mode: str = 'edge', upd_mask: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, norm: ~corrct.data_terms.DataFidelityBase = <corrct.data_terms.DataFidelity_l2 object>)[source]
Bases:
Regularizer_Grad
It can be used to promote smooth reconstructions.
- class corrct.regularizers.Regularizer_smooth3D(weight: float | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]], axes: ~collections.abc.Sequence[int] | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, pad_mode: str = 'edge', upd_mask: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, norm: ~corrct.data_terms.DataFidelityBase = <corrct.data_terms.DataFidelity_l2 object>)[source]
Bases:
Regularizer_Grad
It can be used to promote smooth reconstructions.
- class corrct.regularizers.Regularizer_swl(weight: float | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]], wavelet: str, level: int, ndims: int = 2, axes: ~collections.abc.Sequence[int] | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, pad_on_demand: str = 'constant', upd_mask: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, normalized: bool = False, min_approx: bool = True, norm: ~corrct.data_terms.DataFidelityBase = <corrct.data_terms.DataFidelity_l1 object>)[source]
Bases:
BaseRegularizer
Base stationary wavelet regularizer. It can be used to promote sparse reconstructions in the wavelet domain.
- apply_proximal(dual: ndarray[Any, dtype[_ScalarType_co]]) None [source]
Apply the proximal operator to the dual in-place.
- Parameters:
dual (NDArray) – The dual onto which the proximal operator is applied.
- info() str [source]
Return the regularizer info.
- Returns:
Regularizer info string.
- Return type:
str
- initialize_sigma_tau(primal: ndarray[Any, dtype[_ScalarType_co]]) float | ndarray[Any, dtype[_ScalarType_co]] [source]
Initialize the internal state, operator, and sigma. It then returns the tau.
- Parameters:
primal (NDArray) – The primal vector.
- Returns:
The tau to be used in the SIRT or PDHG algorithm.
- Return type:
Union[float, NDArray]
- class corrct.regularizers.Regularizer_vSVD(weight: float | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]], ndims: int = 2, axes: ~collections.abc.Sequence[int] | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, axis_channels: ~collections.abc.Sequence[int] = (0,), upd_mask: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | None = None, norm: ~corrct.data_terms.DataFidelityBase = <corrct.data_terms.DataFidelity_l1 object>)[source]
Bases:
BaseRegularizer
Regularizer based on the Singular Value Decomposition.
It can be used to promote similar reconstructions across different channels.
- initialize_sigma_tau(primal: ndarray[Any, dtype[_ScalarType_co]]) float | ndarray[Any, dtype[_ScalarType_co]] [source]
Initialize the internal state, operator, and sigma. It then returns the tau.
- Parameters:
primal (NDArray) – The primal vector.
- Returns:
The tau to be used in the SIRT or PDHG algorithm.
- Return type:
Union[float, NDArray]
- class corrct.regularizers.Regularizer_vl1wl(weight: float | ndarray[Any, dtype[_ScalarType_co]], wavelet: str, level: int, ndims: int = 2, axes: Sequence[int] | ndarray[Any, dtype[_ScalarType_co]] | None = None, pad_on_demand: str = 'constant', upd_mask: ndarray[Any, dtype[_ScalarType_co]] | None = None, normalized: bool = False, min_approx: bool = True, pwise_lvl_norm: int | float = 1, pwise_chan_norm: int | float = inf, x_ref: ndarray[Any, dtype[_ScalarType_co]] | None = None)[source]
Bases:
Regularizer_l1swl
l1-norm vectorial Wavelet regularizer. It can be used to promote compressed reconstructions.
- apply_proximal(dual: ndarray[Any, dtype[_ScalarType_co]]) None [source]
Apply the proximal operator to the dual in-place.
- Parameters:
dual (NDArray) – The dual onto which the proximal operator is applied.
- initialize_sigma_tau(primal: ndarray[Any, dtype[_ScalarType_co]]) float | ndarray[Any, dtype[_ScalarType_co]] [source]
Initialize the internal state, operator, and sigma. It then returns the tau.
- Parameters:
primal (NDArray) – The primal vector.
- Returns:
The tau to be used in the SIRT or PDHG algorithm.
- Return type:
Union[float, NDArray]
corrct.solvers module
Solvers for the tomographic reconstruction problem.
@author: Nicola VIGANÒ, Computational Imaging group, CWI, The Netherlands, and ESRF - The European Synchrotron, Grenoble, France
- class corrct.solvers.FBP(verbose: bool = False, leave_progress: bool = False, regularizer: Sequence[BaseRegularizer] | BaseRegularizer | None = None, data_term: str | DataFidelityBase = 'l2', fbp_filter: str | ndarray[Any, dtype[floating]] | Filter = 'ramp', pad_mode: str = 'constant')[source]
Bases:
Solver
Implementation of the Filtered Back-Projection (FBP) algorithm.
- class corrct.solvers.MLEM(verbose: bool = False, leave_progress: bool = True, tolerance: float | None = None, regularizer: Sequence[BaseRegularizer] | BaseRegularizer | None = None, data_term: str | DataFidelityBase = 'kl', data_term_test: str | DataFidelityBase | None = None)[source]
Bases:
Solver
Initialize the MLEM solver class.
This class implements the Maximum Likelihood Expectation Maximization (MLEM) algorithm.
- Parameters:
verbose (bool, optional) – Turn on verbose output. The default is False.
leave_progress (bool, optional) – Leave the progress bar after the computation is finished. The default is True.
tolerance (Optional[float], optional) – Tolerance on the data residual for computing when to stop iterations. The default is None.
regularizer (Sequence[regularizers.BaseRegularizer] | regularizers.BaseRegularizer | None, optional) – Regularizer to be used. The default is None.
data_term (Union[str, data_terms.DataFidelityBase], optional) – Data fidelity term for computing the data residual. The default is “l2”.
data_term_test (Optional[data_terms.DataFidelityBase], optional) – The data fidelity to be used for the test set. If None, it will use the same as for the rest of the data. The default is None.
- class corrct.solvers.PDHG(verbose: bool = False, leave_progress: bool = True, tolerance: float | None = None, relaxation: float = 0.95, regularizer: Sequence[BaseRegularizer] | BaseRegularizer | None = None, data_term: str | DataFidelityBase = 'l2', data_term_test: str | DataFidelityBase | None = None)[source]
Bases:
Solver
Initialize the PDHG solver class.
PDHG stands for primal-dual hybrid gradient algorithm from Chambolle and Pock.
- Parameters:
verbose (bool, optional) – Turn on verbose output. The default is False.
leave_progress (bool, optional) – Leave the progress bar after the computation is finished. The default is True.
tolerance (Optional[float], optional) – Tolerance on the data residual for computing when to stop iterations. The default is None.
relaxation (float, optional) – The relaxation length. The default is 0.95.
regularizer (Sequence[regularizers.BaseRegularizer] | regularizers.BaseRegularizer | None, optional) – Regularizer to be used. The default is None.
data_term (Union[str, data_terms.DataFidelityBase], optional) – Data fidelity term for computing the data residual. The default is “l2”.
data_term_test (Optional[data_terms.DataFidelityBase], optional) – The data fidelity to be used for the test set. If None, it will use the same as for the rest of the data. The default is None.
- power_method(A: BaseTransform, b: ndarray[Any, dtype[floating]], iterations: int = 5) tuple[floating, Sequence[int], dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any]] [source]
Compute the l2-norm of the operator A, with the power method.
- Parameters:
A (BaseTransform) – Operator whose l2-norm needs to be computed.
b (NDArrayFloat) – The data vector.
iterations (int, optional) – Number of power method iterations. The default is 5.
- Returns:
The l2-norm of A, and the shape and type of the solution.
- Return type:
Tuple[float, Tuple[int], DTypeLike]
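A minimal sketch of a TV-regularized PDHG reconstruction. The projector constructor arguments, the solver call signature (projector, data, iterations) and the (reconstruction, SolutionInfo) return pair are assumptions based on the package's usage pattern, not on this page:

```python
import numpy as np
import corrct as cct

# Toy data: forward-project a square phantom.
vol = np.zeros((128, 128), dtype=np.float32)
vol[32:96, 32:96] = 1.0
angles_rot_rad = np.deg2rad(np.arange(0.0, 180.0, 3.0))

reg_tv = cct.regularizers.Regularizer_TV2D(weight=1e-2)
solver = cct.solvers.PDHG(verbose=True, regularizer=reg_tv, data_term="l2")

with cct.projectors.ProjectorUncorrected(vol.shape, angles_rot_rad) as prj:
    sino = prj.fp(vol)
    rec, info = solver(prj, sino, iterations=100)  # assumed callable interface
```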
- class corrct.solvers.SART(verbose: bool = False, leave_progress: bool = True, relaxation: float = 1.0, tolerance: float | None = None, data_term: str | DataFidelityBase = 'l2', data_term_test: str | DataFidelityBase | None = None)[source]
Bases:
Solver
Solver class implementing the Simultaneous Algebraic Reconstruction Technique (SART) algorithm.
- compute_residual(A: Callable, b: ndarray[Any, dtype[floating]], x: ndarray[Any, dtype[floating]], A_num_rows: int, b_mask: ndarray[Any, dtype[floating]] | None) ndarray[Any, dtype[floating]] [source]
Compute the solution residual.
- Parameters:
A (Callable) – The forward projector.
b (NDArrayFloat) – The detector data.
x (NDArrayFloat) – The current solution.
A_num_rows (int) – The number of projections.
b_mask (Optional[NDArrayFloat]) – The mask to apply.
- Returns:
The residual.
- Return type:
NDArrayFloat
- class corrct.solvers.SIRT(verbose: bool = False, leave_progress: bool = True, relaxation: float = 1.95, tolerance: float | None = None, regularizer: Sequence[BaseRegularizer] | BaseRegularizer | None = None, data_term: str | DataFidelityBase = 'l2', data_term_test: str | DataFidelityBase | None = None)[source]
Bases:
Solver
Initialize the SIRT solver class.
This class implements the Simultaneous Iterative Reconstruction Technique (SIRT) algorithm.
- Parameters:
verbose (bool, optional) – Turn on verbose output. The default is False.
leave_progress (bool, optional) – Leave the progress bar after the computation is finished. The default is True.
tolerance (Optional[float], optional) – Tolerance on the data residual for computing when to stop iterations. The default is None.
relaxation (float, optional) – The relaxation length. The default is 1.95.
regularizer (Sequence[regularizers.BaseRegularizer] | regularizers.BaseRegularizer | None, optional) – Regularizer to be used. The default is None.
data_term (Union[str, data_terms.DataFidelityBase], optional) – Data fidelity term for computing the data residual. The default is “l2”.
data_term_test (Optional[data_terms.DataFidelityBase], optional) – The data fidelity to be used for the test set. If None, it will use the same as for the rest of the data. The default is None.
- class corrct.solvers.SolutionInfo(method: str, max_iterations: int, tolerance: float | floating | None, residual0: float = inf, residual0_cv: float = inf)[source]
Bases:
object
Reconstruction info.
- iterations: int
- max_iterations: int
- method: str
- residual0: float | floating
- residual0_cv: float | floating
- residuals: ndarray[Any, dtype[floating]]
- residuals_cv: ndarray[Any, dtype[floating]]
- property residuals_cv_rel: ndarray[Any, dtype[floating]]
- property residuals_rel: ndarray[Any, dtype[floating]]
- tolerance: float | floating | None
- class corrct.solvers.Solver(verbose: bool = False, leave_progress: bool = True, relaxation: float = 1.0, tolerance: float | None = None, data_term: str | DataFidelityBase = 'l2', data_term_test: str | DataFidelityBase | None = None)[source]
Bases:
ABC
Initialize the base solver class.
- Parameters:
verbose (bool, optional) – Turn on verbose output. The default is False.
tolerance (Optional[float], optional) – Tolerance on the data residual for computing when to stop iterations. The default is None.
relaxation (float, optional) – The relaxation length. The default is 1.0.
data_term (Union[str, data_terms.DataFidelityBase], optional) – Data fidelity term for computing the data residual. The default is “l2”.
data_term_test (Optional[data_terms.DataFidelityBase], optional) – The data fidelity to be used for the test set. If None, it will use the same as for the rest of the data. The default is None.
corrct.struct_illum module
Provide structured illumination support.
Created on Sun Jan 9 17:39:02 2022
@author: Nicola VIGANÒ, Computational Imaging group, CWI, The Netherlands, and ESRF - The European Synchrotron, Grenoble, France
- class corrct.struct_illum.MaskCollection(masks_enc: ndarray[Any, dtype[_ScalarType_co]], masks_dec: ndarray[Any, dtype[_ScalarType_co]] | None = None, mask_dims: int = 2, mask_type: str = 'measured', mask_support: None | Sequence[int] | ndarray[Any, dtype[integer]] = None)[source]
Bases:
object
Define mask collection class.
- bin_masks(binning: float) MaskCollection [source]
Bin the masks.
- Parameters:
binning (float) – The binning size.
- Returns:
A new collection of binned masks.
- Return type:
MaskCollection
- get_QR_decomposition(buckets: ndarray[Any, dtype[_ScalarType_co]], shift: int = 0) tuple[MaskCollection, ndarray[Any, dtype[_ScalarType_co]]] [source]
Compute and return the QR decomposition of the masks.
- Parameters:
buckets (NDArray) – The buckets corresponding to the masks.
shift (int, optional) – Index of the first mask, by default 0.
- Returns:
The new mask collection and the matrix for modifying buckets accordingly.
- Return type:
Tuple[MaskCollection, NDArray]
- Raises:
ValueError – In case the masks have encoding-decoding form.
- get_mask(mask_inds_vu: Sequence | ndarray[Any, dtype[_ScalarType_co]], mask_encoding: bool = True) ndarray[Any, dtype[_ScalarType_co]] [source]
Return the requested mask.
- Parameters:
mask_inds_vu (Union[Sequence, NDArray]) – The mask position.
mask_encoding (bool, optional) – Whether it is an encoding or decoding mask, by default True
- Returns:
The requested mask.
- Return type:
NDArray
- inspect_masks(mask_inds_vu: None | Sequence[int] | ndarray[Any, dtype[integer]] = None)[source]
Inspect the encoding and decoding masks at the requested shifts.
- Parameters:
mask_inds_vu (Sequence[int] | NDArray[np.integer] | None, optional) – The requested axes shifts. The default (None) is the first mask.
- lower() str [source]
Return the lower case name of the mask.
- Returns:
Lower case string name of the mask.
- Return type:
str
- mask_dims: int
- mask_support: ndarray[Any, dtype[integer]]
- mask_type: str
- masks_dec: ndarray[Any, dtype[_ScalarType_co]]
- masks_enc: ndarray[Any, dtype[_ScalarType_co]]
- property num_buckets: int
Compute the total number of available buckets.
- Returns:
The total number of buckets.
- Return type:
int
- property num_pixels: int
Compute the number of pixels in the image.
- Returns:
The number of pixels in the image.
- Return type:
int
- property shape_fov: Sequence[int]
Return the mask shape.
- Returns:
The mask shape.
- Return type:
Sequence[int]
- property shape_shifts: Sequence[int]
Compute the shape of the available shifts.
- Returns:
The shape of the available shifts.
- Return type:
Sequence[int]
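A minimal sketch of building a collection from a stack of measured masks; the array layout (two leading shift axes, two trailing mask axes) is an assumption made for illustration:

```python
import numpy as np
from corrct import struct_illum

# Assumed layout: (vertical shifts, horizontal shifts, mask rows, mask cols),
# with mask_dims=2 meaning the two trailing axes are the mask itself.
masks_enc = np.random.rand(16, 16, 8, 8).astype(np.float32)
mc = struct_illum.MaskCollection(masks_enc, mask_type="measured")

print(mc.num_buckets)          # total number of available buckets
print(mc.shape_fov)            # mask / field-of-view shape
print(mc.shape_shifts)         # shape of the available shifts
mask_00 = mc.get_mask([0, 0])  # encoding mask at shift (0, 0)
```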
- class corrct.struct_illum.MaskGenerator(shape_fov: ~collections.abc.Sequence[int] | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy.integer]], shape_mask: ~collections.abc.Sequence[int] | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy.integer]], shape_shifts: ~collections.abc.Sequence[int] | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy.integer]], transmittance: float = 1.0, dtype: ~numpy.dtype[~typing.Any] | None | type[~typing.Any] | ~numpy._typing._dtype_like._SupportsDType[~numpy.dtype[~typing.Any]] | str | tuple[~typing.Any, int] | tuple[~typing.Any, ~typing.SupportsIndex | ~collections.abc.Sequence[~typing.SupportsIndex]] | list[~typing.Any] | ~numpy._typing._dtype_like._DTypeDict | tuple[~typing.Any, ~typing.Any] = <class 'numpy.float32'>)[source]
Bases:
ABC
Define mask generation interface.
- dtype: dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any]
- generate_collection(buckets_fraction: float = 1, shift_type: str = 'sequential') MaskCollection [source]
Generate the mask collection.
- Parameters:
buckets_fraction (float, optional) – The fraction of buckets to generate, by default 1
shift_type (str, optional) – The type of shift to implement, by default “sequential”
- Returns:
The generated mask collection.
- Return type:
MaskCollection
- Raises:
ValueError – In case of wrong shift type.
- abstract generate_shifted_mask(mask_inds_vu: Sequence | ndarray[Any, dtype[_ScalarType_co]], mask_encoding: bool = True) ndarray[Any, dtype[_ScalarType_co]] [source]
Produce the shifted masks.
- Parameters:
mask_inds_vu (tuple | list | NDArray) – The vertical and horizontal shifts.
mask_encoding (bool, optional) – Is the mask encoding (False = decoding). The default is True.
- Returns:
The shifted mask.
- Return type:
NDArray
- get_interval_shifts(interval: int | Sequence[int] | ndarray[Any, dtype[_ScalarType_co]], axes_order: Sequence[int] = (-2, -1)) Sequence[ndarray[Any, dtype[_ScalarType_co]]] [source]
Produce shifts for the “interval” shift type.
- Parameters:
interval (int | tuple(int, int) | list(int, int)) – The shift interval.
axes_order (int | tuple | list, optional) – Order of the axes to shift. The default is (-2, -1).
- Returns:
The collection of shifts.
- Return type:
tuple
- get_random_shifts(num_shifts: int, axes_order: Sequence[int] = (-2, -1)) Sequence[ndarray[Any, dtype[_ScalarType_co]]] [source]
Produce shifts for the “random” shift type.
- Parameters:
num_shifts (int) – Number of shifts.
- Returns:
The collection of shifts.
- Return type:
NDArray
- get_sequential_shifts(num_shifts: int | None = None, axes_order: Sequence[int] = (-2, -1)) Sequence[ndarray[Any, dtype[_ScalarType_co]]] [source]
Produce shifts for the “sequential” shift type.
- Parameters:
num_shifts (int, optional) – Number of shifts. The default is None.
axes_order (tuple | list | NDArray, optional) – Order of the axes to shift. The default is (-2, -1).
- Returns:
The collection of shifts.
- Return type:
NDArray
- property max_buckets: int
Compute the maximum number of buckets.
- Returns:
The maximum number of buckets.
- Return type:
int
- property num_pixels: int
Compute the number of pixels in the image.
- Returns:
The number of pixels in the image.
- Return type:
int
- shape_fov: ndarray[Any, dtype[integer]]
- shape_mask: ndarray[Any, dtype[integer]]
- shape_shifts: ndarray[Any, dtype[integer]]
- transmittance: float
- class corrct.struct_illum.MaskGeneratorBernoulli(fov_size_mm: float | Sequence[float] | ndarray[Any, dtype[_ScalarType_co]], req_res_mm: float = 1.0, max_masks_ratio: float = 1.2)[source]
Bases:
MaskGenerator
Bernoulli mask generator class.
- generate_shifted_mask(mask_inds_vu: Sequence | ndarray[Any, dtype[_ScalarType_co]], mask_encoding: bool = True) ndarray[Any, dtype[_ScalarType_co]] [source]
Produce the shifted masks.
- Parameters:
mask_inds_vu (tuple | list | NDArray) – The vertical and horizontal shifts.
mask_encoding (bool, optional) – Is the mask encoding (False = decoding). The default is True.
- Returns:
The shifted mask.
- Return type:
NDArray
- class corrct.struct_illum.MaskGeneratorHalfGaussian(fov_size_mm: float | Sequence[float] | ndarray[Any, dtype[_ScalarType_co]], req_res_mm: float = 1.0, max_masks_ratio: float = 1.2)[source]
Bases:
MaskGenerator
Half-Gaussian mask generator class.
- generate_shifted_mask(mask_inds_vu: Sequence | ndarray[Any, dtype[_ScalarType_co]], mask_encoding: bool = True) ndarray[Any, dtype[_ScalarType_co]] [source]
Produce the shifted masks.
- Parameters:
mask_inds_vu (tuple | list | NDArray) – The vertical and horizontal shifts.
mask_encoding (bool, optional) – Is the mask encoding (False = decoding). The default is True.
- Returns:
The shifted mask.
- Return type:
NDArray
- class corrct.struct_illum.MaskGeneratorMURA(fov_size_mm: float, req_res_mm: float = 1.0)[source]
Bases:
MaskGenerator
MURA mask generator class.
- static compute_possible_mask_sizes(fov_size: int) ndarray[Any, dtype[_ScalarType_co]] [source]
Compute MURA mask sizes.
MURA masks require specific edge sizes: prime numbers _x_ that also satisfy the rule: _x_ = 4 * _l_ + 1, where _l_ is an integer.
- Parameters:
fov_size (int) – Edge size of the fov in pixels.
- Returns:
Array of all possible MURA mask sizes in that range.
- Return type:
NDArray
- generate_shifted_mask(mask_inds_vu: Sequence | ndarray[Any, dtype[_ScalarType_co]], mask_encoding: bool = True) ndarray[Any, dtype[_ScalarType_co]] [source]
Produce the shifted masks.
- Parameters:
mask_inds_vu (tuple | list | NDArray) – The vertical and horizontal shifts.
mask_encoding (bool, optional) – Is the mask encoding (False = decoding). The default is True.
- Returns:
The shifted mask.
- Return type:
NDArray
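A minimal sketch of using the MURA generator; the numbers are illustrative, and it is assumed that the generator picks a compatible MURA size for the requested field of view and resolution:

```python
from corrct import struct_illum

# Valid MURA edge sizes (primes of the form 4 * l + 1) up to 100 pixels:
sizes = struct_illum.MaskGeneratorMURA.compute_possible_mask_sizes(100)
print(sizes)  # e.g. 5, 13, 17, 29, ...

gen = struct_illum.MaskGeneratorMURA(fov_size_mm=10.0, req_res_mm=0.5)
mc = gen.generate_collection()  # MaskCollection with sequential shifts
print(gen.max_buckets, mc.num_buckets)
```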
- class corrct.struct_illum.MaskGeneratorPoint(fov_size_mm: float | Sequence[float] | ndarray[Any, dtype[_ScalarType_co]], req_res_mm: float = 1.0)[source]
Bases:
MaskGenerator
Pencil beam masks generator class.
- generate_shifted_mask(mask_inds_vu: Sequence | ndarray[Any, dtype[_ScalarType_co]], mask_encoding: bool = True) ndarray[Any, dtype[_ScalarType_co]] [source]
Produce the shifted masks.
- Parameters:
mask_inds_vu (tuple | list | NDArray) – The vertical and horizontal shifts.
mask_encoding (bool, optional) – Is the mask encoding (False = decoding). The default is True.
- Returns:
The shifted mask.
- Return type:
NDArray
- class corrct.struct_illum.ProjectorGhostImaging(*args, **kwargs)[source]
Bases:
ProjectorOperator
Projector class for the ghost imaging reconstructions.
- absolute()[source]
Compute the absolute value of the projection operator coefficients.
- Returns:
Op_a – The absolute value of the projector.
- Return type:
- adjust_sampling_scaling(image: ndarray[Any, dtype[_ScalarType_co]]) ndarray[Any, dtype[_ScalarType_co]] [source]
Adjust reconstruction scaling and bias, due to the undersampling.
- Parameters:
image (NDArray) – Unscaled image.
- Returns:
Scaled image.
- Return type:
NDArray
- bp(bucket_vals: ndarray[Any, dtype[_ScalarType_co]]) ndarray[Any, dtype[_ScalarType_co]] [source]
Compute back-projection of the bucket values.
- Parameters:
bucket_vals (NDArray) – The list of bucket values.
- Returns:
Back-projected image.
- Return type:
NDArray
- fbp(bucket_vals: ndarray[Any, dtype[_ScalarType_co]], use_lstsq: bool = True, adjust_scaling: bool = False) ndarray[Any, dtype[_ScalarType_co]] [source]
Compute cross-correlation reconstruction of the bucket values.
- Parameters:
bucket_vals (NDArray) – The bucket values to reconstruct.
use_lstsq (bool, optional) – If True, uses least squares for reconstruction, by default True.
adjust_scaling (bool, optional) – If True, adjusts scaling of the reconstructed image, to account for the intensity loss, by default False.
- Returns:
The reconstructed image.
- Return type:
NDArray
- fp(image: ndarray[Any, dtype[_ScalarType_co]]) ndarray[Any, dtype[_ScalarType_co]] [source]
Compute forward-projection (prediction) of the bucket values.
- Parameters:
image (NDArray) – The image for which we want to predict the bucket values.
- Returns:
The predicted bucket values.
- Return type:
NDArray
- mc: MaskCollection
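A minimal sketch of the ghost-imaging projector; the constructor argument (a MaskCollection, suggested by the mc attribute above) is an assumption:

```python
import numpy as np
from corrct import struct_illum

gen = struct_illum.MaskGeneratorBernoulli(fov_size_mm=10.0, req_res_mm=0.5)
mc = gen.generate_collection()

prj = struct_illum.ProjectorGhostImaging(mc)  # assumed constructor argument
image = np.random.rand(*mc.shape_fov).astype(np.float32)

buckets = prj.fp(image)    # predicted bucket values for the image
img_bp = prj.bp(buckets)   # plain back-projection of the bucket values
img_xc = prj.fbp(buckets)  # cross-correlation / least-squares reconstruction
```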
- corrct.struct_illum.decompose_qr_masks(masks: ndarray[Any, dtype[_ScalarType_co]], verbose: bool = False) tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]]] [source]
Compute QR decomposition of the given masks.
- Parameters:
masks (NDArray) – The masks to decompose
verbose (bool, optional) – Whether to emit verbose output, by default False
- Returns:
The Q and R components.
- Return type:
Tuple[NDArray, NDArray]
- corrct.struct_illum.estimate_resolution(masks: ndarray[Any, dtype[_ScalarType_co]], verbose: bool = True, plot_result: bool = True) tuple[float, float] [source]
Estimate the mask collection resolution through auto-correlation.
- Parameters:
masks (NDArray) – The list of encoding masks
verbose (bool, optional) – Whether to produce verbose output, by default True
plot_result (bool, optional) – Whether to plot the results, by default True
- Returns:
The mean and minimum HWHM of the auto-correlation functions for all the masks.
- Return type:
tuple[float, float]
- corrct.struct_illum.reorder_masks(masks: ndarray[Any, dtype[_ScalarType_co]], buckets: ndarray[Any, dtype[_ScalarType_co]], shift: int) tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]]] [source]
Reorder masks, with a simple rot-n algorithm.
- Parameters:
masks (NDArray) – The masks to re-order.
buckets (NDArray) – The corresponding buckets.
shift (int) – The length of the shift.
- Returns:
The reordered masks and buckets.
- Return type:
Tuple[NDArray, NDArray]
corrct.testing module
Provide utility functions for testing.
Created on Thu Jun 4 12:28:21 2020
@author: Nicola VIGANÒ, Computational Imaging group, CWI, The Netherlands, and ESRF - The European Synchrotron, Grenoble, France
- corrct.testing.add_noise(img_clean: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]], num_photons: int | float, add_poisson: bool = False, readout_noise_std: float | None = None, background_avg: float | None = None, background_std: float | None = None, detection_efficiency: float = 1.0, dtype: ~numpy.dtype[~typing.Any] | None | type[~typing.Any] | ~numpy._typing._dtype_like._SupportsDType[~numpy.dtype[~typing.Any]] | str | tuple[~typing.Any, int] | tuple[~typing.Any, ~typing.SupportsIndex | ~collections.abc.Sequence[~typing.SupportsIndex]] | list[~typing.Any] | ~numpy._typing._dtype_like._DTypeDict | tuple[~typing.Any, ~typing.Any] = <class 'numpy.float32'>) tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]], float] [source]
Add noise to an image (sinogram).
- Parameters:
img_clean (NDArray) – The clean input image.
num_photons (Union[int, float]) – Number of photons corresponding to the value 1.0 in the image.
add_poisson (bool, optional) – Whether to add Poisson noise, by default False.
readout_noise_std (Optional[float], optional) – Standard deviation of the readout noise, by default None.
background_avg (Optional[float], optional) – Average value of the background, by default None.
background_std (Optional[float], optional) – Standard deviation of the background, by default None.
detection_efficiency (float, optional) – Efficiency of the detection (e.g. detector solid angle, inclination, etc.), by default 1.0.
dtype (DTypeLike, optional) – Data type of the volumes, by default np.float32.
- Returns:
The noisy and clean images (both scaled by the photon count and detection efficiency), and the background.
- Return type:
Tuple[NDArray, NDArray, float]
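A minimal sketch of add_noise with illustrative values:

```python
import numpy as np
from corrct import testing

sino_clean = np.random.rand(120, 128).astype(np.float32)

# A value of 1.0 in the clean image corresponds to 1e4 photons; add Poisson
# noise and Gaussian readout noise.
sino_noisy, sino_scaled, background = testing.add_noise(
    sino_clean, num_photons=1e4, add_poisson=True, readout_noise_std=2.0
)
```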
- corrct.testing.compute_error_power(expected_vol: ndarray[Any, dtype[floating]], computed_vol: ndarray[Any, dtype[floating]]) tuple[float, float] [source]
Compute the expected volume signal power, and computed volume error power.
- Parameters:
expected_vol (NDArrayFloat) – The expected volume.
computed_vol (NDArrayFloat) – The computed volume.
- Returns:
The expected volume signal power, and the computed volume error power.
- Return type:
Tuple[float, float]
- corrct.testing.create_phantom_nuclei3d(FoV_size: int | None = 100, dtype: ~numpy.dtype[~typing.Any] | None | type[~typing.Any] | ~numpy._typing._dtype_like._SupportsDType[~numpy.dtype[~typing.Any]] | str | tuple[~typing.Any, int] | tuple[~typing.Any, ~typing.SupportsIndex | ~collections.abc.Sequence[~typing.SupportsIndex]] | list[~typing.Any] | ~numpy._typing._dtype_like._DTypeDict | tuple[~typing.Any, ~typing.Any] = <class 'numpy.float32'>) tuple[ndarray[Any, dtype[floating]], ndarray[Any, dtype[floating]], ndarray[Any, dtype[floating]]] [source]
Create a 3D phantom of cell nuclei.
- Parameters:
FoV_size (int | None) – Size of the field-of-view in pixels, per edge, by default 100
dtype (DTypeLike, optional) – The dtype of the produced data, by default np.float32
- Returns:
The nuclei 3D phantom, the background, and the pixel sizes along each axis
- Return type:
tuple[NDArrayFloat, NDArrayFloat, NDArrayFloat]
- corrct.testing.create_sino(ph: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy.floating]], num_angles: int, start_angle_deg: float = 0.0, end_angle_deg: float = 180.0, dwell_time_s: float = 1.0, photon_flux: float = 1000000000.0, detectors_pos_rad: float | ~collections.abc.Sequence[float] | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy.floating]] = 1.5707963267948966, vol_att_in: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy.floating]] | None = None, vol_att_out: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy.floating]] | None = None, psf: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy.floating]] | None = None, background_avg: float | None = None, background_std: float | None = None, add_poisson: bool = False, readout_noise_std: float | None = None, dtype: ~numpy.dtype[~typing.Any] | None | type[~typing.Any] | ~numpy._typing._dtype_like._SupportsDType[~numpy.dtype[~typing.Any]] | str | tuple[~typing.Any, int] | tuple[~typing.Any, ~typing.SupportsIndex | ~collections.abc.Sequence[~typing.SupportsIndex]] | list[~typing.Any] | ~numpy._typing._dtype_like._DTypeDict | tuple[~typing.Any, ~typing.Any] = <class 'numpy.float32'>) tuple[ndarray[Any, dtype[floating]], ndarray[Any, dtype[floating]], ndarray[Any, dtype[floating]], float] [source]
Compute the sinogram from a given phantom.
- Parameters:
ph (NDArrayFloat) – The phantom volume, with the expected average photon production per voxel per impinging photon.
num_angles (int) – The number of angles.
start_angle_deg (float, optional) – Initial scan angle in degrees. The default is 0.
end_angle_deg (float, optional) – Final scan angle in degrees. The default is 180.
dwell_time_s (float, optional) – The acquisition time per sinogram point. The default is 1.
photon_flux (float, optional) – The impinging photon flux per unit time (second). The default is 1e9.
detectors_pos_rad (float | Sequence[float] | NDArrayFloat, optional) – Detector(s) positions in radians, with respect to incoming beam. The default is (np.pi / 2).
vol_att_in (NDArrayFloat, optional) – Attenuation volume for the incoming beam. The default is None.
vol_att_out (NDArrayFloat, optional) – Attenuation volume for the outgoing beam. The default is None.
psf (NDArrayFloat, optional) – Point spread function or probing beam profile. The default is None.
background_avg (float, optional) – Background average value. The default is None.
background_std (float, optional) – Background standard deviation. The default is None.
add_poisson (bool, optional) – Switch to turn on Poisson noise. The default is False.
readout_noise_std (float, optional) – Read-out noise standard deviation. The default is None.
dtype (numpy.dtype, optional) – Output datatype. The default is np.float32.
- Returns:
The sinogram (detector readings), the angular positions, and the expected average photon production per voxel.
- Return type:
Tuple[NDArrayFloat, NDArrayFloat, NDArrayFloat, float]
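A minimal sketch of simulating a noisy sinogram from a toy phantom; only a subset of the documented options is used, the numbers are illustrative, and the interpretation of the last returned element as the background level is an assumption:

```python
import numpy as np
from corrct import testing

# Expected photon production per voxel per impinging photon.
ph = np.zeros((64, 64), dtype=np.float32)
ph[16:48, 16:48] = 1e-3

sino, angles_rad, expected_ph, background = testing.create_sino(
    ph, num_angles=90, add_poisson=True, photon_flux=1e9
)
```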
- corrct.testing.create_sino_transmission(ph: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy.floating]], num_angles: int, start_angle_deg: float = 0, end_angle_deg: float = 180, dwell_time_s: float = 1, photon_flux: float = 1000000000.0, psf: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy.floating]] | None = None, add_poisson: bool = False, readout_noise_std: float | None = None, dtype: ~numpy.dtype[~typing.Any] | None | type[~typing.Any] | ~numpy._typing._dtype_like._SupportsDType[~numpy.dtype[~typing.Any]] | str | tuple[~typing.Any, int] | tuple[~typing.Any, ~typing.SupportsIndex | ~collections.abc.Sequence[~typing.SupportsIndex]] | list[~typing.Any] | ~numpy._typing._dtype_like._DTypeDict | tuple[~typing.Any, ~typing.Any] = <class 'numpy.float32'>) tuple[ndarray[Any, dtype[floating]], ndarray[Any, dtype[floating]], ndarray[Any, dtype[floating]], ndarray[Any, dtype[floating]]] [source]
Compute the sinogram from a given phantom.
- Parameters:
ph (NDArrayFloat) – The phantom volume, with the expected average photon production per voxel per impinging photon.
num_angles (int) – The number of angles.
start_angle_deg (float, optional) – Initial scan angle in degrees. The default is 0.
end_angle_deg (float, optional) – Final scan angle in degrees. The default is 180.
dwell_time_s (float, optional) – The acquisition time per sinogram point. The default is 1.
photon_flux (float, optional) – The impinging photon flux per unit time (second). The default is 1e9.
psf (NDArrayFloat, optional) – Point spread function or probing beam profile. The default is None.
add_poisson (bool, optional) – Switch to turn on Poisson noise. The default is False.
readout_noise_std (float, optional) – Read-out noise standard deviation. The default is None.
dtype (numpy.dtype, optional) – Output datatype. The default is np.float32.
- Returns:
The sinogram (detector readings), the flat-field, and the angular positions.
- Return type:
Tuple[NDArrayFloat, NDArrayFloat, NDArrayFloat, NDArrayFloat]
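Example (a sketch; the phantom values are invented, and the unpacking order of the returned arrays is an assumption based on the return description above):

    import numpy as np
    from corrct import testing

    # Toy attenuation phantom (hypothetical values).
    ph = np.zeros((64, 64), dtype=np.float32)
    ph[16:48, 16:48] = 1e-3

    outputs = testing.create_sino_transmission(ph, num_angles=180, add_poisson=True)
    # Per the return description above, the tuple contains the sinogram
    # (detector readings), the flat-field, and the angular positions; the
    # ordering assumed below is not asserted by the documentation.
    sino, flat = outputs[0], outputs[1]

    # A common next step is flat-field normalization (Beer-Lambert):
    # sino_att = -np.log(sino / flat)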
- corrct.testing.download_phantom()[source]
Download the phantom generation module from Nicolas Barbey’s repository on GitHub.
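A common usage pattern (sketch; it assumes the downloaded module exposes a `modified_shepp_logan` function and is placed where it can be imported):

    import numpy as np
    from corrct import testing

    try:
        import phantom
    except ImportError:
        # Fetch the phantom generation module, then import it.
        testing.download_phantom()
        import phantom

    # Build a 2D phases map from the modified Shepp-Logan phantom
    # (assuming the downloaded module provides `modified_shepp_logan`).
    ph = np.squeeze(phantom.modified_shepp_logan([256, 256, 3]))[:, :, 1]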
- corrct.testing.phantom_assign_concentration(ph_or: ndarray[Any, dtype[floating]], element: str = 'Ca', em_line: str = 'KA', in_energy_keV: float = 20.0, voxel_size_um: float = 0.5) tuple[ndarray[Any, dtype[floating]], ndarray[Any, dtype[floating]], ndarray[Any, dtype[floating]]] [source]
Build an XRF phantom.
The created phantom has been used in: N. Viganò and V. A. Solé, “Physically corrected forward operators for induced emission tomography: a simulation study,” Meas. Sci. Technol., no. Advanced X-Ray Tomography, pp. 1–26, Nov. 2017.
- Parameters:
ph_or (NDArrayFloat) – The phases phantom map.
element (str, optional) – Element symbol. The default is “Ca”.
em_line (str, optional) – Emission line. The default is “KA” (corresponding to K-alpha).
in_energy_keV (float, optional) – Incoming beam energy in keV. The default is 20.0.
voxel_size_um (float, optional) – Voxel size in micrometers. The default is 0.5.
- Returns:
vol_fluo_yield (NDArrayFloat) – Voxel-wise fluorescence yields.
vol_att_in (NDArrayFloat) – Voxel-wise attenuation at the incoming beam energy.
vol_att_out (NDArrayFloat) – Voxel-wise attenuation at the emitted energy.
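A sketch of how the three returned volumes feed into create_sino (the use of the modified Shepp-Logan phases map from download_phantom and the availability of the underlying X-ray physics data are assumptions of this example):

    import numpy as np
    from corrct import testing

    try:
        import phantom
    except ImportError:
        testing.download_phantom()
        import phantom

    # Phases map (the phase values encode the different materials).
    ph_or = np.squeeze(phantom.modified_shepp_logan([256, 256, 3]))[:, :, 1]

    # Ca K-alpha fluorescence yield, plus attenuation maps at the incoming
    # (20 keV) and emitted energies.
    vol_fluo, vol_att_in, vol_att_out = testing.phantom_assign_concentration(
        ph_or, element="Ca", em_line="KA", in_energy_keV=20.0
    )

    # The three volumes plug directly into create_sino for a self-attenuated
    # XRF sinogram (keyword names as documented above).
    sino, angles_rad, expected_ph, _ = testing.create_sino(
        vol_fluo, num_angles=120, vol_att_in=vol_att_in, vol_att_out=vol_att_out, add_poisson=True
    )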
- corrct.testing.phantom_assign_concentration_multi(ph_or: ndarray[Any, dtype[floating]], elements: Sequence[str] = ('Ca', 'Fe'), em_lines: str | Sequence[str] = 'KA', in_energy_keV: float = 20.0, detectors_pos_rad: float | None = None) tuple[list[ndarray[Any, dtype[floating]]], ndarray[Any, dtype[floating]], list[ndarray[Any, dtype[floating]]]] [source]
Build an XRF phantom.
The created phantom has been used in: N. Viganò and V. A. Solé, “Physically corrected forward operators for induced emission tomography: a simulation study,” Meas. Sci. Technol., no. Advanced X-Ray Tomography, pp. 1–26, Nov. 2017.
- Parameters:
ph_or (NDArrayFloat) – The phases phantom map.
elements (Sequence[str], optional) – Element symbols. The default is [“Ca”, “Fe”].
em_lines (str | Sequence[str], optional) – Emission lines. The default is “KA” (corresponding to K-alpha).
in_energy_keV (float, optional) – Incoming beam energy in keV. The default is 20.0.
detectors_pos_rad (float | tuple | list | NDArrayFloat, optional) – Detector(s) positions in radians, with respect to incoming beam. If None, Compton is not produced. The default is None.
- Returns:
vol_yield (List[NDArrayFloat]) – Voxel-wise fluorescence and Compton yields.
vol_att_in (NDArrayFloat) – Voxel-wise attenuation at the incoming beam energy.
vol_att_out (List[NDArrayFloat]) – Voxel-wise attenuation at the emitted energy.
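A minimal sketch of the multi-element variant (the flat toy phases map and the chosen detector position are illustrative assumptions; in practice the phases map would typically be the modified Shepp-Logan phantom from the sketches above):

    import numpy as np
    from corrct import testing

    # Toy phases map, used only to keep the snippet self-contained.
    ph_or = np.zeros((64, 64), dtype=np.float32)
    ph_or[16:48, 16:48] = 0.3

    # One yield / outgoing-attenuation map per produced signal; passing a
    # detector position also enables Compton production (see above).
    vols_yield, vol_att_in, vols_att_out = testing.phantom_assign_concentration_multi(
        ph_or, elements=("Ca", "Fe"), em_lines="KA", detectors_pos_rad=np.pi / 2
    )
    print(len(vols_yield), len(vols_att_out))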
- corrct.testing.roundup_to_pow2(x: int | float | ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy.floating]], p: int, dtype: ~numpy.dtype[~typing.Any] | None | type[~typing.Any] | ~numpy._typing._dtype_like._SupportsDType[~numpy.dtype[~typing.Any]] | str | tuple[~typing.Any, int] | tuple[~typing.Any, ~typing.SupportsIndex | ~collections.abc.Sequence[~typing.SupportsIndex]] | list[~typing.Any] | ~numpy._typing._dtype_like._DTypeDict | tuple[~typing.Any, ~typing.Any] = <class 'int'>) int | float | ndarray[Any, dtype[floating]] [source]
Round the first argument up to the power of 2 indicated by the second argument.
- Parameters:
x (int | float | NDArrayFloat) – Number to round up.
p (int) – Power of 2.
dtype (DTypeLike, optional) – data type of the output. The default is int.
- Returns:
The rounded-up input.
- Return type:
int | float | NDArrayFloat
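For illustration (a sketch assuming the function rounds its input up to the nearest multiple of 2**p, which is how the description above reads):

    import numpy as np
    from corrct import testing

    # Round a scalar and an array of sizes up to multiples of 2**5 = 32 and
    # 2**3 = 8, respectively (assumed behaviour, see the note above).
    n = testing.roundup_to_pow2(100, 5)
    sizes = testing.roundup_to_pow2(np.array([100.0, 200.0, 300.0]), 3, dtype=int)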
Module contents
Top-level package for PyCorrectedEmissionCT.