corrct.processing package

Submodules

corrct.processing.misc module

Miscellaneous processing routines.

@author: Nicola VIGANÒ, Computational Imaging group, CWI, The Netherlands, and ESRF - The European Synchrotron, Grenoble, France

corrct.processing.misc.azimuthal_integration(img: NDArray, axes: Sequence[int] = (-2, -1), domain: str = 'direct') NDArray[source]

Compute the azimuthal integration of an n-dimensional image or a stack of such images.

Parameters:
  • img (NDArray) – The image or stack of images.

  • axes (tuple(int, int), optional) – Axes that need to be azimuthally integrated. The default is (-2, -1).

  • domain (string, optional) – Domain of the integration. Options are: “direct” | “fourier”. Default is “direct”.

Raises:

ValueError – Raised when the input is not an image (or stack of images), or when the given axes are invalid.

Returns:

The azimuthally integrated profile.

Return type:

NDArray
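
A minimal usage sketch; the image content and shape below are illustrative assumptions, not part of the library:

import numpy as np
from corrct.processing import misc

img = np.random.rand(128, 128)  # a single 2D image (assumed)
profile = misc.azimuthal_integration(img, axes=(-2, -1), domain="direct")
print(profile.shape)  # the azimuthally integrated (radial) profile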

corrct.processing.misc.ball(data_shape_vu: ArrayLike, radius: int | float, super_sampling: int = 5, dtype: DTypeLike = np.float32, func: Callable | None = None) ArrayLike[source]

Compute a ball with specified radius.

Parameters:
  • data_shape_vu (ArrayLike) – Shape of the output array.

  • radius (int | float) – Radius of the ball.

  • super_sampling (int, optional) – Super-sampling for having smoother ball edges. The default is 5.

  • dtype (DTypeLike, optional) – Type of the output. The default is np.float32.

  • func (Optional[Callable], optional) – Point-wise function for the local values. The default is None.

Returns:

The ball.

Return type:

ArrayLike
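
A short sketch of generating a smooth ball; the array shape and radius are illustrative assumptions:

import numpy as np
from corrct.processing import misc

# A 2D "ball" (disk) of radius 16 in a 64x64 array, with super-sampled edges.
ball_img = misc.ball([64, 64], radius=16.0, super_sampling=5, dtype=np.float32)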

corrct.processing.misc.circular_mask(vol_shape_zxy: Sequence[int] | NDArrayInt, radius_offset: float = 0, coords_ball: Sequence[int] | NDArrayInt | None = None, vol_origin_zxy: Sequence[float] | NDArray | None = None, taper_func: str | None = None, taper_target: str | float = 'edge', super_sampling: int = 1, squeeze: bool = True, dtype: DTypeLike = np.float32) NDArray[source]

Compute a circular mask for the reconstruction volume.

Parameters:
  • vol_shape_zxy (Sequence[int] | NDArrayInt) – The size of the volume.

  • radius_offset (float, optional) – The offset with respect to the volume edge. The default is 0.

  • coords_ball (Sequence[int] | NDArrayInt | None, optional) – The coordinates to consider for the non-masked region. The default is None.

  • vol_origin_zxy (Optional[Sequence[float]], optional) – The origin of the coordinates in voxels. The default is None.

  • taper_func (str | None, optional) – The tapering function applied at the mask border. Allowed values: “const” | “cos”. The default is None.

  • taper_target (str | float, optional) – The size target. Allowed values: “edge” | “diagonal”. The default is “edge”.

  • super_sampling (int, optional) – The pixel super sampling to be used for the mask. The default is 1.

  • squeeze (bool, optional) – Whether to squeeze the mask. The default is True.

  • dtype (DTypeLike, optional) – The data type of the mask. The default is np.float32.

Raises:

ValueError – In case of unknown taper_func value, or mismatching volume origin and shape.

Returns:

The circular mask.

Return type:

NDArray
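
A usage sketch; the volume shape and taper settings are illustrative assumptions:

import numpy as np
from corrct.processing import misc

vol_shape_zxy = [256, 256]  # a 2D reconstruction volume (assumed)
mask = misc.circular_mask(vol_shape_zxy, radius_offset=-4, taper_func="cos")
masked_rec = mask * np.random.rand(*vol_shape_zxy)  # apply the mask to a reconstruction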

corrct.processing.misc.inspect_fourier_img(img: NDArray, remove_zero: bool = False) None[source]

Display Fourier representation of the input image.

Parameters:
  • img (NDArray) – Input image.

  • remove_zero (bool, optional) – Remove the zero frequency value. The default is False.

corrct.processing.misc.lines_intersection(line_1: NDArray, line_2: float | NDArray, position: str = 'first', x_lims: tuple[float | None, float | None] | None = None) tuple[float, float] | None[source]

Compute the intersection point between two lines.

Parameters:
  • line_1 (NDArray) – The first line.

  • line_2 (float | NDArray) – The second line. It can be a scalar representing a horizontal line.

  • position (str, optional) – The position of the point to select. Either “first” or “last”. The default is “first”.

Raises:

ValueError – If position is neither “first” nor “last”.

Returns:

The requested crossing point, or None if no crossing point was found.

Return type:

Tuple[float, float] | None

corrct.processing.misc.norm_cross_corr(img1: NDArray, img2: NDArray | None = None, axes: Sequence[int] = (-2, -1), t_match: bool = False, mode_full: bool = True, compute_profile: bool = True, plot: bool = True) NDArray | tuple[NDArray, NDArray][source]

Compute the normalized cross-correlation between two images.

Parameters:
  • img1 (NDArray) – The first image.

  • img2 (NDArray, optional) – The second image. If None, the auto-correlation of img1 is computed. The default is None.

  • axes (Sequence[int], optional) – Axes along which to compute the cross-correlation. The default is (-2, -1).

  • t_match (bool, optional) – Whether to perform the cross-correlation for template matching. The default is False.

  • mode_full (bool, optional) – Whether to return the “full” or “same” convolution size. The default is True.

  • compute_profile (bool, optional) – Whether to compute the azimuthal integration of the cross-correlation or not. The default is True.

  • plot (bool, optional) – Whether to plot the profile of the cross-correlation curve. The default is True.

Returns:

The one-dimensional cross-correlation curve.

Return type:

NDArray
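
A minimal sketch, assuming a pair of 2D test images (the shift and sizes are illustrative):

import numpy as np
from corrct.processing import misc

img1 = np.random.rand(64, 64)
img2 = np.roll(img1, shift=3, axis=-1)  # a horizontally shifted copy (assumed)
result = misc.norm_cross_corr(img1, img2, axes=(-2, -1), plot=False)  # see Returns above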

corrct.processing.noise module

Noise handling routines.

@author: Nicola VIGANÒ, Computational Imaging group, CWI, The Netherlands, and ESRF - The European Synchrotron, Grenoble, France

corrct.processing.noise.compute_variance_poisson(Is: NDArray, I0: NDArray | None = None, var_I0: NDArray | None = None, normalized: bool = True) NDArray[source]

Compute the variance of a signal subject to Poisson noise, against a reference intensity.

The reference intensity can also be subject to Poisson and Gaussian noise. If the variance of the reference intensity is not passed, it will be assumed to be Poisson.

Parameters:
  • Is (NDArray) – The signal intensity.

  • I0 (Optional[NDArray], optional) – The reference intensity. The default is None.

  • var_I0 (Optional[NDArray], optional) – The variance of the reference intensity. The default is None. If not given, it will be assumed to be equal to I0.

  • normalized (bool, optional) – Whether to renormalize by the mean of the reference intensity. The default is True.

Returns:

The computed variance.

Return type:

NDArray
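
A sketch of estimating the variance of photon-counting data; the count levels and shapes are assumptions:

import numpy as np
from corrct.processing import noise

counts = np.random.poisson(1e3, size=(360, 256)).astype(np.float32)  # measured intensity (assumed)
flat = np.random.poisson(1e4, size=(360, 256)).astype(np.float32)    # reference intensity (assumed)
variance = noise.compute_variance_poisson(counts, I0=flat, normalized=True)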

corrct.processing.noise.compute_variance_transmission(Is: NDArray, I0: NDArray, var_I0: NDArray | None = None, normalized: bool = True) NDArray[source]

Compute the variance of a linearized attenuation (transmission) signal, against a reference intensity.

Parameters:
  • Is (NDArray) – The transmitted signal.

  • I0 (NDArray) – The reference intensity.

  • var_I0 (Optional[NDArray], optional) – The variance of the reference intensity. The default is None. If not given, it will be assumed to be equal to I0.

  • normalized (bool, optional) – Whether to renormalize by the mean of the reference intensity. The default is True.

Returns:

The computed variance.

Return type:

NDArray

corrct.processing.noise.compute_variance_weight(variance: NDArray, *, percentile: float = 0.001, mask: NDArray | None = None, normalized: bool = False, use_std: bool = False, semilog: bool = False) NDArray[source]

Compute the weight associated to the given variance, in a weighted least-squares context.

Parameters:
  • variance (NDArray) – The variance of the signal.

  • percentile (float) – Minimum percentile to discard. The default is 0.001 (0.1%).

  • mask (NDArray | None, optional) – Mask of valid values. The default is None.

  • normalized (bool, optional) – Scale the largest weight to 1. The default is False.

  • use_std (bool, optional) – Whether to use the standard deviation instead of the variance. The default is False.

  • semilog (bool, optional) – Scale the variance over a logarithmic curve. It can be beneficial with high dynamic range data. The default is False.

Returns:

The computed weights.

Return type:

NDArray
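
A short sketch of turning a variance map into weighted-least-squares weights; the variance map below is a placeholder assumption:

import numpy as np
from corrct.processing import noise

variance = np.random.rand(360, 256) + 0.1  # placeholder variance map (assumed)
weights = noise.compute_variance_weight(variance, percentile=0.001, normalized=True)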

corrct.processing.post module

Post-processing routines.

Created on Tue Mar 24 15:25:14 2020

@author: Nicola VIGANÒ, Computational Imaging group, CWI, The Netherlands, and ESRF - The European Synchrotron, Grenoble, France

corrct.processing.post.com(vol: NDArray, axes: ArrayLike | None = None) NDArray[source]

Compute center-of-mass for given volume.

Parameters:
  • vol (NDArray) – The input volume.

  • axes (ArrayLike, optional) – Axes on which to compute center-of-mass. The default is None.

Returns:

The center-of-mass position.

Return type:

NDArray
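
A minimal sketch; the volume content is an illustrative assumption:

import numpy as np
from corrct.processing import post

vol = np.zeros((32, 32, 32), dtype=np.float32)
vol[10:14, 15:20, 8:12] = 1.0  # a small bright blob (assumed)
center = post.com(vol)  # center-of-mass position, one coordinate per axis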

corrct.processing.post.compute_frc(img1: NDArray, img2: NDArray | None, snrt: float = 0.2071, axes: Sequence[int] | None = None, smooth: int | None = 5, taper_ratio: float | None = 0.05, supersampling: int = 1, theo_threshold: bool = True) tuple[NDArray, NDArray][source]

Compute the FRC/FSC (Fourier ring/shell correlation) between two images / volumes.

Please refer to the following article for more information:

M. van Heel and M. Schatz, “Fourier shell correlation threshold criteria,” J. Struct. Biol., vol. 151, no. 3, pp. 250–262, Sep. 2005.

Parameters:
  • img1 (NDArray) – First image / volume.

  • img2 (NDArray) – Second image / volume.

  • snrt (float, optional) – SNR to be used for generating the threshold curve for resolution definition. The SNR value of 0.4142 corresponds to the half-bit curve for a full dataset. When splitting a dataset into two sub-datasets, that value needs to be halved. The default is 0.2071, which corresponds to the half-bit threshold for a half dataset.

  • axes (Sequence[int], optional) – The axes over which we want to compute the FRC/FSC. If None, all axes will be used. The default is None.

  • smooth (Optional[int], optional) – Size of the Hann smoothing window. The default is 5.

  • taper_ratio (Optional[float], optional) – Ratio of the edge pixels to be tapered off. This is necessary when working with truncated volumes / local tomography, to avoid truncation artifacts. The default is 0.05.

  • supersampling (int, optional) – Supersampling factor of the images. Larger values increase the high-frequency range of the FRC/FSC function. The default is 1, which corresponds to the Nyquist frequency.

Raises:

ValueError – Error returned when not passing images of the same shape.

Returns:

  • NDArray – The computed FRC/FSC.

  • NDArray – The threshold curve corresponding to the given threshold SNR.

corrct.processing.post.estimate_resolution(frc: NDArray, t_hb: NDArray) tuple[float, float] | None[source]

Estimate the resolution or bandwidth, given an FRC and a threshold curve.

Parameters:
  • frc (NDArray) – The FRC curve

  • t_hb (NDArray) – The threshold curve

Returns:

The resolution or bandwidth, if a crossing point was found. Otherwise None.

Return type:

tuple[float, float] | None
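
A sketch combining compute_frc and estimate_resolution; the two "reconstructions" below are synthetic assumptions:

import numpy as np
from corrct.processing import post

rec1 = np.random.rand(256, 256)                 # first half-dataset reconstruction (assumed)
rec2 = rec1 + 0.1 * np.random.randn(256, 256)   # second, noisier realization (assumed)

frc, t_hb = post.compute_frc(rec1, rec2, snrt=0.2071)
res = post.estimate_resolution(frc, t_hb)
if res is not None:
    print("FRC / threshold crossing point:", res)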

corrct.processing.post.fit_scale_bias(img_data: NDArray, prj_data: NDArray, prj: BaseTransform | None = None) tuple[float, float][source]

Fit the scale and bias of an image, against its projection in a different space.

Parameters:
  • img_data (NDArray) – The image data

  • prj_data (NDArray) – The projected data

  • prj (BaseTransform | None, optional) – The projection operator. The default is None, which uses the identity (TransformIdentity)

Returns:

The scale and bias

Return type:

tuple[float, float]
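
A minimal sketch, assuming the identity projection (prj=None); the scale and bias values are illustrative:

import numpy as np
from corrct.processing import post

img = np.random.rand(64, 64)
measured = 2.5 * img + 0.3  # a scaled and biased observation of img (assumed)
scale, bias = post.fit_scale_bias(img, measured)  # fitted relation between img and measured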

corrct.processing.post.plot_frcs(volume_pairs: Sequence[tuple[NDArray, NDArray]], labels: Sequence[str], title: str | None = None, smooth: int | None = 5, snrt: float = 0.2071, axes: Sequence[int] | None = None, supersampling: int = 1, verbose: bool = False) tuple[Figure, Axes][source]

Compute and plot the FSCs / FRCs of some volumes.

Parameters:
  • volume_pairs (Sequence[Tuple[NDArray, NDArray]]) – A list of pairs of volumes to compute the FRCs on.

  • labels (Sequence[str]) – The labels associated with each pair.

  • title (Optional[str], optional) – The axes title, by default None.

  • smooth (Optional[int], optional) – The size of the smoothing window for the computed curves, by default 5.

  • snrt (float, optional) – The SNR of the T curve, by default 0.2071 - as per half-dataset SNR.

  • axes (Sequence[int] | None, optional) – The axes along which we want to compute the FRC. The unused axes will be averaged. The default is None.

  • verbose (bool, optional) – Whether to display verbose output, by default False.

corrct.processing.post.power_spectrum(img: NDArray, axes: Sequence[int] | None = None, smooth: int | None = 5, taper_ratio: float | None = 0.05, power: int = 2) NDArray[source]

Compute the power spectrum of an n-dimensional signal.

Parameters:
  • img (NDArray) – The n-dimensional signal.

  • axes (Optional[Sequence[int]], optional) – The axes over which we want to compute the power spectrum, by default None

  • smooth (Optional[int], optional) – The smoothing kernel size, by default 5

  • taper_ratio (Optional[float], optional) – The ratio of the edge pixels to taper off (useful for truncated signals), by default 0.05

  • power (int, optional) – The exponent to use, by default 2

Returns:

The power spectrum

Return type:

NDArray
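
A short usage sketch; the image content and size are assumptions:

import numpy as np
from corrct.processing import post

img = np.random.rand(128, 128)
ps = post.power_spectrum(img, smooth=5, taper_ratio=0.05)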

corrct.processing.pre module

Pre-processing routines.

@author: Nicola VIGANÒ, Computational Imaging group, CWI, The Netherlands, and ESRF - The European Synchrotron, Grenoble, France

corrct.processing.pre.apply_flat_field(projs_wvu: NDArray, flats_wvu: NDArray, darks_wvu: NDArray | None = None, crop: Sequence[int] | None = None, dtype: DTypeLike = np.float32) NDArray[source]

Apply flat field.

Parameters:
  • projs_wvu (NDArray) – Projections.

  • flats_wvu (NDArray) – Flat fields.

  • darks_wvu (Optional[NDArray], optional) – Dark noise. The default is None.

  • crop (Optional[Sequence[int]], optional) – Crop region. The default is None.

  • dtype (DTypeLike, optional) – Data type of the processed data. The default is np.float32.

Returns:

Flat-field corrected and linearized projections.

Return type:

NDArray

corrct.processing.pre.apply_minus_log(projs: NDArray, lower_limit: float = -inf) NDArray[source]

Apply -log.

Parameters:

projs (NDArray) – Projections.

Returns:

Linearized projections.

Return type:

NDArray
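
A sketch of the usual flat-field + minus-log pre-processing chain, combining apply_flat_field and apply_minus_log; all shapes and count levels are illustrative assumptions:

import numpy as np
from corrct.processing import pre

projs = np.random.poisson(5e3, size=(180, 64, 128)).astype(np.float32)  # raw projections (assumed)
flats = np.random.poisson(1e4, size=(10, 64, 128)).astype(np.float32)   # flat fields (assumed)
darks = np.random.poisson(1e1, size=(10, 64, 128)).astype(np.float32)   # dark images (assumed)

trans = pre.apply_flat_field(projs, flats, darks_wvu=darks)  # flat-field corrected transmission
sino = pre.apply_minus_log(trans)                            # linearized attenuation projections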

corrct.processing.pre.background_from_margin(data_vwu: NDArray, margin: int | Sequence[int] | NDArray[np.integer] = 4, poly_order: int = 0, plot: bool = False) NDArray[source]

Compute background of the projection data, from the margins of the projections.

Parameters:
  • data_vwu (NDArray) – The projection data in the format [V]WU.

  • margin (int | Sequence[int] | NDArray[np.integer], optional) – The size of the margin, by default 4

  • poly_order (int, optional) – The order of the interpolation polynomial, by default 0

Returns:

The computed background.

Return type:

NDArray

Raises:
  • NotImplementedError – Different margins per line are not supported, at the moment.

  • ValueError – In case the margins are larger than the image size along U.

corrct.processing.pre.bin_imgs(imgs: NDArray, binning: int | float, auto_crop: bool = False, verbose: bool = True) NDArray[source]

Bin a stack of images.

Parameters:
  • imgs (NDArray) – The stack of images.

  • binning (int | float) – The binning factor.

  • auto_crop (bool, optional) – Whether to automatically crop the images to a size compatible with the binning factor, by default False

  • verbose (bool, optional) – Whether to print the image shapes, by default True

Returns:

The binned images

Return type:

NDArray

corrct.processing.pre.compute_eigen_flats(trans_wvu: NDArray, flats_wvu: NDArray | None = None, darks_wvu: NDArray | None = None, ndim: int = 2, plot: bool = False) tuple[NDArray, NDArray][source]

Compute the eigen flats of a stack of transmission images.

Parameters:
  • trans_wvu (NDArray) – The stack of transmission images.

  • flats_wvu (Optional[NDArray], optional) – The flats without the sample. The default is None.

  • darks_wvu (Optional[NDArray], optional) – The darks. The default is None.

  • ndim (int, optional) – The number of dimensions of the images, by default 2

  • plot (bool, optional) – Whether to plot the results, by default False

Returns:

The decomposition of the transmissions of the sample and the flats.

Return type:

Tuple[NDArray, NDArray]

corrct.processing.pre.destripe_wlf_vwu(data: NDArray, sigma: float = 0.005, level: int = 1, wavelet: str = 'bior2.2', angle_axis: int = -2, other_axes: Sequence[int] | NDArray | None = None) NDArray[source]

Remove stripes from sinogram, using the Wavelet-Fourier method.

Parameters:
  • data (NDArray) – The data to de-stripe

  • sigma (float, optional) – Fourier space filter coefficient, by default 0.005

  • level (int, optional) – The wavelet level to use, by default 1

  • wavelet (str, optional) – The type of wavelet to use, by default “bior2.2”

  • angle_axis (int, optional) – The axis of the Fourier transform, by default -2

  • other_axes (Union[Sequence[int], NDArray, None], optional) – The axes of the wavelet decomposition, by default None

Returns:

The de-striped data.

Return type:

NDArray

corrct.processing.pre.pad_sinogram(sinogram: NDArray, width: int | Sequence[int] | NDArray, pad_axis: int = -1, mode: str = 'edge', **kwds) NDArray[source]

Pad the sinogram.

Parameters:
  • sinogram (NDArray) – The sinogram to pad.

  • width (Union[int, Sequence[int]]) – The width of the padding. Normally, it should either be an int or a tuple(int, int).

  • pad_axis (int, optional) – The axis to pad. The default is -1.

  • mode (str, optional) – The padding type (from numpy.pad). The default is “edge”.

  • **kwds – The numpy.pad arguments.

Returns:

The padded sinogram.

Return type:

NDArray
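
A minimal sketch; the sinogram shape and padding width are illustrative assumptions:

import numpy as np
from corrct.processing import pre

sino = np.random.rand(180, 256)  # [angles, detector pixels] (assumed)
sino_padded = pre.pad_sinogram(sino, width=(32, 32), pad_axis=-1, mode="edge")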

corrct.processing.pre.rotate_proj_stack(data_vwu: NDArray, rot_angle_deg: float) NDArray[source]

Rotate the projection stack.

Parameters:
  • data_vwu (NDArray) – The projection stack, with dimensions [v, w, u] (vertical, omega / sample rotation, horizontal).

  • rot_angle_deg (float) – The rotation angle in degrees.

Returns:

The rotated projection stack.

Return type:

NDArray

corrct.processing.pre.shift_proj_stack(data_vwu: NDArray, shifts: NDArray, use_fft: bool = False) NDArray[source]

Shift each projection in a stack of projections, by projection dependent shifts.

Parameters:
  • data_vwu (NDArray) – The projection stack

  • shifts (NDArray) – The shifts

  • use_fft (bool, optional) – Whether to use fft shift or not, by default False

Returns:

The shifted stack

Return type:

NDArray

Module contents

Package for data pre-processing and post-processing.