
cellects.core.motion_analysis


Module for analyzing motion, growth patterns, and structural properties of biological specimens in video data.

This module provides comprehensive tools to analyze videos of biological samples (e.g., cell colonies) by:

1. Loading and converting RGB videos to grayscale using configurable color space combinations
2. Performing multi-strategy segmentation (frame-by-frame, intensity thresholding, derivative-based detection)
3. Applying post-processing steps, including error correction algorithms for shape continuity
4. Computing morphological descriptors over time (area, perimeter, fractal dimension, etc.)
5. Detecting network structures and oscillatory behavior in dynamic biological systems

Classes:

Name Description
MotionAnalysis : Processes video data to analyze specimen motion, growth patterns, and structural properties.

Provides methods for loading videos, performing segmentation using multiple algorithms, post-processing results with error correction, extracting morphological descriptors, detecting network structures, analyzing oscillations, and saving processed outputs.

Functions:

Name Description
load_images_and_videos : Loads and converts video files to appropriate format for analysis.
get_converted_video : Converts RGB video to grayscale based on specified color space parameters.
detection : Performs multi-strategy segmentation of the specimen across all frames.
update_shape : Updates segmented shape with post-processing steps like noise filtering and hole filling.
save_results : Saves processed data, efficiency tests, and annotated videos.
Notes

The features of this module include:

- Processing of large video datasets with memory optimization strategies, including typed arrays (NumPy) and progressive processing techniques.
- Support for both single-specimen and multi-specimen analysis through configurable parameters.
- Segmentation strategies including intensity-based thresholding, gradient detection, and combinations thereof.
- Post-processing with morphological operations to refine segmented regions, plus error correction for specific use cases (e.g., Physarum polycephalum).
- Biological network detection and graph extraction to represent network structures as vertex-edge tables.
- Detection of biological oscillatory patterns.
- Fractal dimension calculation.
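Among the descriptors listed above, fractal dimension is the least standard. The following is a minimal box-counting sketch under the usual definition — a generic illustration, not the module's actual implementation:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary mask."""
    counts = []
    h, w = mask.shape
    for s in sizes:
        # Trim so the mask tiles evenly into s-by-s boxes.
        trimmed = mask[:h - h % s, :w - w % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        # Count boxes containing at least one foreground pixel.
        counts.append(int((boxes.max(axis=(1, 3)) > 0).sum()))
    # The dimension is the slope of log(count) against log(1/size).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# A solid square is 2-dimensional:
print(round(box_counting_dimension(np.ones((64, 64), dtype=np.uint8)), 2))  # 2.0
```

For a specimen outline or network mask, values between 1 and 2 indicate how densely the shape fills the plane across scales.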

CompareNeighborsWithValue

CompareNeighborsWithValue class to summarize each pixel by comparing its neighbors to a value.

This class analyzes pixels in a 2D array, comparing each pixel's neighbors to a specified value. The comparison can be equality, superiority, or inferiority, and neighbors can be the 4 or 8 nearest pixels based on the connectivity parameter.
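The edge-replicated shifting this class performs can be sketched with plain NumPy — a standalone illustration of the same trick, not part of cellects:

```python
import numpy as np

# Each "on_the_*" attribute is the input shifted by one pixel, with the
# border row/column duplicated so all windows keep the original shape.
a = np.array([[9, 0, 4, 6],
              [4, 9, 1, 3],
              [7, 2, 1, 4],
              [9, 0, 8, 5]], dtype=np.int8)

# Right neighbor: drop the first column, duplicate the last one.
on_the_right = np.column_stack((a[:, 1:], a[:, -1]))
# Top neighbor: duplicate the first row, drop the last one.
on_the_top = np.vstack((a[0, :], a[:-1, :]))

# on_the_right[i, j] now holds a[i, j + 1] (edge-replicated at the border).
print(on_the_right[0])  # [0 4 6 6]
```

Comparing these windows element-wise against a value and summing the boolean results yields the neighbor counts computed by the methods below.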

Source code in src/cellects/image_analysis/morphological_operations.py
class CompareNeighborsWithValue:
    """
    CompareNeighborsWithValue class to summarize each pixel by comparing its neighbors to a value.

    This class analyzes pixels in a 2D array, comparing each pixel's neighbors
    to a specified value. The comparison can be equality, superiority,
    or inferiority, and neighbors can be the 4 or 8 nearest pixels based on
    the connectivity parameter.
    """
    def __init__(self, array: np.ndarray, connectivity: int=None, data_type: np.dtype=np.int8):
        """
        Initialize a class for array connectivity processing.

        This class processes arrays based on given connectivities, creating
        windows around the original data for both 1D and 2D arrays. Depending on
        the connectivity value (4 or 8), it creates different windows with borders.

        Parameters
        ----------
        array : ndarray
            Input array to process, can be 1D or 2D.
        connectivity : int, optional
            Connectivity type for processing (4 or 8), by default None.
        data_type : dtype, optional
            Data type for the array elements, by default np.int8.

        Attributes
        ----------
        array : ndarray
            The processed array based on the given data type.
        connectivity : int
            Connectivity value used for processing.
        on_the_right : ndarray
            Array with shifted elements to the right.
        on_the_left : ndarray
            Array with shifted elements to the left.
        on_the_bot : ndarray, optional
            Array with shifted elements to the bottom (for 2D arrays).
        on_the_top : ndarray, optional
            Array with shifted elements to the top (for 2D arrays).
        on_the_topleft : ndarray, optional
            Array with shifted elements to the top left (for 2D arrays).
        on_the_topright : ndarray, optional
            Array with shifted elements to the top right (for 2D arrays).
        on_the_botleft : ndarray, optional
            Array with shifted elements to the bottom left (for 2D arrays).
        on_the_botright : ndarray, optional
            Array with shifted elements to the bottom right (for 2D arrays).
        """
        array = array.astype(data_type)
        self.array = array
        self.connectivity = connectivity
        if len(self.array.shape) == 1:
            self.on_the_right = np.append(array[1:], array[-1])
            self.on_the_left = np.append(array[0], array[:-1])
        else:
            # Build 4 windows of the original array, each missing one of the four borders
            # Grow each window with a copy of the border on the side opposite to the one removed
            if self.connectivity == 4 or self.connectivity == 8:
                self.on_the_right = np.column_stack((array[:, 1:], array[:, -1]))
                self.on_the_left = np.column_stack((array[:, 0], array[:, :-1]))
                self.on_the_bot = np.vstack((array[1:, :], array[-1, :]))
                self.on_the_top = np.vstack((array[0, :], array[:-1, :]))
            if self.connectivity != 4:
                self.on_the_topleft = array[:-1, :-1]
                self.on_the_topright = array[:-1, 1:]
                self.on_the_botleft = array[1:, :-1]
                self.on_the_botright = array[1:, 1:]

                self.on_the_topleft = np.vstack((self.on_the_topleft[0, :], self.on_the_topleft))
                self.on_the_topleft = np.column_stack((self.on_the_topleft[:, 0], self.on_the_topleft))

                self.on_the_topright = np.vstack((self.on_the_topright[0, :], self.on_the_topright))
                self.on_the_topright = np.column_stack((self.on_the_topright, self.on_the_topright[:, -1]))

                self.on_the_botleft = np.vstack((self.on_the_botleft, self.on_the_botleft[-1, :]))
                self.on_the_botleft = np.column_stack((self.on_the_botleft[:, 0], self.on_the_botleft))

                self.on_the_botright = np.vstack((self.on_the_botright, self.on_the_botright[-1, :]))
                self.on_the_botright = np.column_stack((self.on_the_botright, self.on_the_botright[:, -1]))

    def is_equal(self, value, and_itself: bool=False):
        """
        Check equality of neighboring values in an array.

        This method compares the neighbors of each element in `self.array` to a given value.
        Depending on the dimensionality and connectivity settings, it checks different neighboring
        elements.

        Parameters
        ----------
        value : int or float
            The value to check equality with neighboring elements.
        and_itself : bool, optional
            If True, set the count to zero for elements not themselves equal to `value`. Defaults to False.

        Returns
        -------
        None

        Attributes
        ----------
        equal_neighbor_nb : ndarray of uint8
            Array that holds the number of equal neighbors for each element.

        Examples
        --------
        >>> matrix = np.array([[9, 0, 4, 6], [4, 9, 1, 3], [7, 2, 1, 4], [9, 0, 8, 5]], dtype=np.int8)
        >>> compare = CompareNeighborsWithValue(matrix, connectivity=4)
        >>> compare.is_equal(1)
        >>> print(compare.equal_neighbor_nb)
        [[0 0 1 0]
         [0 1 1 1]
         [0 1 1 1]
         [0 0 1 0]]
        """

        if len(self.array.shape) == 1:
            self.equal_neighbor_nb = np.sum((np.equal(self.on_the_right, value), np.equal(self.on_the_left, value)), axis=0)
        else:
            if self.connectivity == 4:
                self.equal_neighbor_nb =  np.dstack((np.equal(self.on_the_right, value), np.equal(self.on_the_left, value),
                                                 np.equal(self.on_the_bot, value), np.equal(self.on_the_top, value)))
            elif self.connectivity == 8:
                self.equal_neighbor_nb =  np.dstack(
                    (np.equal(self.on_the_right, value), np.equal(self.on_the_left, value),
                     np.equal(self.on_the_bot, value), np.equal(self.on_the_top, value),
                     np.equal(self.on_the_topleft, value), np.equal(self.on_the_topright, value),
                     np.equal(self.on_the_botleft, value), np.equal(self.on_the_botright, value)))
            else:
                self.equal_neighbor_nb =  np.dstack(
                    (np.equal(self.on_the_topleft, value), np.equal(self.on_the_topright, value),
                     np.equal(self.on_the_botleft, value), np.equal(self.on_the_botright, value)))
            self.equal_neighbor_nb = np.sum(self.equal_neighbor_nb, 2, dtype=np.uint8)

        if and_itself:
            self.equal_neighbor_nb[np.not_equal(self.array, value)] = 0

    def is_sup(self, value, and_itself=False):
        """
        Determine if pixels have more neighbors with higher values than a given threshold.

        This method computes the number of neighboring pixels that have values greater
        than a specified `value` for each pixel in the array. Optionally, it can exclude
        the pixel itself if its value is less than or equal to `value`.

        Parameters
        ----------
        value : int
            The threshold value used to determine if a neighboring pixel's value is greater.
        and_itself : bool, optional
            If True, exclude the pixel itself if its value is less than or equal to `value`.
            Defaults to False.

        Examples
        --------
        >>> matrix = np.array([[9, 0, 4, 6], [4, 9, 1, 3], [7, 2, 1, 4], [9, 0, 8, 5]], dtype=np.int8)
        >>> compare = CompareNeighborsWithValue(matrix, connectivity=4)
        >>> compare.is_sup(1)
        >>> print(compare.sup_neighbor_nb)
        [[3 3 2 4]
         [4 2 3 3]
         [4 2 3 3]
         [3 3 2 4]]
        """
        if len(self.array.shape) == 1:
            self.sup_neighbor_nb = (self.on_the_right > value).astype(self.array.dtype) + (self.on_the_left > value).astype(self.array.dtype)
        else:
            if self.connectivity == 4:
                self.sup_neighbor_nb =  np.dstack((self.on_the_right > value, self.on_the_left > value,
                                               self.on_the_bot > value, self.on_the_top > value))
            elif self.connectivity == 8:
                self.sup_neighbor_nb =  np.dstack((self.on_the_right > value, self.on_the_left > value,
                                               self.on_the_bot > value, self.on_the_top > value,
                                               self.on_the_topleft > value, self.on_the_topright > value,
                                               self.on_the_botleft > value, self.on_the_botright > value))
            else:
                self.sup_neighbor_nb =  np.dstack((self.on_the_topleft > value, self.on_the_topright > value,
                                               self.on_the_botleft > value, self.on_the_botright > value))

            self.sup_neighbor_nb = np.sum(self.sup_neighbor_nb, 2, dtype=np.uint8)
        if and_itself:
            self.sup_neighbor_nb[np.less_equal(self.array, value)] = 0

    def is_inf(self, value, and_itself=False):
        """
        Count the number of neighbors strictly inferior to a given value,
        considering connectivity and optionally excluding the element itself.

        Parameters
        ----------
        value : numeric
            The value to compare neighbor elements against.
        and_itself : bool, optional
            If True, set the count to zero for elements not themselves inferior to `value`. Default is False.

        Examples
        --------
        >>> matrix = np.array([[9, 0, 4, 6], [4, 9, 1, 3], [7, 2, 1, 4], [9, 0, 8, 5]], dtype=np.int8)
        >>> compare = CompareNeighborsWithValue(matrix, connectivity=4)
        >>> compare.is_inf(1)
        >>> print(compare.inf_neighbor_nb)
        [[1 1 1 0]
         [0 1 0 0]
         [0 1 0 0]
         [1 1 1 0]]
        """
        if len(self.array.shape) == 1:
            self.inf_neighbor_nb = (self.on_the_right < value).astype(self.array.dtype) + (self.on_the_left < value).astype(self.array.dtype)
        else:
            if self.connectivity == 4:
                self.inf_neighbor_nb =  np.dstack((self.on_the_right < value, self.on_the_left < value,
                                               self.on_the_bot < value, self.on_the_top < value))
            elif self.connectivity == 8:
                self.inf_neighbor_nb =  np.dstack((self.on_the_right < value, self.on_the_left < value,
                                               self.on_the_bot < value, self.on_the_top < value,
                                               self.on_the_topleft < value, self.on_the_topright < value,
                                               self.on_the_botleft < value, self.on_the_botright < value))
            else:
                self.inf_neighbor_nb =  np.dstack((self.on_the_topleft < value, self.on_the_topright < value,
                                               self.on_the_botleft < value, self.on_the_botright < value))

            self.inf_neighbor_nb = np.sum(self.inf_neighbor_nb, 2, dtype=np.uint8)
        if and_itself:
            self.inf_neighbor_nb[np.greater_equal(self.array, value)] = 0
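For 4-connectivity, the neighbor counting done by `is_equal` is equivalent to convolving the binary mask `array == value` with a cross-shaped kernel, where `mode='nearest'` replicates border pixels exactly like the edge duplication above. A sketch of this equivalence using SciPy — an outside formulation, not the class's own code:

```python
import numpy as np
from scipy.ndimage import convolve

matrix = np.array([[9, 0, 4, 6], [4, 9, 1, 3], [7, 2, 1, 4], [9, 0, 8, 5]], dtype=np.int8)

# Cross-shaped kernel: the 4 direct neighbors, excluding the center pixel.
kernel = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=np.uint8)
# mode='nearest' replicates the border pixels, matching the windows above.
equal_neighbor_nb = convolve((matrix == 1).astype(np.uint8), kernel, mode='nearest')
print(equal_neighbor_nb)
```

This reproduces the `is_equal(1)` example output shown in the docstring above.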

__init__(array, connectivity=None, data_type=np.int8)

Initialize a class for array connectivity processing.

This class processes arrays based on given connectivities, creating windows around the original data for both 1D and 2D arrays. Depending on the connectivity value (4 or 8), it creates different windows with borders.

Parameters:

Name Type Description Default
array ndarray

Input array to process, can be 1D or 2D.

required
connectivity int

Connectivity type for processing (4 or 8), by default None.

None
data_type dtype

Data type for the array elements, by default np.int8.

int8

Attributes:

Name Type Description
array ndarray

The processed array based on the given data type.

connectivity int

Connectivity value used for processing.

on_the_right ndarray

Array with shifted elements to the right.

on_the_left ndarray

Array with shifted elements to the left.

on_the_bot (ndarray, optional)

Array with shifted elements to the bottom (for 2D arrays).

on_the_top (ndarray, optional)

Array with shifted elements to the top (for 2D arrays).

on_the_topleft (ndarray, optional)

Array with shifted elements to the top left (for 2D arrays).

on_the_topright (ndarray, optional)

Array with shifted elements to the top right (for 2D arrays).

on_the_botleft (ndarray, optional)

Array with shifted elements to the bottom left (for 2D arrays).

on_the_botright (ndarray, optional)

Array with shifted elements to the bottom right (for 2D arrays).

Source code in src/cellects/image_analysis/morphological_operations.py
def __init__(self, array: np.ndarray, connectivity: int=None, data_type: np.dtype=np.int8):
    """
    Initialize a class for array connectivity processing.

    This class processes arrays based on given connectivities, creating
    windows around the original data for both 1D and 2D arrays. Depending on
    the connectivity value (4 or 8), it creates different windows with borders.

    Parameters
    ----------
    array : ndarray
        Input array to process, can be 1D or 2D.
    connectivity : int, optional
        Connectivity type for processing (4 or 8), by default None.
    data_type : dtype, optional
        Data type for the array elements, by default np.int8.

    Attributes
    ----------
    array : ndarray
        The processed array based on the given data type.
    connectivity : int
        Connectivity value used for processing.
    on_the_right : ndarray
        Array with shifted elements to the right.
    on_the_left : ndarray
        Array with shifted elements to the left.
    on_the_bot : ndarray, optional
        Array with shifted elements to the bottom (for 2D arrays).
    on_the_top : ndarray, optional
        Array with shifted elements to the top (for 2D arrays).
    on_the_topleft : ndarray, optional
        Array with shifted elements to the top left (for 2D arrays).
    on_the_topright : ndarray, optional
        Array with shifted elements to the top right (for 2D arrays).
    on_the_botleft : ndarray, optional
        Array with shifted elements to the bottom left (for 2D arrays).
    on_the_botright : ndarray, optional
        Array with shifted elements to the bottom right (for 2D arrays).
    """
    array = array.astype(data_type)
    self.array = array
    self.connectivity = connectivity
    if len(self.array.shape) == 1:
        self.on_the_right = np.append(array[1:], array[-1])
        self.on_the_left = np.append(array[0], array[:-1])
    else:
        # Build 4 windows of the original array, each missing one of the four borders
        # Grow each window with a copy of the border on the side opposite to the one removed
        if self.connectivity == 4 or self.connectivity == 8:
            self.on_the_right = np.column_stack((array[:, 1:], array[:, -1]))
            self.on_the_left = np.column_stack((array[:, 0], array[:, :-1]))
            self.on_the_bot = np.vstack((array[1:, :], array[-1, :]))
            self.on_the_top = np.vstack((array[0, :], array[:-1, :]))
        if self.connectivity != 4:
            self.on_the_topleft = array[:-1, :-1]
            self.on_the_topright = array[:-1, 1:]
            self.on_the_botleft = array[1:, :-1]
            self.on_the_botright = array[1:, 1:]

            self.on_the_topleft = np.vstack((self.on_the_topleft[0, :], self.on_the_topleft))
            self.on_the_topleft = np.column_stack((self.on_the_topleft[:, 0], self.on_the_topleft))

            self.on_the_topright = np.vstack((self.on_the_topright[0, :], self.on_the_topright))
            self.on_the_topright = np.column_stack((self.on_the_topright, self.on_the_topright[:, -1]))

            self.on_the_botleft = np.vstack((self.on_the_botleft, self.on_the_botleft[-1, :]))
            self.on_the_botleft = np.column_stack((self.on_the_botleft[:, 0], self.on_the_botleft))

            self.on_the_botright = np.vstack((self.on_the_botright, self.on_the_botright[-1, :]))
            self.on_the_botright = np.column_stack((self.on_the_botright, self.on_the_botright[:, -1]))
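For 1D input the constructor only builds two windows. A standalone sketch of that branch, mirroring the code above:

```python
import numpy as np

a = np.array([3, 1, 1, 5, 1], dtype=np.int8)
on_the_right = np.append(a[1:], a[-1])   # value of the right neighbor (edge replicated)
on_the_left = np.append(a[0], a[:-1])    # value of the left neighbor (edge replicated)

# Number of neighbors equal to 1, as is_equal(1) would compute in 1D:
equal_neighbor_nb = np.sum((on_the_right == 1, on_the_left == 1), axis=0)
print(equal_neighbor_nb)  # [1 1 1 2 1]
```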

is_equal(value, and_itself=False)

Check equality of neighboring values in an array.

This method compares the neighbors of each element in self.array to a given value. Depending on the dimensionality and connectivity settings, it checks different neighboring elements.

Parameters:

Name Type Description Default
value int or float

The value to check equality with neighboring elements.

required
and_itself bool

If True, set the count to zero for elements not themselves equal to value. Defaults to False.

False

Returns:

Type Description
None
Attributes:

equal_neighbor_nb : ndarray of uint8 Array that holds the number of equal neighbors for each element.

Examples:

>>> matrix = np.array([[9, 0, 4, 6], [4, 9, 1, 3], [7, 2, 1, 4], [9, 0, 8, 5]], dtype=np.int8)
>>> compare = CompareNeighborsWithValue(matrix, connectivity=4)
>>> compare.is_equal(1)
>>> print(compare.equal_neighbor_nb)
[[0 0 1 0]
 [0 1 1 1]
 [0 1 1 1]
 [0 0 1 0]]
Source code in src/cellects/image_analysis/morphological_operations.py
def is_equal(self, value, and_itself: bool=False):
    """
    Check equality of neighboring values in an array.

    This method compares the neighbors of each element in `self.array` to a given value.
    Depending on the dimensionality and connectivity settings, it checks different neighboring
    elements.

    Parameters
    ----------
    value : int or float
        The value to check equality with neighboring elements.
    and_itself : bool, optional
        If True, set the count to zero for elements not themselves equal to `value`. Defaults to False.

    Returns
    -------
    None

    Attributes
    ----------
    equal_neighbor_nb : ndarray of uint8
        Array that holds the number of equal neighbors for each element.

    Examples
    --------
    >>> matrix = np.array([[9, 0, 4, 6], [4, 9, 1, 3], [7, 2, 1, 4], [9, 0, 8, 5]], dtype=np.int8)
    >>> compare = CompareNeighborsWithValue(matrix, connectivity=4)
    >>> compare.is_equal(1)
    >>> print(compare.equal_neighbor_nb)
    [[0 0 1 0]
     [0 1 1 1]
     [0 1 1 1]
     [0 0 1 0]]
    """

    if len(self.array.shape) == 1:
        self.equal_neighbor_nb = np.sum((np.equal(self.on_the_right, value), np.equal(self.on_the_left, value)), axis=0)
    else:
        if self.connectivity == 4:
            self.equal_neighbor_nb =  np.dstack((np.equal(self.on_the_right, value), np.equal(self.on_the_left, value),
                                             np.equal(self.on_the_bot, value), np.equal(self.on_the_top, value)))
        elif self.connectivity == 8:
            self.equal_neighbor_nb =  np.dstack(
                (np.equal(self.on_the_right, value), np.equal(self.on_the_left, value),
                 np.equal(self.on_the_bot, value), np.equal(self.on_the_top, value),
                 np.equal(self.on_the_topleft, value), np.equal(self.on_the_topright, value),
                 np.equal(self.on_the_botleft, value), np.equal(self.on_the_botright, value)))
        else:
            self.equal_neighbor_nb =  np.dstack(
                (np.equal(self.on_the_topleft, value), np.equal(self.on_the_topright, value),
                 np.equal(self.on_the_botleft, value), np.equal(self.on_the_botright, value)))
        self.equal_neighbor_nb = np.sum(self.equal_neighbor_nb, 2, dtype=np.uint8)

    if and_itself:
        self.equal_neighbor_nb[np.not_equal(self.array, value)] = 0

is_inf(value, and_itself=False)

Count the number of neighbors strictly inferior to a given value, considering connectivity and optionally excluding the element itself.

Parameters:

Name Type Description Default
value numeric

The value to compare neighbor elements against.

required
and_itself bool

If True, set the count to zero for elements not themselves inferior to value. Default is False.

False

Examples:

>>> matrix = np.array([[9, 0, 4, 6], [4, 9, 1, 3], [7, 2, 1, 4], [9, 0, 8, 5]], dtype=np.int8)
>>> compare = CompareNeighborsWithValue(matrix, connectivity=4)
>>> compare.is_inf(1)
>>> print(compare.inf_neighbor_nb)
[[1 1 1 0]
 [0 1 0 0]
 [0 1 0 0]
 [1 1 1 0]]
Source code in src/cellects/image_analysis/morphological_operations.py
def is_inf(self, value, and_itself=False):
    """
    Count the number of neighbors strictly inferior to a given value,
    considering connectivity and optionally excluding the element itself.

    Parameters
    ----------
    value : numeric
        The value to compare neighbor elements against.
    and_itself : bool, optional
        If True, set the count to zero for elements not themselves inferior to `value`. Default is False.

    Examples
    --------
    >>> matrix = np.array([[9, 0, 4, 6], [4, 9, 1, 3], [7, 2, 1, 4], [9, 0, 8, 5]], dtype=np.int8)
    >>> compare = CompareNeighborsWithValue(matrix, connectivity=4)
    >>> compare.is_inf(1)
    >>> print(compare.inf_neighbor_nb)
    [[1 1 1 0]
     [0 1 0 0]
     [0 1 0 0]
     [1 1 1 0]]
    """
    if len(self.array.shape) == 1:
        self.inf_neighbor_nb = (self.on_the_right < value).astype(self.array.dtype) + (self.on_the_left < value).astype(self.array.dtype)
    else:
        if self.connectivity == 4:
            self.inf_neighbor_nb =  np.dstack((self.on_the_right < value, self.on_the_left < value,
                                           self.on_the_bot < value, self.on_the_top < value))
        elif self.connectivity == 8:
            self.inf_neighbor_nb =  np.dstack((self.on_the_right < value, self.on_the_left < value,
                                           self.on_the_bot < value, self.on_the_top < value,
                                           self.on_the_topleft < value, self.on_the_topright < value,
                                           self.on_the_botleft < value, self.on_the_botright < value))
        else:
            self.inf_neighbor_nb =  np.dstack((self.on_the_topleft < value, self.on_the_topright < value,
                                           self.on_the_botleft < value, self.on_the_botright < value))

        self.inf_neighbor_nb = np.sum(self.inf_neighbor_nb, 2, dtype=np.uint8)
    if and_itself:
        self.inf_neighbor_nb[np.greater_equal(self.array, value)] = 0

is_sup(value, and_itself=False)

Determine if pixels have more neighbors with higher values than a given threshold.

This method computes the number of neighboring pixels that have values greater than a specified value for each pixel in the array. Optionally, it can exclude the pixel itself if its value is less than or equal to value.

Parameters:

Name Type Description Default
value int

The threshold value used to determine if a neighboring pixel's value is greater.

required
and_itself bool

If True, exclude the pixel itself if its value is less than or equal to value. Defaults to False.

False

Examples:

>>> matrix = np.array([[9, 0, 4, 6], [4, 9, 1, 3], [7, 2, 1, 4], [9, 0, 8, 5]], dtype=np.int8)
>>> compare = CompareNeighborsWithValue(matrix, connectivity=4)
>>> compare.is_sup(1)
>>> print(compare.sup_neighbor_nb)
[[3 3 2 4]
 [4 2 3 3]
 [4 2 3 3]
 [3 3 2 4]]
Source code in src/cellects/image_analysis/morphological_operations.py
def is_sup(self, value, and_itself=False):
    """
    Determine if pixels have more neighbors with higher values than a given threshold.

    This method computes the number of neighboring pixels that have values greater
    than a specified `value` for each pixel in the array. Optionally, it can exclude
    the pixel itself if its value is less than or equal to `value`.

    Parameters
    ----------
    value : int
        The threshold value used to determine if a neighboring pixel's value is greater.
    and_itself : bool, optional
        If True, exclude the pixel itself if its value is less than or equal to `value`.
        Defaults to False.

    Examples
    --------
    >>> matrix = np.array([[9, 0, 4, 6], [4, 9, 1, 3], [7, 2, 1, 4], [9, 0, 8, 5]], dtype=np.int8)
    >>> compare = CompareNeighborsWithValue(matrix, connectivity=4)
    >>> compare.is_sup(1)
    >>> print(compare.sup_neighbor_nb)
    [[3 3 2 4]
     [4 2 3 3]
     [4 2 3 3]
     [3 3 2 4]]
    """
    if len(self.array.shape) == 1:
        self.sup_neighbor_nb = (self.on_the_right > value).astype(self.array.dtype) + (self.on_the_left > value).astype(self.array.dtype)
    else:
        if self.connectivity == 4:
            self.sup_neighbor_nb =  np.dstack((self.on_the_right > value, self.on_the_left > value,
                                           self.on_the_bot > value, self.on_the_top > value))
        elif self.connectivity == 8:
            self.sup_neighbor_nb =  np.dstack((self.on_the_right > value, self.on_the_left > value,
                                           self.on_the_bot > value, self.on_the_top > value,
                                           self.on_the_topleft > value, self.on_the_topright > value,
                                           self.on_the_botleft > value, self.on_the_botright > value))
        else:
            self.sup_neighbor_nb =  np.dstack((self.on_the_topleft > value, self.on_the_topright > value,
                                           self.on_the_botleft > value, self.on_the_botright > value))

        self.sup_neighbor_nb = np.sum(self.sup_neighbor_nb, 2, dtype=np.uint8)
    if and_itself:
        self.sup_neighbor_nb[np.less_equal(self.array, value)] = 0
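The `and_itself=True` option can be read as a masking step after the neighbor count: pixels that do not themselves satisfy the comparison get a count of zero. A sketch for `is_sup`, again via convolution (an outside formulation, not the class's own code):

```python
import numpy as np
from scipy.ndimage import convolve

matrix = np.array([[9, 0, 4, 6], [4, 9, 1, 3], [7, 2, 1, 4], [9, 0, 8, 5]], dtype=np.int8)
kernel = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=np.uint8)

# Count 4-connected neighbors strictly greater than 1 (edge-replicated).
sup_neighbor_nb = convolve((matrix > 1).astype(np.uint8), kernel, mode='nearest')
# The and_itself=True step: zero the count where the pixel itself is <= 1.
sup_neighbor_nb[matrix <= 1] = 0
print(sup_neighbor_nb)
```

Without the final masking line, the result matches the `is_sup(1)` example output in the docstring above.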

EdgeIdentification

Identify edges within a skeleton structure.

This class is used to identify edges within a skeleton structure based on provided skeleton and distance arrays. It performs various operations to refine and label edges, ultimately producing a fully identified network.
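A common building block for this kind of identification — shown here as an independent sketch, not the class's actual algorithm — is classifying skeleton pixels by their number of 8-connected skeleton neighbors: endpoints have one, edge pixels two, and junction pixels three or more:

```python
import numpy as np
from scipy.ndimage import convolve

# A small cross-shaped skeleton (1 = skeleton pixel).
skeleton = np.array([[0, 0, 1, 0, 0],
                     [0, 0, 1, 0, 0],
                     [1, 1, 1, 1, 1],
                     [0, 0, 1, 0, 0],
                     [0, 0, 1, 0, 0]], dtype=np.uint8)

# Count the 8-connected skeleton neighbors of each skeleton pixel.
kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=np.uint8)
neighbor_nb = convolve(skeleton, kernel, mode='constant', cval=0) * skeleton

endpoints = neighbor_nb == 1   # branch tips
junctions = neighbor_nb >= 3   # crossing region
print(int(endpoints.sum()), int(junctions.sum()))  # 4 5
```

Note that with 8-connectivity a single crossing yields a small cluster of junction pixels; refinement steps like those described for this class typically merge such a cluster into one vertex before labeling edges.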

Source code in src/cellects/image_analysis/network_functions.py
class EdgeIdentification:
    """Initialize the class with skeleton and distance arrays.

    This class is used to identify edges within a skeleton structure based on
    provided skeleton and distance arrays. It performs various operations to
    refine and label edges, ultimately producing a fully identified network.
    """
    def __init__(self, pad_skeleton: NDArray[np.uint8], pad_distances: NDArray[np.float64], t: int=0):
        """
        Initialize the class with skeleton and distance arrays.

        Parameters
        ----------
        pad_skeleton : ndarray of uint8
            Padded array representing the skeleton.
        pad_distances : ndarray of float64
            Padded array of distances corresponding to the skeleton.
        t : int, optional
            Frame index, used in log messages (default 0).

        Attributes
        ----------
        remaining_vertices : None
            Remaining vertices. Initialized as `None`.
        vertices : None
            Vertices. Initialized as `None`.
        growing_vertices : None
            Growing vertices. Initialized as `None`.
        im_shape : tuple of ints
            Shape of the skeleton array.
        """
        self.pad_skeleton = pad_skeleton
        self.pad_distances = pad_distances
        self.t = t
        self.remaining_vertices = None
        self.vertices = None
        self.growing_vertices = None
        self.im_shape = pad_skeleton.shape

    def run_edge_identification(self):
        """
        Run the edge identification process.

        This method orchestrates a series of steps to identify and label edges
        within the graph structure. Each step handles a specific aspect of edge
        identification, ultimately leading to a clearer and more refined edge network.

        Steps involved:
        1. Get vertices and tips coordinates.
        2. Identify tipped edges.
        3. Remove tipped edges smaller than branch width.
        4. Label tipped edges and their vertices.
        5. Check vertex existence (handles a skeleton reduced to a single loop).
        6. Label edges connected with vertex clusters.
        7. Label edges connecting vertex clusters.
        8. Label edges from known vertices iteratively.
        9. Label edges looping on 1 vertex.
        10. Clear areas with 1 or 2 unidentified pixels.
        11. Clear edge duplicates.
        12. Clear vertices connecting 2 edges.
        """
        self.get_vertices_and_tips_coord()
        self.get_tipped_edges()
        self.remove_tipped_edge_smaller_than_branch_width()
        self.label_tipped_edges_and_their_vertices()
        self.check_vertex_existence()
        self.label_edges_connected_with_vertex_clusters()
        self.label_edges_connecting_vertex_clusters()
        self.label_edges_from_known_vertices_iteratively()
        self.label_edges_looping_on_1_vertex()
        self.clear_areas_of_1_or_2_unidentified_pixels()
        self.clear_edge_duplicates()
        self.clear_vertices_connecting_2_edges()

    def get_vertices_and_tips_coord(self):
        """Process skeleton data to extract non-tip vertices and tip coordinates.

        This method processes the skeleton stored in `self.pad_skeleton` by first
        extracting all vertices and tips. It then separates these into branch points
        (non-tip vertices) and specific tip coordinates using internal processing.

        Attributes
        ----------
        self.non_tip_vertices : array-like
            Coordinates of non-tip (branch) vertices.
        self.tips_coord : array-like
            Coordinates of identified tips in the skeleton.
        """
        pad_vertices, pad_tips = get_vertices_and_tips_from_skeleton(self.pad_skeleton)
        self.non_tip_vertices, self.tips_coord = get_branches_and_tips_coord(pad_vertices, pad_tips)
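The helpers `get_vertices_and_tips_from_skeleton` and `get_branches_and_tips_coord` are defined elsewhere in cellects; a minimal NumPy-only sketch of the underlying idea (using 4-connectivity for simplicity, whereas the real helpers presumably handle 8-connectivity and more edge cases) could look like:

```python
import numpy as np

def neighbor_counts(skel):
    """Count the 4-connected skeleton neighbors of every skeleton pixel."""
    s = skel.astype(np.uint8)
    p = np.pad(s, 1)
    counts = np.zeros_like(s)
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        counts += p[1 + dy: 1 + dy + s.shape[0], 1 + dx: 1 + dx + s.shape[1]]
    return counts * s  # keep counts on skeleton pixels only

# A plus-shaped skeleton: 4 tips and 1 branching vertex at the center.
skel = np.zeros((5, 5), dtype=np.uint8)
skel[2, :] = 1
skel[:, 2] = 1
n = neighbor_counts(skel)
tips = (n == 1)        # pixels with a single neighbor
branches = (n >= 3)    # pixels where 3 or more edges meet
```

The same neighbor-count criterion (1 neighbor for a tip, 3+ for a branch point) underlies most skeleton-analysis pipelines.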

    def get_tipped_edges(self):
        """
        Extract skeleton edges connecting branching points and tips.

        Makes sure that there is only one connected component constituting the skeleton of the network and
        identifies all edges that are connected to a tip.

        Attributes
        ----------
        pad_skeleton : ndarray of bool, modified
            Boolean mask representing the pruned skeleton after isolating the largest connected component.
        vertices_branching_tips : ndarray of int, shape (N, 2)
            Coordinates of branching points that connect to tips in the skeleton structure.
        edge_lengths : ndarray of float, shape (M,)
            Lengths of edges connecting non-tip vertices to identified tip locations.
        edge_pix_coord : list of array of int
            Pixel coordinates for each edge path between connected skeleton elements.

        """
        self.pad_skeleton = keep_one_connected_component(self.pad_skeleton)
        self.vertices_branching_tips, self.edge_lengths, self.edge_pix_coord = _find_closest_vertices(self.pad_skeleton,
                                                                                        self.non_tip_vertices,
                                                                                        self.tips_coord[:, :2])
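`keep_one_connected_component` comes from elsewhere in cellects (plausibly backed by `cv2.connectedComponentsWithStats`); a dependency-free BFS sketch of the same idea, keeping only the largest 8-connected component:

```python
import numpy as np
from collections import deque

def keep_one_connected_component(mask):
    """Keep only the largest 8-connected component (BFS sketch)."""
    mask = mask.astype(np.uint8)
    labels = np.zeros_like(mask, dtype=np.int32)
    current = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        current += 1
        q = deque([(sy, sx)])
        labels[sy, sx] = current
        while q:
            y, x = q.popleft()
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                            and mask[ny, nx] and not labels[ny, nx]):
                        labels[ny, nx] = current
                        q.append((ny, nx))
    if current == 0:
        return mask
    sizes = np.bincount(labels.ravel())[1:]        # component sizes, label 1..current
    return (labels == (np.argmax(sizes) + 1)).astype(np.uint8)

mask = np.zeros((4, 6), dtype=np.uint8)
mask[0, :4] = 1    # a 4-pixel component
mask[3, 5] = 1     # an isolated pixel, discarded
largest = keep_one_connected_component(mask)
```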

    def remove_tipped_edge_smaller_than_branch_width(self):
        """Remove very short edges from the skeleton.

        This method focuses on edges connecting tips. When too short, they are considered noise and
        removed from the skeleton and distance matrices. An edge is considered too short when its length is
        smaller than the width of the nearest network branch (information stored in pad_distances).
        This method also updates internal data structures (skeleton, edge coordinates, vertex/tip positions)
        accordingly through pixel-wise analysis and connectivity checks.
        """
        # Identify edges that are smaller than the width of the branch they are attached to
        tipped_edges_to_remove = np.zeros(self.edge_lengths.shape[0], dtype=bool)
        # connecting_vertices_to_remove = np.zeros(self.vertices_branching_tips.shape[0], dtype=bool)
        branches_to_remove = np.zeros(self.non_tip_vertices.shape[0], dtype=bool)
        new_edge_pix_coord = []
        remaining_tipped_edges_nb = 0
        for i in range(len(self.edge_lengths)): # i = 3142 #1096 # 974 # 222
            Y, X = self.vertices_branching_tips[i, 0], self.vertices_branching_tips[i, 1]
            edge_bool = self.edge_pix_coord[:, 2] == i + 1
            eY, eX = self.edge_pix_coord[edge_bool, 0], self.edge_pix_coord[edge_bool, 1]
            if np.nanmax(self.pad_distances[(Y - 1): (Y + 2), (X - 1): (X + 2)]) >= self.edge_lengths[i]:
                tipped_edges_to_remove[i] = True
                # Remove the edge
                self.pad_skeleton[eY, eX] = 0
                # Remove the tip
                self.pad_skeleton[self.tips_coord[i, 0], self.tips_coord[i, 1]] = 0

                # Remove the coordinates corresponding to that edge
                self.edge_pix_coord = np.delete(self.edge_pix_coord, edge_bool, 0)

                # check whether the connecting vertex remains a vertex or not
                pad_sub_skeleton = np.pad(self.pad_skeleton[(Y - 2): (Y + 3), (X - 2): (X + 3)], [(1,), (1,)],
                                          mode='constant')
                sub_vertices, sub_tips = get_vertices_and_tips_from_skeleton(pad_sub_skeleton)
                # If the vertex does not connect at least 3 edges anymore, remove its vertex label
                if sub_vertices[3, 3] == 0:
                    vertex_to_remove = np.nonzero(np.logical_and(self.non_tip_vertices[:, 0] == Y, self.non_tip_vertices[:, 1] == X))[0]
                    branches_to_remove[vertex_to_remove] = True
                # If that pixel became a tip connected to another vertex remove it from the skeleton
                if sub_tips[3, 3]:
                    if sub_vertices[2:5, 2:5].sum() > 1:
                        self.pad_skeleton[Y, X] = 0
                        self.edge_pix_coord = np.delete(self.edge_pix_coord, np.all(self.edge_pix_coord[:, :2] == [Y, X], axis=1), 0)
                        vertex_to_remove = np.nonzero(np.logical_and(self.non_tip_vertices[:, 0] == Y, self.non_tip_vertices[:, 1] == X))[0]
                        branches_to_remove[vertex_to_remove] = True
            else:
                remaining_tipped_edges_nb += 1
                new_edge_pix_coord.append(np.stack((eY, eX, np.repeat(remaining_tipped_edges_nb, len(eY))), axis=1))

        # Check that excess connected components are 1 pixel in size; if so:
        # it means that they were neighbors of removed tips and are not necessary for the skeleton
        nb, sh = cv2.connectedComponents(self.pad_skeleton)
        if nb > 2:
            logging.error("Removing small tipped edges split the skeleton")
            # for i in range(2, nb):
            #     excedent = sh == i
            #     if (excedent).sum() == 1:
            #         self.pad_skeleton[excedent] = 0

        # Remove in distances the pixels removed in skeleton:
        self.pad_distances *= self.pad_skeleton

        # update edge_pix_coord
        if len(new_edge_pix_coord) > 0:
            self.edge_pix_coord = np.vstack(new_edge_pix_coord)

        # # Remove tips connected to very small edges
        # self.tips_coord = np.delete(self.tips_coord, tipped_edges_to_remove, 0)
        # # Add corresponding edge names
        # self.tips_coord = np.hstack((self.tips_coord, np.arange(1, len(self.tips_coord) + 1)[:, None]))

        # # Within all branching (non-tip) vertices, keep those that did not lose their vertex status because of the edge removal
        # self.non_tip_vertices = np.delete(self.non_tip_vertices, branches_to_remove, 0)

        # # Get the branching vertices that kept their tipped edge
        # self.vertices_branching_tips = np.delete(self.vertices_branching_tips, tipped_edges_to_remove, 0)

        # Within all branching (non-tip) vertices, keep those that do not connect a tipped edge.
        # v_branching_tips_in_branching_v = find_common_coord(self.non_tip_vertices, self.vertices_branching_tips[:, :2])
        # self.remaining_vertices = np.delete(self.non_tip_vertices, v_branching_tips_in_branching_v, 0)
        # ordered_v_coord = np.vstack((self.tips_coord[:, :2], self.vertices_branching_tips[:, :2], self.remaining_vertices))

        # tips = self.tips_coord
        # branching_any_edge = self.non_tip_vertices
        # branching_typped_edges = self.vertices_branching_tips
        # branching_no_typped_edges = self.remaining_vertices

        self.get_vertices_and_tips_coord()
        self.get_tipped_edges()

    def label_tipped_edges_and_their_vertices(self):
        """Label edges connecting tip vertices to branching vertices and assign unique labels to all relevant vertices.

        Processes vertex coordinates by stacking tips, vertices branching from tips, and remaining non-tip vertices.
        Assigns unique sequential identifiers to these vertices in a new array. Constructs an array of edge-label information,
        where each row contains the edge label (starting at 1), corresponding tip label, and connected vertex label.

        Attributes
        ----------
        tip_number : int
            The number of tip coordinates available in `tips_coord`.

        ordered_v_coord : ndarray of float
            Stack of unique vertex coordinates ordered by: tips first, vertices branching tips second, non-tip vertices third.

        numbered_vertices : ndarray of uint32
            2D array where each coordinate position is labeled with a sequential integer (starting at 1) based on the order in `ordered_v_coord`.

        edges_labels : ndarray of uint32
            Array of shape (n_edges, 3). Each row contains:
            - Edge label (sequential from 1 to n_edges)
            - Label of the tip vertex for that edge.
            - Label of the vertex branching the tip.

        vertices_branching_tips : ndarray of float
            Unique coordinates of vertices directly connected to tips after removing duplicates.
        """
        self.tip_number = self.tips_coord.shape[0]

        # Stack vertex coordinates in that order: 1. Tips, 2. Vertices branching tips, 3. All remaining vertices
        ordered_v_coord = np.vstack((self.tips_coord[:, :2], self.vertices_branching_tips[:, :2], self.non_tip_vertices))
        ordered_v_coord = np.unique(ordered_v_coord, axis=0)

        # Create arrays to store edges and vertices labels
        self.numbered_vertices = np.zeros(self.im_shape, dtype=np.uint32)
        self.numbered_vertices[ordered_v_coord[:, 0], ordered_v_coord[:, 1]] = np.arange(1, ordered_v_coord.shape[0] + 1)
        self.vertices = None
        self.vertex_index_map = {}
        for idx, (y, x) in enumerate(ordered_v_coord):
            self.vertex_index_map[idx + 1] = tuple((np.uint32(y), np.uint32(x)))

        # Name edges from 1 to the number of edges connecting tips and set the vertices labels from all tips to their connected vertices:
        self.edges_labels = np.zeros((self.tip_number, 3), dtype=np.uint32)
        # edge label:
        self.edges_labels[:, 0] = np.arange(self.tip_number) + 1
        # tip label:
        self.edges_labels[:, 1] = self.numbered_vertices[self.tips_coord[:, 0], self.tips_coord[:, 1]]
        # vertex branching tip label:
        self.edges_labels[:, 2] = self.numbered_vertices[self.vertices_branching_tips[:, 0], self.vertices_branching_tips[:, 1]]

        # Remove duplicates in vertices_branching_tips
        self.vertices_branching_tips = np.unique(self.vertices_branching_tips[:, :2], axis=0)
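The labelling pattern above (deduplicate coordinates, write sequential labels into an image, and keep a label-to-coordinate map) can be illustrated in isolation with hypothetical coordinates; note that `np.unique(..., axis=0)` also sorts rows lexicographically, so labels follow that sorted order:

```python
import numpy as np

# Hypothetical coordinates: two tips and a branching vertex listed twice.
coords = np.array([[0, 0], [4, 4], [2, 2], [2, 2]])
ordered = np.unique(coords, axis=0)    # drops the duplicate row, sorts rows

# Write sequential labels (starting at 1) into a label image.
numbered = np.zeros((5, 5), dtype=np.uint32)
numbered[ordered[:, 0], ordered[:, 1]] = np.arange(1, ordered.shape[0] + 1)

# Label -> coordinate lookup, mirroring vertex_index_map above.
index_map = {i + 1: (int(y), int(x)) for i, (y, x) in enumerate(ordered)}
```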

    def check_vertex_existence(self):
        """Handle a skeleton that contains no tip and no branching vertex.

        When the skeleton is a single closed loop, an arbitrary skeleton pixel
        is promoted to a vertex and the remaining pixels form one looping edge.
        """
        if self.tips_coord.shape[0] == 0 and self.non_tip_vertices.shape[0] == 0:
            loop_coord = np.nonzero(self.pad_skeleton)
            start = 1
            end = 1
            vertex_coord = loop_coord[0][0], loop_coord[1][0]
            self.numbered_vertices[vertex_coord[0], vertex_coord[1]] = 1
            self.vertex_index_map[1] = vertex_coord
            self.non_tip_vertices = np.array(vertex_coord)[None, :]
            new_edge_lengths = len(loop_coord[0]) - 1
            new_edge_pix_coord = np.transpose(np.vstack(((loop_coord[0][1:], loop_coord[1][1:], np.zeros(new_edge_lengths, dtype=np.int32)))))
            self.edge_pix_coord = np.zeros((0, 3), dtype=np.int32)
            self._update_edge_data(start, end, new_edge_lengths, new_edge_pix_coord)

    def label_edges_connected_with_vertex_clusters(self):
        """
        Identify edges connected to touching vertices by processing vertex clusters.

        This function processes the skeleton to identify edges connecting vertices
        that are part of touching clusters. It creates a cropped version of the skeleton
        by removing already detected edges and their tips, then iterates through vertex
        clusters to explore and identify nearby edges.
        """
        # I.1. Identify edges connected to touching vertices:
        # First, create another version of these arrays, where we remove every already detected edge and their tips
        cropped_skeleton = self.pad_skeleton.copy()
        cropped_skeleton[self.edge_pix_coord[:, 0], self.edge_pix_coord[:, 1]] = 0
        cropped_skeleton[self.tips_coord[:, 0], self.tips_coord[:, 1]] = 0

        # non_tip_vertices does not need to be updated yet, because it only contains verified branching vertices
        cropped_non_tip_vertices = self.non_tip_vertices.copy()

        self.new_level_vertices = None
        # The problem with vertex_to_vertex_connexions is that, since they are not separated by zeros,
        # they always attract each other instead of exploring other paths.
        # To fix this, we loop over each vertex group to:
        # 1. Add one edge per inter-vertex connexion inside the group
        # 2. Remove all except one, and loop as many times as necessary.
        # Inside that second loop, we explore and identify every edge nearby.
        # Find every vertex_to_vertex_connexion
        v_cluster_nb, self.v_cluster_lab, self.v_cluster_stats, vgc = cv2.connectedComponentsWithStats(
            (self.numbered_vertices > 0).astype(np.uint8), connectivity=8)
        if v_cluster_nb > 1:  # label 0 is the background, so clusters exist only beyond 1 label
            max_v_nb = np.max(self.v_cluster_stats[1:, 4])
            cropped_skeleton_list = []
            starting_vertices_list = []
            for v_nb in range(2, max_v_nb + 1):
                labels = np.nonzero(self.v_cluster_stats[:, 4] == v_nb)[0]
                coord_list = []
                for lab in labels:  # lab=labels[0]
                    coord_list.append(np.nonzero(self.v_cluster_lab == lab))
                for iter in range(v_nb):
                    for lab_ in range(labels.shape[0]): # lab=labels[0]
                        cs = cropped_skeleton.copy()
                        sv = []
                        v_c = coord_list[lab_]
                        # Save the current coordinate in the starting vertices array of this iteration
                        sv.append([v_c[0][iter], v_c[1][iter]])
                        # Remove one vertex coordinate to keep it from cs
                        v_y, v_x = np.delete(v_c[0], iter), np.delete(v_c[1], iter)
                        cs[v_y, v_x] = 0
                        cropped_skeleton_list.append(cs)
                        starting_vertices_list.append(np.array(sv))
            for cropped_skeleton, starting_vertices in zip(cropped_skeleton_list, starting_vertices_list):
                _, _ = self._identify_edges_connecting_a_vertex_list(cropped_skeleton, cropped_non_tip_vertices, starting_vertices)

    def label_edges_connecting_vertex_clusters(self):
        """
        Label edges connecting vertex clusters.

        This method identifies the connections between connected vertices within
        vertex clusters and labels these edges. It uses the previously found connected
        vertices, creates an image of the connections, and then identifies
        and labels the edges between these touching vertices.
        """
        # I.2. Identify the connexions between connected vertices:
        all_connected_vertices = np.nonzero(self.v_cluster_stats[:, 4] > 1)[0][1:]
        all_con_v_im = np.zeros_like(self.pad_skeleton)
        for v_group in all_connected_vertices:
            all_con_v_im[self.v_cluster_lab == v_group] = 1
        cropped_skeleton = all_con_v_im
        self.vertex_clusters_coord = np.transpose(np.array(np.nonzero(cropped_skeleton)))
        _, _ = self._identify_edges_connecting_a_vertex_list(cropped_skeleton, self.vertex_clusters_coord, self.vertex_clusters_coord)
        # self.edges_labels
        del self.v_cluster_stats
        del self.v_cluster_lab

    def label_edges_from_known_vertices_iteratively(self):
        """
        Label edges iteratively from known vertices.

        This method labels edges in an iterative process starting from
        known vertices. It handles the removal of detected edges and
        updates the skeleton accordingly, to avoid detecting edges twice.
        """
        # II/ Identify all remaining edges
        if self.new_level_vertices is not None:
            starting_vertices_coord = np.vstack((self.new_level_vertices[:, :2], self.vertices_branching_tips))
            starting_vertices_coord = np.unique(starting_vertices_coord, axis=0)
        else:
            # We start from the vertices connecting tips
            starting_vertices_coord = self.vertices_branching_tips.copy()
        # Remove the detected edges from cropped_skeleton:
        cropped_skeleton = self.pad_skeleton.copy()
        cropped_skeleton[self.edge_pix_coord[:, 0], self.edge_pix_coord[:, 1]] = 0
        cropped_skeleton[self.tips_coord[:, 0], self.tips_coord[:, 1]] = 0
        cropped_skeleton[self.vertex_clusters_coord[:, 0], self.vertex_clusters_coord[:, 1]] = 0

        # Reinitialize cropped_non_tip_vertices to browse all vertices except tips and groups
        cropped_non_tip_vertices = self.non_tip_vertices.copy()
        cropped_non_tip_vertices = remove_coordinates(cropped_non_tip_vertices, self.vertex_clusters_coord)
        del self.vertex_clusters_coord
        remaining_v = cropped_non_tip_vertices.shape[0] + 1
        while remaining_v > cropped_non_tip_vertices.shape[0]:
            remaining_v = cropped_non_tip_vertices.shape[0]
            cropped_skeleton, cropped_non_tip_vertices = self._identify_edges_connecting_a_vertex_list(cropped_skeleton, cropped_non_tip_vertices, starting_vertices_coord)
            if self.new_level_vertices is not None:
                starting_vertices_coord = np.unique(self.new_level_vertices[:, :2], axis=0)

    def label_edges_looping_on_1_vertex(self):
        """
        Identify and handle edges that form loops around a single vertex.
        This method processes the skeleton image to find looping edges and updates
        the edge data structure accordingly.
        """
        self.identified = np.zeros_like(self.numbered_vertices)
        self.identified[self.edge_pix_coord[:, 0], self.edge_pix_coord[:, 1]] = 1
        self.identified[self.non_tip_vertices[:, 0], self.non_tip_vertices[:, 1]] = 1
        self.identified[self.tips_coord[:, 0], self.tips_coord[:, 1]] = 1
        unidentified = (1 - self.identified) * self.pad_skeleton

        # Find out the remaining non-identified pixels
        nb, self.unidentified_shapes, self.unidentified_stats, ce = cv2.connectedComponentsWithStats(unidentified.astype(np.uint8))
        # Handle the cases where edges are loops over only one vertex
        looping_edges = np.nonzero(self.unidentified_stats[:, 4] > 2)[0][1:]
        for loop_i in looping_edges: # loop_i = looping_edges[0] loop_i=11 #  zoom_on_nonzero(unique_vertices_im, return_coord=False)
            edge_i = (self.unidentified_shapes == loop_i).astype(np.uint8)
            dil_edge_i = cv2.dilate(edge_i, square_33)
            unique_vertices_im = self.numbered_vertices.copy()
            unique_vertices_im[self.tips_coord[:, 0], self.tips_coord[:, 1]] = 0
            unique_vertices_im = dil_edge_i * unique_vertices_im
            unique_vertices = np.unique(unique_vertices_im)
            unique_vertices = unique_vertices[unique_vertices > 0]
            v_nb = len(unique_vertices)
            new_edge_lengths = edge_i.sum()
            new_edge_pix_coord = np.transpose(np.vstack((np.nonzero(edge_i))))
            new_edge_pix_coord = np.hstack((new_edge_pix_coord, np.repeat(1, new_edge_pix_coord.shape[0])[:, None]))
            if v_nb == 1:
                start, end = unique_vertices[0], unique_vertices[0]
                self._update_edge_data(start, end, new_edge_lengths, new_edge_pix_coord)
            elif v_nb == 2:
                # The edge loops around a group of connected vertices
                start, end = unique_vertices[0], unique_vertices[1]
                self._update_edge_data(start, end, new_edge_lengths, new_edge_pix_coord)
                # conn_v_nb, conn_v = cv2.connectedComponents((unique_vertices_im > 0).astype(np.uint8))
                # if len(unique_vertices) == 2 and conn_v_nb == 2:
            elif v_nb > 2: # The question is: How to choose two vertices so that they link all missing pixels?
                # 1. Find every edge pixel connected to these vertices
                vertex_connected_pixels = np.nonzero(cv2.dilate((unique_vertices_im > 0).astype(np.uint8), square_33) * edge_i)
                # 2. Find the most distant pair of these
                pix1, pix2 = get_min_or_max_euclidean_pair(vertex_connected_pixels, "max")
                # 3. The two best vertices are the two nearest to these two most distant edge pixels
                dist_to_pix1 = np.zeros(v_nb, np.float64)
                dist_to_pix2 = np.zeros(v_nb, np.float64)
                for _i, v_i in enumerate(unique_vertices):
                    v_coord = self.vertex_index_map[v_i]
                    dist_to_pix1[_i] = eudist(pix1, v_coord)
                    dist_to_pix2[_i] = eudist(pix2, v_coord)
                start, end = unique_vertices[np.argmin(dist_to_pix1)], unique_vertices[np.argmin(dist_to_pix2)]
                self._update_edge_data(start, end, new_edge_lengths, new_edge_pix_coord)
            else:
                logging.error(f"t={self.t}, One long edge is not identified: i={loop_i} of length={edge_i.sum()} close to {len(unique_vertices)} vertices.")
        self.identified[self.edge_pix_coord[:, 0], self.edge_pix_coord[:, 1]] = 1
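`get_min_or_max_euclidean_pair` and `eudist` are helpers defined elsewhere in cellects; a brute-force sketch of the `"max"` case (quadratic in the number of points, which is fine for the small pixel sets handled here):

```python
import numpy as np

def max_euclidean_pair(ys, xs):
    """Return the most distant pair of (y, x) coordinates (brute-force sketch)."""
    pts = np.stack((ys, xs), axis=1).astype(np.float64)
    # Pairwise Euclidean distance matrix via broadcasting.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    return tuple(pts[i].astype(int)), tuple(pts[j].astype(int))

p1, p2 = max_euclidean_pair(np.array([0, 3, 0]), np.array([0, 4, 1]))
```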

    def clear_areas_of_1_or_2_unidentified_pixels(self):
        """Removes 1 or 2 pixel size non-identified areas from the skeleton.

        This function checks whether small non-identified areas (1 or 2 pixels)
        can be removed without breaking the skeleton structure. It performs
        a series of operations to ensure only safe removals are made, logging
        errors if the final skeleton is not fully connected or if some unidentified pixels remain.
        """
        # Check whether the 1 or 2 pixel size non-identified areas can be removed without breaking the skel
        one_pix = np.nonzero(self.unidentified_stats[:, 4] <= 2)[0]
        cutting_removal = []
        for pix_i in one_pix: #pix_i=one_pix[0]
            skel_copy = self.pad_skeleton.copy()
            y1, y2, x1, x2 = self.unidentified_stats[pix_i, 1], self.unidentified_stats[pix_i, 1] + self.unidentified_stats[pix_i, 3], self.unidentified_stats[pix_i, 0], self.unidentified_stats[pix_i, 0] + self.unidentified_stats[pix_i, 2]
            skel_copy[y1:y2, x1:x2][self.unidentified_shapes[y1:y2, x1:x2] == pix_i] = 0
            nb1, sh1 = cv2.connectedComponents(skel_copy.astype(np.uint8), connectivity=8)
            if nb1 > 2:
                cutting_removal.append(pix_i)
            else:
                self.pad_skeleton[y1:y2, x1:x2][self.unidentified_shapes[y1:y2, x1:x2] == pix_i] = 0
        if len(cutting_removal) > 0:
            logging.error(f"t={self.t}, These pixels break the skeleton when removed: {cutting_removal}")
        if (self.identified > 0).sum() != self.pad_skeleton.sum():
            logging.error(f"t={self.t}, Proportion of identified pixels in the skeleton: {(self.identified > 0).sum() / self.pad_skeleton.sum()}")
        self.pad_distances *= self.pad_skeleton
        del self.identified
        del self.unidentified_stats
        del self.unidentified_shapes


    def _identify_edges_connecting_a_vertex_list(self, cropped_skeleton: NDArray[np.uint8], cropped_non_tip_vertices: NDArray, starting_vertices_coord: NDArray) -> Tuple[NDArray[np.uint8], NDArray]:
        """Identify edges connecting a list of vertices within a cropped skeleton.

        This function iteratively connects the closest vertices from starting_vertices_coord to their nearest neighbors,
        updating the skeleton and removing already connected vertices until no new connections can be made or
        a maximum number of connections is reached.

        Parameters
        ----------
        cropped_skeleton : ndarray of uint8
            A binary skeleton image where skeletal pixels are marked as 1.
        cropped_non_tip_vertices : ndarray of int
            Coordinates of non-tip vertices in the cropped skeleton.
        starting_vertices_coord : ndarray of int
            Coordinates of vertices from which to find connections.

        Returns
        -------
        cropped_skeleton : ndarray of uint8
            Updated skeleton with edges marked as 0.
        cropped_non_tip_vertices : ndarray of int
            Updated list of non-tip vertices after removing those that have been connected.
        """
        explored_connexions_per_vertex = 0  # the maximal edge number that can connect a vertex
        new_connexions = True
        while new_connexions and explored_connexions_per_vertex < 5 and np.any(cropped_non_tip_vertices) and np.any(starting_vertices_coord):

            explored_connexions_per_vertex += 1
            # 1. Find the ith closest vertex to each focal vertex
            ending_vertices_coord, new_edge_lengths, new_edge_pix_coord = _find_closest_vertices(
                cropped_skeleton, cropped_non_tip_vertices, starting_vertices_coord)
            if np.isnan(new_edge_lengths).sum() + (new_edge_lengths == 0).sum() == new_edge_lengths.shape[0]:
                new_connexions = False
            else:
                # In new_edge_lengths, zeros are duplicates and nan are lone vertices (from starting_vertices_coord)
                # Find out which starting_vertices_coord should be kept and which one should be used to save edges
                no_new_connexion = np.isnan(new_edge_lengths)
                no_found_connexion = np.logical_or(no_new_connexion, new_edge_lengths == 0)
                found_connexion = np.logical_not(no_found_connexion)

                # Any vertex_to_vertex_connexions must be analyzed only once. We remove them with the non-connectable vertices
                vertex_to_vertex_connexions = new_edge_lengths == 1

                # Save edge data
                start = self.numbered_vertices[
                    starting_vertices_coord[found_connexion, 0], starting_vertices_coord[found_connexion, 1]]
                end = self.numbered_vertices[
                    ending_vertices_coord[found_connexion, 0], ending_vertices_coord[found_connexion, 1]]
                new_edge_lengths = new_edge_lengths[found_connexion]
                self._update_edge_data(start, end, new_edge_lengths, new_edge_pix_coord)

                no_new_connexion = np.logical_or(no_new_connexion, vertex_to_vertex_connexions)
                vertices_to_crop = starting_vertices_coord[no_new_connexion, :]

                # Remove non-connectable and connected_vertices from:
                cropped_non_tip_vertices = remove_coordinates(cropped_non_tip_vertices, vertices_to_crop)
                starting_vertices_coord = remove_coordinates(starting_vertices_coord, vertices_to_crop)

                if new_edge_pix_coord.shape[0] > 0:
                    # Update cropped_skeleton to not identify each edge more than once
                    cropped_skeleton[new_edge_pix_coord[:, 0], new_edge_pix_coord[:, 1]] = 0
                # And the starting vertices that cannot connect anymore
                cropped_skeleton[vertices_to_crop[:, 0], vertices_to_crop[:, 1]] = 0

                if self.new_level_vertices is None:
                    self.new_level_vertices = ending_vertices_coord[found_connexion, :].copy()
                else:
                    self.new_level_vertices = np.vstack((self.new_level_vertices, ending_vertices_coord[found_connexion, :]))

        return cropped_skeleton, cropped_non_tip_vertices
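`remove_coordinates` is imported from elsewhere in cellects; a plausible NumPy sketch of its behaviour (drop every row of the first array that also appears in the second) is:

```python
import numpy as np

def remove_coordinates(coords, to_remove):
    """Drop every row of `coords` that also appears in `to_remove` (sketch)."""
    if to_remove.shape[0] == 0 or coords.shape[0] == 0:
        return coords
    # Broadcast-compare every row of coords against every row of to_remove.
    match = (coords[:, None, :] == to_remove[None, :, :]).all(axis=2).any(axis=1)
    return coords[~match]

a = np.array([[1, 1], [2, 3], [5, 5]])
b = np.array([[2, 3], [9, 9]])
kept = remove_coordinates(a, b)
```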

    def _update_edge_data(self, start, end, new_edge_lengths: NDArray, new_edge_pix_coord: NDArray):
        """
        Update edge data by expanding existing arrays with new edges.

        This method updates the internal edge data structures (lengths,
        labels, and pixel coordinates) by appending new edges.

        Parameters
        ----------
        start : int or ndarray of int
            The starting vertex label(s) for the new edges.
        end : int or ndarray of int
            The ending vertex label(s) for the new edges.
        new_edge_lengths : ndarray of float
            The lengths of the new edges to be added.
        new_edge_pix_coord : ndarray of float
            The pixel coordinates of the new edges.

        Attributes
        ----------
        edge_lengths : ndarray of float
            The lengths of all edges.
        edges_labels : ndarray of int
            The labels for each edge (start and end vertices).
        edge_pix_coord : ndarray of float
            The pixel coordinates for all edges.
        """
        if isinstance(start, np.ndarray):
            end_idx = len(start)
            self.edge_lengths = np.concatenate((self.edge_lengths, new_edge_lengths))
        else:
            end_idx = 1
            self.edge_lengths = np.append(self.edge_lengths, new_edge_lengths)
        start_idx = self.edges_labels.shape[0]
        new_edges = np.zeros((end_idx, 3), dtype=np.uint32)
        new_edges[:, 0] = np.arange(start_idx, start_idx + end_idx) + 1  # edge label
        new_edges[:, 1] = start  # starting vertex label
        new_edges[:, 2] = end  # ending vertex label
        self.edges_labels = np.vstack((self.edges_labels, new_edges))
        # Add the new edge coord
        if new_edge_pix_coord.shape[0] > 0:
            # Add the new edge pixel coord
            new_edge_pix_coord[:, 2] += start_idx
            self.edge_pix_coord = np.vstack((self.edge_pix_coord, new_edge_pix_coord))

    def clear_edge_duplicates(self):
        """
        Remove duplicate edges by checking vertices and coordinates.

        This method identifies and removes duplicate edges based on their vertex labels
        and pixel coordinates. It scans through the edge attributes, compares them,
        and removes duplicates if they are found.
        """
        edges_to_remove = []
        duplicates = find_duplicates_coord(np.vstack((self.edges_labels[:, 1:], self.edges_labels[:, :0:-1])))
        duplicates = np.logical_or(duplicates[:len(duplicates)//2], duplicates[len(duplicates)//2:])
        for v in self.edges_labels[duplicates, 1:]: #v = self.edges_labels[duplicates, 1:][4]
            edges_bool = np.logical_or(np.all(self.edges_labels[:, 1:] == v, axis=1), np.all(self.edges_labels[:, 1:] == v[::-1], axis=1))
            edge_labs = self.edges_labels[edges_bool, 0]
            for edge_i in range(0, len(edge_labs) - 1):  #  edge_i = 0
                edge_i_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_labs[edge_i], :2]
                for edge_j in range(edge_i + 1, len(edge_labs)):  #  edge_j = 1
                    edge_j_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_labs[edge_j], :2]
                    if np.array_equal(edge_i_coord, edge_j_coord):
                        edges_to_remove.append(edge_labs[edge_j])
        edges_to_remove = np.unique(edges_to_remove)
        for edge in edges_to_remove:
            edge_bool = self.edges_labels[:, 0] != edge
            self.edges_labels = self.edges_labels[edge_bool, :]
            self.edge_lengths = self.edge_lengths[edge_bool]
            self.edge_pix_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] != edge, :]


    def clear_vertices_connecting_2_edges(self):
        """
        Remove vertices connecting exactly two edges and update edge-related attributes.

        This method identifies vertices that are connected to exactly 2 edges,
        renames edges, updates edge lengths and vertex coordinates accordingly.
        It also removes the corresponding vertices from non-tip vertices list.
        """
        v_labels, v_counts = np.unique(self.edges_labels[:, 1:], return_counts=True)
        vertices2 = v_labels[v_counts == 2]
        for vertex2 in vertices2:  # vertex2 = vertices2[0]
            edge_indices = np.nonzero(self.edges_labels[:, 1:] == vertex2)[0]
            edge_names = [self.edges_labels[edge_indices[0], 0], self.edges_labels[edge_indices[1], 0]]
            v_names = np.concatenate((self.edges_labels[edge_indices[0], 1:], self.edges_labels[edge_indices[1], 1:]))
            v_names = v_names[v_names != vertex2]
            if len(v_names) > 0: # Otherwise it's a vertex between a normal edge and a loop
                kept_edge = int(self.edge_lengths[edge_indices[1]] >= self.edge_lengths[edge_indices[0]])
                # Rename the removed edge by the kept edge name in pix_coord:
                self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_names[1 - kept_edge], 2] = edge_names[kept_edge]
                # Add the removed edge length to the kept edge length (minus 1: the shared vertex pixel is counted in both lengths)
                self.edge_lengths[self.edges_labels[:, 0] == edge_names[kept_edge]] += self.edge_lengths[self.edges_labels[:, 0] == edge_names[1 - kept_edge]] - 1
                # Remove the corresponding edge length from the list
                self.edge_lengths = self.edge_lengths[self.edges_labels[:, 0] != edge_names[1 - kept_edge]]
                # Rename the vertex of the kept edge in edges_labels
                self.edges_labels[self.edges_labels[:, 0] == edge_names[kept_edge], 1:] = v_names[1 - kept_edge], v_names[kept_edge]
                # Remove the removed edge from the edges_labels array
                self.edges_labels = self.edges_labels[self.edges_labels[:, 0] != edge_names[1 - kept_edge], :]
                # vY, vX = np.nonzero(self.numbered_vertices == vertex2)
                # v_idx = np.nonzero(np.all(self.non_tip_vertices == [vY[0], vX[0]], axis=1))
                vY, vX = self.vertex_index_map[vertex2]
                v_idx = np.nonzero(np.all(self.non_tip_vertices == [vY, vX], axis=1))
                self.non_tip_vertices = np.delete(self.non_tip_vertices, v_idx, axis=0)
        # Sometimes, clearing vertices connecting 2 edges can create edge duplicates, so:
        self.clear_edge_duplicates()

    def _remove_padding(self):
        """
        Removes padding from various coordinate arrays.

        This method adjusts the coordinates of edge pixels, tips,
        and non-tip vertices by subtracting 1 from their x and y values.
        It also removes padding from the skeleton, distances, and vertices
        using the `remove_padding` function.
        """
        self.edge_pix_coord[:, :2] -= 1
        self.tips_coord[:, :2] -= 1
        self.non_tip_vertices[:, :2] -= 1
        del self.vertex_index_map
        self.skeleton, self.distances, self.vertices = remove_padding(
            [self.pad_skeleton, self.pad_distances, self.numbered_vertices])


    def make_vertex_table(self, origin_contours: NDArray[np.uint8]=None, growing_areas: NDArray=None):
        """
        Generate a table for the vertices.

        This method constructs a 2D NumPy array holding information about all
        vertices. Each row corresponds to one vertex, identified either by its
        coordinates in `self.tips_coord` or in `self.non_tip_vertices`. For
        each vertex, the array stores its coordinates and label, plus flags
        marking tips, food vertices, growing areas, and vertex clusters.

        Parameters
        ----------
        origin_contours : ndarray of uint8, optional
            Binary map to identify food vertices. Default is `None`.
        growing_areas : ndarray, optional
            Binary map to identify growing regions. Default is `None`.

        Notes
        -----
        The method updates the instance attribute `self.vertex_table` with
        the generated vertex information.
        """
        if self.vertices is None:
            self._remove_padding()
        self.vertex_table = np.zeros((self.tips_coord.shape[0] + self.non_tip_vertices.shape[0], 6), dtype=self.vertices.dtype)
        self.vertex_table[:self.tips_coord.shape[0], :2] = self.tips_coord
        self.vertex_table[self.tips_coord.shape[0]:, :2] = self.non_tip_vertices
        self.vertex_table[:self.tips_coord.shape[0], 2] = self.vertices[self.tips_coord[:, 0], self.tips_coord[:, 1]]
        self.vertex_table[self.tips_coord.shape[0]:, 2] = self.vertices[self.non_tip_vertices[:, 0], self.non_tip_vertices[:, 1]]
        self.vertex_table[:self.tips_coord.shape[0], 3] = 1
        if origin_contours is not None:
            food_vertices = self.vertices[origin_contours > 0]
            food_vertices = food_vertices[food_vertices > 0]
            self.vertex_table[np.isin(self.vertex_table[:, 2], food_vertices), 4] = 1

        if growing_areas is not None and growing_areas.shape[1] > 0:
            # growing = np.unique(self.vertices * growing_areas)[1:]
            growing = np.unique(self.vertices[growing_areas[0], growing_areas[1]])
            growing = growing[growing > 0]
            if len(growing) > 0:
                growing = np.isin(self.vertex_table[:, 2], growing)
                self.vertex_table[growing, 4] = 2

        nb, sh, stats, cent = cv2.connectedComponentsWithStats((self.vertices > 0).astype(np.uint8))
        for i, v_i in enumerate(np.nonzero(stats[:, 4] > 1)[0][1:]):
            v_labs = self.vertices[sh == v_i]
            for v_lab in v_labs: # v_lab = v_labs[0]
                self.vertex_table[self.vertex_table[:, 2] == v_lab, 5] = 1


    def make_edge_table(self, greyscale: NDArray[np.uint8], compute_BC: bool=False):
        """
        Generate edge table with length and average intensity information.

        This method processes the vertex coordinates, calculates lengths
        between vertices for each edge, and computes average width and intensity
        along the edges. When `compute_BC` is True, it also computes edge
        betweenness centrality for each edge.

        Parameters
        ----------
        greyscale : ndarray of uint8
            Grayscale image.
        compute_BC : bool, optional
            Whether to compute edge betweenness centrality. Default is False.
        """
        if self.vertices is None:
            self._remove_padding()
        self.edge_table = np.zeros((self.edges_labels.shape[0], 7), float) # edge_id, vertex1, vertex2, length, average_width, int, BC
        self.edge_table[:, :3] = self.edges_labels[:, :]
        self.edge_table[:, 3] = self.edge_lengths
        for idx, edge_lab in enumerate(self.edges_labels[:, 0]):
            edge_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_lab, :]
            pix_widths = self.distances[edge_coord[:, 0], edge_coord[:, 1]]
            v_id = self.edges_labels[self.edges_labels[:, 0] == edge_lab, 1:][0]
            v1_coord = self.vertex_table[self.vertex_table[:, 2] == v_id[0], :2][0]#
            v2_coord = self.vertex_table[self.vertex_table[:, 2] == v_id[1], :2][0]#
            v1_width, v2_width = self.distances[v1_coord[0], v1_coord[1]], self.distances[v2_coord[0], v2_coord[1]]

            if not np.isnan(v1_width):
                pix_widths = np.append(pix_widths, v1_width)
            if not np.isnan(v2_width):
                pix_widths = np.append(pix_widths, v2_width)
            if pix_widths.size > 0:
                self.edge_table[idx, 4] = pix_widths.mean()
            else:
                self.edge_table[idx, 4] = np.nan
            pix_ints = greyscale[edge_coord[:, 0], edge_coord[:, 1]]
            v1_int, v2_int = greyscale[v1_coord[0], v1_coord[1]], greyscale[v2_coord[0], v2_coord[1]]
            pix_ints = np.append(pix_ints, (v1_int, v2_int))
            self.edge_table[idx, 5] = pix_ints.mean()

        if compute_BC:
            G = nx.from_edgelist(self.edges_labels[:, 1:])
            e_BC = nx.edge_betweenness_centrality(G, seed=0)
            self.BC_net = np.zeros_like(self.distances)
            for v, k in e_BC.items(): # v=(81, 80)
                v_bool = np.logical_or(self.vertex_table[:, 2] == v[0], self.vertex_table[:, 2] == v[1])
                full_coord = self.vertex_table[v_bool, :2]
                edges_bool = np.logical_or(np.all(self.edges_labels[:, 1:] == v[::-1], axis=1),
                                           np.all(self.edges_labels[:, 1:] == v, axis=1))
                edge_labs = self.edges_labels[edges_bool, 0]
                if len(edge_labs) == 1:
                    edge_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_labs, :2]
                    full_coord = np.concatenate((full_coord, edge_coord))
                    self.BC_net[full_coord[:, 0], full_coord[:, 1]] = k
                    self.edge_table[self.edge_table[:, 0] == edge_labs, 6] = k
                elif len(edge_labs) > 1:
                    edge_coord0 = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_labs[0], :2]
                    for edge_i in range(len(edge_labs)): #  edge_i=1
                        edge_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_labs[edge_i], :2]
                        self.edge_table[self.edge_table[:, 0] == edge_labs[edge_i], 6] = k
                        full_coord = np.concatenate((full_coord, edge_coord))
                        self.BC_net[full_coord[:, 0], full_coord[:, 1]] = k
                        if edge_i > 0 and np.array_equal(edge_coord0, edge_coord):
                            logging.error(f"There are still two identical edges: {edge_labs} of len: {len(edge_coord)} linking vertices {v}")
                            break

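In `make_edge_table`, betweenness centrality comes from `networkx.edge_betweenness_centrality`. For intuition, here is a minimal brute-force sketch with hypothetical helper names and unnormalized counts (unlike the networkx default): each shortest path between a vertex pair contributes its fraction to every edge it crosses.

```python
from itertools import combinations
from collections import deque

def all_shortest_paths(adj, s, t):
    """Enumerate all shortest paths from s to t (BFS distances + backtracking)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    if t not in dist:
        return []
    paths = []
    def walk(node, path):
        if node == s:
            paths.append(path[::-1])
            return
        # Step back along any neighbour that is exactly one BFS level closer to s
        for u in adj[node]:
            if u in dist and dist[u] == dist[node] - 1:
                walk(u, path + [u])
    walk(t, [t])
    return paths

def edge_betweenness(edges):
    """Unnormalized edge betweenness: over all vertex pairs, each shortest
    path adds 1/(number of shortest paths) to every edge it traverses."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    bc = {tuple(sorted(e)): 0.0 for e in edges}
    for s, t in combinations(adj, 2):
        paths = all_shortest_paths(adj, s, t)
        for p in paths:
            for a, b in zip(p, p[1:]):
                bc[tuple(sorted((a, b)))] += 1.0 / len(paths)
    return bc
```

On a path graph 1-2-3-4, the middle edge scores highest, since it lies on more vertex-pair shortest paths than the two outer edges.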
__init__(pad_skeleton, pad_distances, t=0)

Initialize the class with skeleton and distance arrays.

Parameters:

Name Type Description Default
pad_skeleton ndarray of uint8

Array representing the skeleton to pad.

required
pad_distances ndarray of float64

Array representing distances corresponding to the skeleton.

required

Attributes:

Name Type Description
remaining_vertices None

Remaining vertices. Initialized as None.

vertices None

Vertices. Initialized as None.

growing_vertices None

Growing vertices. Initialized as None.

im_shape tuple of ints

Shape of the skeleton array.

Source code in src/cellects/image_analysis/network_functions.py
def __init__(self, pad_skeleton: NDArray[np.uint8], pad_distances: NDArray[np.float64], t: int=0):
    """
    Initialize the class with skeleton and distance arrays.

    Parameters
    ----------
    pad_skeleton : ndarray of uint8
        Array representing the skeleton to pad.
    pad_distances : ndarray of float64
        Array representing distances corresponding to the skeleton.
    t : int, optional
        Frame index, used in log messages. Default is 0.

    Attributes
    ----------
    remaining_vertices : None
        Remaining vertices. Initialized as `None`.
    vertices : None
        Vertices. Initialized as `None`.
    growing_vertices : None
        Growing vertices. Initialized as `None`.
    im_shape : tuple of ints
        Shape of the skeleton array.
    """
    self.pad_skeleton = pad_skeleton
    self.pad_distances = pad_distances
    self.t = t
    self.remaining_vertices = None
    self.vertices = None
    self.growing_vertices = None
    self.im_shape = pad_skeleton.shape

clear_areas_of_1_or_2_unidentified_pixels()

Removes unidentified areas of one or two pixels from the skeleton.

This function checks whether small non-identified areas (1 or 2 pixels) can be removed without breaking the skeleton structure. It performs a series of operations to ensure only safe removals are made, logging errors if the final skeleton is not fully connected or if some unidentified pixels remain.

Source code in src/cellects/image_analysis/network_functions.py
def clear_areas_of_1_or_2_unidentified_pixels(self):
    """Removes unidentified areas of one or two pixels from the skeleton.

    This function checks whether small non-identified areas (1 or 2 pixels)
    can be removed without breaking the skeleton structure. It performs
    a series of operations to ensure only safe removals are made, logging
    errors if the final skeleton is not fully connected or if some unidentified pixels remain.
    """
    # Check whether the 1 or 2 pixel size non-identified areas can be removed without breaking the skel
    one_pix = np.nonzero(self.unidentified_stats[:, 4] <= 2)[0]
    cutting_removal = []
    for pix_i in one_pix: #pix_i=one_pix[0]
        skel_copy = self.pad_skeleton.copy()
        y1, y2, x1, x2 = self.unidentified_stats[pix_i, 1], self.unidentified_stats[pix_i, 1] + self.unidentified_stats[pix_i, 3], self.unidentified_stats[pix_i, 0], self.unidentified_stats[pix_i, 0] + self.unidentified_stats[pix_i, 2]
        skel_copy[y1:y2, x1:x2][self.unidentified_shapes[y1:y2, x1:x2] == pix_i] = 0
        nb1, sh1 = cv2.connectedComponents(skel_copy.astype(np.uint8), connectivity=8)
        if nb1 > 2:
            cutting_removal.append(pix_i)
        else:
            self.pad_skeleton[y1:y2, x1:x2][self.unidentified_shapes[y1:y2, x1:x2] == pix_i] = 0
    if len(cutting_removal) > 0:
        logging.error(f"t={self.t}, These pixels break the skeleton when removed: {cutting_removal}")
    if (self.identified > 0).sum() != self.pad_skeleton.sum():
        logging.error(f"t={self.t}, Proportion of identified pixels in the skeleton: {(self.identified > 0).sum() / self.pad_skeleton.sum()}")
    self.pad_distances *= self.pad_skeleton
    del self.identified
    del self.unidentified_stats
    del self.unidentified_shapes
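The safety test above — remove a candidate area, recount connected components, and keep the removal only if the skeleton stays in one piece — can be sketched without OpenCV. A minimal illustration with a plain BFS component counter (hypothetical function names, single-pixel removals only):

```python
import numpy as np

def count_components(mask):
    """Count 8-connected components in a boolean mask with an explicit-stack BFS."""
    mask = mask.astype(bool)
    seen = np.zeros_like(mask)
    n = 0
    h, w = mask.shape
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        n += 1
        stack = [(sy, sx)]
        seen[sy, sx] = True
        while stack:
            y, x = stack.pop()
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        stack.append((ny, nx))
    return n

def remove_pixel_if_safe(skeleton, y, x):
    """Zero out (y, x) only if doing so does not split the skeleton."""
    trial = skeleton.copy()
    trial[y, x] = 0
    if count_components(trial) <= count_components(skeleton):
        return trial, True
    return skeleton, False
```

Removing the middle pixel of a 3-pixel line splits it into two components and is rejected; removing an end pixel is accepted.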

clear_edge_duplicates()

Remove duplicate edges by checking vertices and coordinates.

This method identifies and removes duplicate edges based on their vertex labels and pixel coordinates. It scans through the edge attributes, compares them, and removes duplicates if they are found.

Source code in src/cellects/image_analysis/network_functions.py
def clear_edge_duplicates(self):
    """
    Remove duplicate edges by checking vertices and coordinates.

    This method identifies and removes duplicate edges based on their vertex labels
    and pixel coordinates. It scans through the edge attributes, compares them,
    and removes duplicates if they are found.
    """
    edges_to_remove = []
    duplicates = find_duplicates_coord(np.vstack((self.edges_labels[:, 1:], self.edges_labels[:, :0:-1])))
    duplicates = np.logical_or(duplicates[:len(duplicates)//2], duplicates[len(duplicates)//2:])
    for v in self.edges_labels[duplicates, 1:]: #v = self.edges_labels[duplicates, 1:][4]
        edges_bool = np.logical_or(np.all(self.edges_labels[:, 1:] == v, axis=1), np.all(self.edges_labels[:, 1:] == v[::-1], axis=1))
        edge_labs = self.edges_labels[edges_bool, 0]
        for edge_i in range(0, len(edge_labs) - 1):  #  edge_i = 0
            edge_i_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_labs[edge_i], :2]
            for edge_j in range(edge_i + 1, len(edge_labs)):  #  edge_j = 1
                edge_j_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_labs[edge_j], :2]
                if np.array_equal(edge_i_coord, edge_j_coord):
                    edges_to_remove.append(edge_labs[edge_j])
    edges_to_remove = np.unique(edges_to_remove)
    for edge in edges_to_remove:
        edge_bool = self.edges_labels[:, 0] != edge
        self.edges_labels = self.edges_labels[edge_bool, :]
        self.edge_lengths = self.edge_lengths[edge_bool]
        self.edge_pix_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] != edge, :]
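The duplicate test combines two criteria: same vertex pair in either orientation, and identical pixel traces. A minimal sketch of that check (hypothetical function name; the codebase's `find_duplicates_coord` pre-filter is replaced here by a plain O(n²) pair scan):

```python
import numpy as np

def find_duplicate_edges(edges_labels, edge_pix_coord):
    """Return edge ids that duplicate an earlier edge: same vertex pair
    (either orientation) and identical pixel coordinates.

    edges_labels   : (n, 3) array of [edge_id, v_start, v_end]
    edge_pix_coord : (m, 3) array of [y, x, edge_id]
    """
    to_remove = []
    # Orientation-independent vertex pair for each edge
    pairs = np.sort(edges_labels[:, 1:], axis=1)
    for i in range(len(edges_labels)):
        for j in range(i + 1, len(edges_labels)):
            if not np.array_equal(pairs[i], pairs[j]):
                continue
            # Same vertex pair: compare the pixel traces of both edges
            ci = edge_pix_coord[edge_pix_coord[:, 2] == edges_labels[i, 0], :2]
            cj = edge_pix_coord[edge_pix_coord[:, 2] == edges_labels[j, 0], :2]
            if np.array_equal(ci, cj):
                to_remove.append(int(edges_labels[j, 0]))
    return sorted(set(to_remove))
```

Two edges joining the same vertices but tracing different pixels (parallel edges) are kept; only exact trace duplicates are flagged.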

clear_vertices_connecting_2_edges()

Remove vertices connecting exactly two edges and update edge-related attributes.

This method identifies vertices that are connected to exactly 2 edges, renames edges, updates edge lengths and vertex coordinates accordingly. It also removes the corresponding vertices from non-tip vertices list.

Source code in src/cellects/image_analysis/network_functions.py
def clear_vertices_connecting_2_edges(self):
    """
    Remove vertices connecting exactly two edges and update edge-related attributes.

    This method identifies vertices that are connected to exactly 2 edges,
    renames edges, updates edge lengths and vertex coordinates accordingly.
    It also removes the corresponding vertices from non-tip vertices list.
    """
    v_labels, v_counts = np.unique(self.edges_labels[:, 1:], return_counts=True)
    vertices2 = v_labels[v_counts == 2]
    for vertex2 in vertices2:  # vertex2 = vertices2[0]
        edge_indices = np.nonzero(self.edges_labels[:, 1:] == vertex2)[0]
        edge_names = [self.edges_labels[edge_indices[0], 0], self.edges_labels[edge_indices[1], 0]]
        v_names = np.concatenate((self.edges_labels[edge_indices[0], 1:], self.edges_labels[edge_indices[1], 1:]))
        v_names = v_names[v_names != vertex2]
        if len(v_names) > 0: # Otherwise it's a vertex between a normal edge and a loop
            kept_edge = int(self.edge_lengths[edge_indices[1]] >= self.edge_lengths[edge_indices[0]])
            # Rename the removed edge by the kept edge name in pix_coord:
            self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_names[1 - kept_edge], 2] = edge_names[kept_edge]
            # Add the removed edge length to the kept edge length (minus 1: the shared vertex pixel is counted in both lengths)
            self.edge_lengths[self.edges_labels[:, 0] == edge_names[kept_edge]] += self.edge_lengths[self.edges_labels[:, 0] == edge_names[1 - kept_edge]] - 1
            # Remove the corresponding edge length from the list
            self.edge_lengths = self.edge_lengths[self.edges_labels[:, 0] != edge_names[1 - kept_edge]]
            # Rename the vertex of the kept edge in edges_labels
            self.edges_labels[self.edges_labels[:, 0] == edge_names[kept_edge], 1:] = v_names[1 - kept_edge], v_names[kept_edge]
            # Remove the removed edge from the edges_labels array
            self.edges_labels = self.edges_labels[self.edges_labels[:, 0] != edge_names[1 - kept_edge], :]
            # vY, vX = np.nonzero(self.numbered_vertices == vertex2)
            # v_idx = np.nonzero(np.all(self.non_tip_vertices == [vY[0], vX[0]], axis=1))
            vY, vX = self.vertex_index_map[vertex2]
            v_idx = np.nonzero(np.all(self.non_tip_vertices == [vY, vX], axis=1))
            self.non_tip_vertices = np.delete(self.non_tip_vertices, v_idx, axis=0)
    # Sometimes, clearing vertices connecting 2 edges can create edge duplicates, so:
    self.clear_edge_duplicates()
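The merge itself boils down to fusing the two incident edges and summing their lengths minus one, since the shared vertex pixel is counted in both. A simplified sketch on plain Python lists (hypothetical names; the real method works on the NumPy edge arrays):

```python
def merge_through_degree2_vertex(edges, lengths, vertex):
    """Fuse the two edges meeting at a degree-2 vertex.

    edges   : list of (v_start, v_end) tuples
    lengths : list of edge lengths (pixel counts, endpoints included)
    vertex  : the degree-2 vertex to eliminate
    Returns updated (edges, lengths) with the two incident edges fused.
    """
    idx = [i for i, (a, b) in enumerate(edges) if vertex in (a, b)]
    assert len(idx) == 2, "vertex must connect exactly two edges"
    i, j = idx
    # Outer endpoints of the fused edge (drop the shared vertex)
    ends = [v for e in (edges[i], edges[j]) for v in e if v != vertex]
    # The shared vertex pixel appears in both lengths; keep it once
    fused_length = lengths[i] + lengths[j] - 1
    edges = [e for k, e in enumerate(edges) if k not in idx] + [tuple(ends)]
    lengths = [l for k, l in enumerate(lengths) if k not in idx] + [fused_length]
    return edges, lengths
```

For example, fusing the chain 1-2-3 (lengths 5 and 4) through vertex 2 yields a single edge (1, 3) of length 8.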

get_tipped_edges()

Extract skeleton edges connecting branching points and tips.

Ensures that the skeleton of the network consists of a single connected component, then identifies all edges that are connected to a tip.

Attributes:

Name Type Description
pad_skeleton ndarray of bool, modified

Boolean mask representing the pruned skeleton after isolating the largest connected component.

vertices_branching_tips ndarray of int, shape (N, 2)

Coordinates of branching points that connect to tips in the skeleton structure.

edge_lengths ndarray of float, shape (M,)

Lengths of edges connecting non-tip vertices to identified tip locations.

edge_pix_coord list of array of int

Pixel coordinates for each edge path between connected skeleton elements.

Source code in src/cellects/image_analysis/network_functions.py
def get_tipped_edges(self):
    """
    Extract skeleton edges connecting branching points and tips.

    Ensures that the skeleton of the network consists of a single connected
    component, then identifies all edges that are connected to a tip.

    Attributes
    ----------
    pad_skeleton : ndarray of bool, modified
        Boolean mask representing the pruned skeleton after isolating the largest connected component.
    vertices_branching_tips : ndarray of int, shape (N, 2)
        Coordinates of branching points that connect to tips in the skeleton structure.
    edge_lengths : ndarray of float, shape (M,)
        Lengths of edges connecting non-tip vertices to identified tip locations.
    edge_pix_coord : list of array of int
        Pixel coordinates for each edge path between connected skeleton elements.

    """
    self.pad_skeleton = keep_one_connected_component(self.pad_skeleton)
    self.vertices_branching_tips, self.edge_lengths, self.edge_pix_coord = _find_closest_vertices(self.pad_skeleton,
                                                                                    self.non_tip_vertices,
                                                                                    self.tips_coord[:, :2])

get_vertices_and_tips_coord()

Process skeleton data to extract non-tip vertices and tip coordinates.

This method processes the skeleton stored in self.pad_skeleton by first extracting all vertices and tips. It then separates these into branch points (non-tip vertices) and specific tip coordinates using internal processing.

Attributes:

Name Type Description
self.non_tip_vertices array-like

Coordinates of non-tip (branch) vertices.

self.tips_coord array-like

Coordinates of identified tips in the skeleton.

Source code in src/cellects/image_analysis/network_functions.py
def get_vertices_and_tips_coord(self):
    """Process skeleton data to extract non-tip vertices and tip coordinates.

    This method processes the skeleton stored in `self.pad_skeleton` by first
    extracting all vertices and tips. It then separates these into branch points
    (non-tip vertices) and specific tip coordinates using internal processing.

    Attributes
    ----------
    self.non_tip_vertices : array-like
        Coordinates of non-tip (branch) vertices.
    self.tips_coord : array-like
        Coordinates of identified tips in the skeleton.
    """
    pad_vertices, pad_tips = get_vertices_and_tips_from_skeleton(self.pad_skeleton)
    self.non_tip_vertices, self.tips_coord = get_branches_and_tips_coord(pad_vertices, pad_tips)
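The underlying classification counts 8-connected skeleton neighbours: a pixel with one neighbour is a tip, one with three or more is a branch point. A minimal NumPy sketch (hypothetical function name); note how adjacent branch pixels over-trigger near a junction, producing exactly the touching vertex clusters that the later methods resolve:

```python
import numpy as np

def classify_skeleton_pixels(skeleton):
    """Split skeleton pixels into tips (1 neighbour) and branch points
    (>= 3 neighbours) by counting 8-connected skeleton neighbours.

    Returns (tips_coord, branch_coord) as (n, 2) arrays of [y, x].
    """
    sk = (skeleton > 0).astype(np.uint8)
    padded = np.pad(sk, 1)
    # Sum the 8 neighbours of every pixel via shifted views of the padded array
    neighbours = sum(
        padded[1 + dy:padded.shape[0] - 1 + dy, 1 + dx:padded.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    tips = np.argwhere((sk == 1) & (neighbours == 1))
    branches = np.argwhere((sk == 1) & (neighbours >= 3))
    return tips, branches
```

On a 5x5 plus-shaped skeleton this yields four tips but five branch pixels: the centre and its four neighbours all see three or more neighbours, forming one vertex cluster.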

label_edges_connected_with_vertex_clusters()

Identify edges connected to touching vertices by processing vertex clusters.

This function processes the skeleton to identify edges connecting vertices that are part of touching clusters. It creates a cropped version of the skeleton by removing already detected edges and their tips, then iterates through vertex clusters to explore and identify nearby edges.

Source code in src/cellects/image_analysis/network_functions.py
def label_edges_connected_with_vertex_clusters(self):
    """
    Identify edges connected to touching vertices by processing vertex clusters.

    This function processes the skeleton to identify edges connecting vertices
    that are part of touching clusters. It creates a cropped version of the skeleton
    by removing already detected edges and their tips, then iterates through vertex
    clusters to explore and identify nearby edges.
    """
    # I.1. Identify edges connected to touching vertices:
    # First, create another version of these arrays, where we remove every already detected edge and their tips
    cropped_skeleton = self.pad_skeleton.copy()
    cropped_skeleton[self.edge_pix_coord[:, 0], self.edge_pix_coord[:, 1]] = 0
    cropped_skeleton[self.tips_coord[:, 0], self.tips_coord[:, 1]] = 0

    # non_tip_vertices does not need to be updated yet, because it only contains verified branching vertices
    cropped_non_tip_vertices = self.non_tip_vertices.copy()

    self.new_level_vertices = None
    # The problem with vertex_to_vertex_connexions is that, since they are not separated by zeros,
    # they always attract each other instead of exploring other paths.
    # To fix this, we loop over each vertex group to:
    # 1. Add one edge per inter-vertex connexion inside the group
    # 2. Remove all except one, and loop as many times as necessary.
    # Inside that second loop, we explore and identify every edge nearby.
    # Find every vertex_to_vertex_connexion
    v_cluster_nb, self.v_cluster_lab, self.v_cluster_stats, vgc = cv2.connectedComponentsWithStats(
        (self.numbered_vertices > 0).astype(np.uint8), connectivity=8)
    if v_cluster_nb > 0:
        max_v_nb = np.max(self.v_cluster_stats[1:, 4])
        cropped_skeleton_list = []
        starting_vertices_list = []
        for v_nb in range(2, max_v_nb + 1):
            labels = np.nonzero(self.v_cluster_stats[:, 4] == v_nb)[0]
            coord_list = []
            for lab in labels:  # lab=labels[0]
                coord_list.append(np.nonzero(self.v_cluster_lab == lab))
            for iter in range(v_nb):
                for lab_ in range(labels.shape[0]): # lab=labels[0]
                    cs = cropped_skeleton.copy()
                    sv = []
                    v_c = coord_list[lab_]
                    # Save the current coordinate in the starting vertices array of this iteration
                    sv.append([v_c[0][iter], v_c[1][iter]])
                    # Remove one vertex coordinate to keep it from cs
                    v_y, v_x = np.delete(v_c[0], iter), np.delete(v_c[1], iter)
                    cs[v_y, v_x] = 0
                    cropped_skeleton_list.append(cs)
                    starting_vertices_list.append(np.array(sv))
        for cropped_skeleton, starting_vertices in zip(cropped_skeleton_list, starting_vertices_list):
            _, _ = self._identify_edges_connecting_a_vertex_list(cropped_skeleton, cropped_non_tip_vertices, starting_vertices)

label_edges_connecting_vertex_clusters()

Label edges connecting vertex clusters.

This method identifies the connections between connected vertices within vertex clusters and labels these edges. It uses the previously found connected vertices, creates an image of the connections, and then identifies and labels the edges between these touching vertices.

Source code in src/cellects/image_analysis/network_functions.py
def label_edges_connecting_vertex_clusters(self):
    """
    Label edges connecting vertex clusters.

    This method identifies the connections between connected vertices within
    vertex clusters and labels these edges. It uses the previously found connected
    vertices, creates an image of the connections, and then identifies
    and labels the edges between these touching vertices.
    """
    # I.2. Identify the connexions between connected vertices:
    all_connected_vertices = np.nonzero(self.v_cluster_stats[:, 4] > 1)[0][1:]
    all_con_v_im = np.zeros_like(self.pad_skeleton)
    for v_group in all_connected_vertices:
        all_con_v_im[self.v_cluster_lab == v_group] = 1
    cropped_skeleton = all_con_v_im
    self.vertex_clusters_coord = np.transpose(np.array(np.nonzero(cropped_skeleton)))
    _, _ = self._identify_edges_connecting_a_vertex_list(cropped_skeleton, self.vertex_clusters_coord, self.vertex_clusters_coord)
    # self.edges_labels
    del self.v_cluster_stats
    del self.v_cluster_lab

label_edges_from_known_vertices_iteratively()

Label edges iteratively from known vertices.

This method labels edges in an iterative process starting from known vertices. It handles the removal of detected edges and updates the skeleton accordingly, to avoid detecting edges twice.

Source code in src/cellects/image_analysis/network_functions.py
def label_edges_from_known_vertices_iteratively(self):
    """
    Label edges iteratively from known vertices.

    This method labels edges in an iterative process starting from
    known vertices. It handles the removal of detected edges and
    updates the skeleton accordingly, to avoid detecting edges twice.
    """
    # II/ Identify all remaining edges
    if self.new_level_vertices is not None:
        starting_vertices_coord = np.vstack((self.new_level_vertices[:, :2], self.vertices_branching_tips))
        starting_vertices_coord = np.unique(starting_vertices_coord, axis=0)
    else:
        # We start from the vertices connecting tips
        starting_vertices_coord = self.vertices_branching_tips.copy()
    # Remove the detected edges from cropped_skeleton:
    cropped_skeleton = self.pad_skeleton.copy()
    cropped_skeleton[self.edge_pix_coord[:, 0], self.edge_pix_coord[:, 1]] = 0
    cropped_skeleton[self.tips_coord[:, 0], self.tips_coord[:, 1]] = 0
    cropped_skeleton[self.vertex_clusters_coord[:, 0], self.vertex_clusters_coord[:, 1]] = 0

    # Reinitialize cropped_non_tip_vertices to browse all vertices except tips and groups
    cropped_non_tip_vertices = self.non_tip_vertices.copy()
    cropped_non_tip_vertices = remove_coordinates(cropped_non_tip_vertices, self.vertex_clusters_coord)
    del self.vertex_clusters_coord
    remaining_v = cropped_non_tip_vertices.shape[0] + 1
    while remaining_v > cropped_non_tip_vertices.shape[0]:
        remaining_v = cropped_non_tip_vertices.shape[0]
        cropped_skeleton, cropped_non_tip_vertices = self._identify_edges_connecting_a_vertex_list(cropped_skeleton, cropped_non_tip_vertices, starting_vertices_coord)
        if self.new_level_vertices is not None:
            starting_vertices_coord = np.unique(self.new_level_vertices[:, :2], axis=0)
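The `while remaining_v > ...` loop above is a fixed-point iteration: it repeats until a full pass stops consuming vertices. The pattern in isolation, with a toy `step` function standing in for `_identify_edges_connecting_a_vertex_list`:

```python
def iterate_until_stable(items, step):
    # Repeat `step` until a full pass removes nothing (a fixed point),
    # mirroring the `while remaining_v > cropped_non_tip_vertices.shape[0]` loop.
    remaining = len(items) + 1
    while remaining > len(items):
        remaining = len(items)
        items = step(items)
    return items

# Toy step: drop values above a threshold; the loop exits once a pass changes nothing
result = iterate_until_stable([1, 5, 2, 7, 3], lambda xs: [x for x in xs if x <= 3])
```

Initializing `remaining` to `len(items) + 1` guarantees the loop body runs at least once.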

label_edges_looping_on_1_vertex()

Identify and handle edges that form loops around a single vertex. This method processes the skeleton image to find looping edges and updates the edge data structure accordingly.

Source code in src/cellects/image_analysis/network_functions.py
def label_edges_looping_on_1_vertex(self):
    """
    Identify and handle edges that form loops around a single vertex.
    This method processes the skeleton image to find looping edges and updates
    the edge data structure accordingly.
    """
    self.identified = np.zeros_like(self.numbered_vertices)
    self.identified[self.edge_pix_coord[:, 0], self.edge_pix_coord[:, 1]] = 1
    self.identified[self.non_tip_vertices[:, 0], self.non_tip_vertices[:, 1]] = 1
    self.identified[self.tips_coord[:, 0], self.tips_coord[:, 1]] = 1
    unidentified = (1 - self.identified) * self.pad_skeleton

    # Find out the remaining non-identified pixels
    nb, self.unidentified_shapes, self.unidentified_stats, ce = cv2.connectedComponentsWithStats(unidentified.astype(np.uint8))
    # Handle the cases where edges are loops over only one vertex
    looping_edges = np.nonzero(self.unidentified_stats[:, 4] > 2)[0][1:]
    for loop_i in looping_edges:
        edge_i = (self.unidentified_shapes == loop_i).astype(np.uint8)
        dil_edge_i = cv2.dilate(edge_i, square_33)
        unique_vertices_im = self.numbered_vertices.copy()
        unique_vertices_im[self.tips_coord[:, 0], self.tips_coord[:, 1]] = 0
        unique_vertices_im = dil_edge_i * unique_vertices_im
        unique_vertices = np.unique(unique_vertices_im)
        unique_vertices = unique_vertices[unique_vertices > 0]
        v_nb = len(unique_vertices)
        new_edge_lengths = edge_i.sum()
        new_edge_pix_coord = np.transpose(np.vstack((np.nonzero(edge_i))))
        new_edge_pix_coord = np.hstack((new_edge_pix_coord, np.repeat(1, new_edge_pix_coord.shape[0])[:, None]))
        if v_nb == 1:
            start, end = unique_vertices[0], unique_vertices[0]
            self._update_edge_data(start, end, new_edge_lengths, new_edge_pix_coord)
        elif v_nb == 2:
            # The edge loops around a group of connected vertices
            start, end = unique_vertices[0], unique_vertices[1]
            self._update_edge_data(start, end, new_edge_lengths, new_edge_pix_coord)
            # conn_v_nb, conn_v = cv2.connectedComponents((unique_vertices_im > 0).astype(np.uint8))
            # if len(unique_vertices) == 2 and conn_v_nb == 2:
        elif v_nb > 2: # The question is: How to choose two vertices so that they link all missing pixels?
            # 1. Find every edge pixel connected to these vertices
            vertex_connected_pixels = np.nonzero(cv2.dilate((unique_vertices_im > 0).astype(np.uint8), square_33) * edge_i)
            # 2. Find the most distant pair of these
            pix1, pix2 = get_min_or_max_euclidean_pair(vertex_connected_pixels, "max")
            # 3. The two best vertices are the two nearest to these two most distant edge pixels
            dist_to_pix1 = np.zeros(v_nb, np.float64)
            dist_to_pix2 = np.zeros(v_nb, np.float64)
            for _i, v_i in enumerate(unique_vertices):
                v_coord = self.vertex_index_map[v_i]
                dist_to_pix1[_i] = eudist(pix1, v_coord)
                dist_to_pix2[_i] = eudist(pix2, v_coord)
            start, end = unique_vertices[np.argmin(dist_to_pix1)], unique_vertices[np.argmin(dist_to_pix2)]
            self._update_edge_data(start, end, new_edge_lengths, new_edge_pix_coord)
        else:
            logging.error(f"t={self.t}, One long edge is not identified: i={loop_i} of length={edge_i.sum()} close to {len(unique_vertices)} vertices.")
    self.identified[self.edge_pix_coord[:, 0], self.edge_pix_coord[:, 1]] = 1
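Step 2 above relies on `get_min_or_max_euclidean_pair`, whose source is not shown here. A brute-force sketch of the same idea, acceptable for the handful of pixels in one edge (the helper name and coordinates below are illustrative):

```python
import numpy as np

def max_euclidean_pair(coords):
    # Brute-force all pairwise distances; fine for the few pixels of one edge
    coords = np.asarray(coords, dtype=float)
    diffs = coords[:, None, :] - coords[None, :, :]
    d = np.sqrt((diffs ** 2).sum(-1))
    i, j = np.unravel_index(np.argmax(d), d.shape)
    return coords[i], coords[j]

p1, p2 = max_euclidean_pair([(0, 0), (1, 1), (4, 3)])
```

For large point sets one would compute this on the convex hull instead, but edge pixel counts here stay small.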

label_tipped_edges_and_their_vertices()

Label edges connecting tip vertices to branching vertices and assign unique labels to all relevant vertices.

Processes vertex coordinates by stacking tips, vertices branching from tips, and remaining non-tip vertices. Assigns unique sequential identifiers to these vertices in a new array. Constructs an array of edge-label information, where each row contains the edge label (starting at 1), corresponding tip label, and connected vertex label.

Attributes:

`tip_number` (int): The number of tip coordinates available in `tips_coord`.

`ordered_v_coord` (ndarray of float): Stack of unique vertex coordinates ordered by: tips first, vertices branching tips second, non-tip vertices third.

`numbered_vertices` (ndarray of uint32): 2D array where each coordinate position is labeled with a sequential integer (starting at 1) based on the order in `ordered_v_coord`.

`edges_labels` (ndarray of uint32): Array of shape (n_edges, 3); each row contains the edge label (sequential from 1 to n_edges), the label of the tip vertex, and the label of the vertex branching the tip.

`vertices_branching_tips` (ndarray of float): Unique coordinates of vertices directly connected to tips after removing duplicates.

Source code in src/cellects/image_analysis/network_functions.py
def label_tipped_edges_and_their_vertices(self):
    """Label edges connecting tip vertices to branching vertices and assign unique labels to all relevant vertices.

    Processes vertex coordinates by stacking tips, vertices branching from tips, and remaining non-tip vertices.
    Assigns unique sequential identifiers to these vertices in a new array. Constructs an array of edge-label information,
    where each row contains the edge label (starting at 1), corresponding tip label, and connected vertex label.

    Attributes
    ----------
    tip_number : int
        The number of tip coordinates available in `tips_coord`.

    ordered_v_coord : ndarray of float
        Stack of unique vertex coordinates ordered by: tips first, vertices branching tips second, non-tip vertices third.

    numbered_vertices : ndarray of uint32
        2D array where each coordinate position is labeled with a sequential integer (starting at 1) based on the order in `ordered_v_coord`.

    edges_labels : ndarray of uint32
        Array of shape (n_edges, 3). Each row contains:
        - Edge label (sequential from 1 to n_edges)
        - Label of the tip vertex for that edge.
        - Label of the vertex branching the tip.

    vertices_branching_tips : ndarray of float
        Unique coordinates of vertices directly connected to tips after removing duplicates.
    """
    self.tip_number = self.tips_coord.shape[0]

    # Stack vertex coordinates in that order: 1. Tips, 2. Vertices branching tips, 3. All remaining vertices
    ordered_v_coord = np.vstack((self.tips_coord[:, :2], self.vertices_branching_tips[:, :2], self.non_tip_vertices))
    ordered_v_coord = np.unique(ordered_v_coord, axis=0)

    # Create arrays to store edges and vertices labels
    self.numbered_vertices = np.zeros(self.im_shape, dtype=np.uint32)
    self.numbered_vertices[ordered_v_coord[:, 0], ordered_v_coord[:, 1]] = np.arange(1, ordered_v_coord.shape[0] + 1)
    self.vertices = None
    self.vertex_index_map = {}
    for idx, (y, x) in enumerate(ordered_v_coord):
        self.vertex_index_map[idx + 1] = tuple((np.uint32(y), np.uint32(x)))

    # Name edges from 1 to the number of edges connecting tips and set the vertices labels from all tips to their connected vertices:
    self.edges_labels = np.zeros((self.tip_number, 3), dtype=np.uint32)
    # edge label:
    self.edges_labels[:, 0] = np.arange(self.tip_number) + 1
    # tip label:
    self.edges_labels[:, 1] = self.numbered_vertices[self.tips_coord[:, 0], self.tips_coord[:, 1]]
    # vertex branching tip label:
    self.edges_labels[:, 2] = self.numbered_vertices[self.vertices_branching_tips[:, 0], self.vertices_branching_tips[:, 1]]

    # Remove duplicates in vertices_branching_tips
    self.vertices_branching_tips = np.unique(self.vertices_branching_tips[:, :2], axis=0)
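The vertex-numbering trick used in this method (writing `np.arange(1, n + 1)` into an image at the ordered coordinates) can be illustrated on a toy grid; the coordinates below are hypothetical:

```python
import numpy as np

coords = np.array([[0, 1],
                   [2, 2],
                   [1, 0]])  # (y, x) pairs, assumed unique
numbered = np.zeros((3, 3), dtype=np.uint32)
# Each vertex pixel receives its 1-based rank in the coordinate order,
# so the label doubles as an index into the ordered coordinate list
numbered[coords[:, 0], coords[:, 1]] = np.arange(1, coords.shape[0] + 1)
```

Because labels start at 1, the background value 0 never collides with a vertex id.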

make_edge_table(greyscale, compute_BC=False)

Generate edge table with length and average intensity information.

This method processes the vertex coordinates, calculates lengths between vertices for each edge, and computes the average width and intensity along the edges. When `compute_BC` is True, it also computes edge betweenness centrality for each edge.

Parameters:

`greyscale` (ndarray of uint8, required): Grayscale image.

`compute_BC` (bool, default False): Whether to compute edge betweenness centrality.
Source code in src/cellects/image_analysis/network_functions.py
def make_edge_table(self, greyscale: NDArray[np.uint8], compute_BC: bool=False):
    """
    Generate edge table with length and average intensity information.

    This method processes the vertex coordinates, calculates lengths
    between vertices for each edge, and computes average width and intensity
    along the edges. When `compute_BC` is True, it also computes edge
    betweenness centrality for each edge.

    Parameters
    ----------
    greyscale : ndarray of uint8
        Grayscale image.
    compute_BC : bool, optional
        Whether to compute edge betweenness centrality. Default is False.
    """
    if self.vertices is None:
        self._remove_padding()
    self.edge_table = np.zeros((self.edges_labels.shape[0], 7), float) # edge_id, vertex1, vertex2, length, average_width, int, BC
    self.edge_table[:, :3] = self.edges_labels[:, :]
    self.edge_table[:, 3] = self.edge_lengths
    for idx, edge_lab in enumerate(self.edges_labels[:, 0]):
        edge_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_lab, :]
        pix_widths = self.distances[edge_coord[:, 0], edge_coord[:, 1]]
        v_id = self.edges_labels[self.edges_labels[:, 0] == edge_lab, 1:][0]
        v1_coord = self.vertex_table[self.vertex_table[:, 2] == v_id[0], :2][0]
        v2_coord = self.vertex_table[self.vertex_table[:, 2] == v_id[1], :2][0]
        v1_width, v2_width = self.distances[v1_coord[0], v1_coord[1]], self.distances[v2_coord[0], v2_coord[1]]

        if not np.isnan(v1_width):
            pix_widths = np.append(pix_widths, v1_width)
        if not np.isnan(v2_width):
            pix_widths = np.append(pix_widths, v2_width)
        if pix_widths.size > 0:
            self.edge_table[idx, 4] = pix_widths.mean()
        else:
            self.edge_table[idx, 4] = np.nan
        pix_ints = greyscale[edge_coord[:, 0], edge_coord[:, 1]]
        v1_int, v2_int = greyscale[v1_coord[0], v1_coord[1]], greyscale[v2_coord[0], v2_coord[1]]
        pix_ints = np.append(pix_ints, (v1_int, v2_int))
        self.edge_table[idx, 5] = pix_ints.mean()

    if compute_BC:
        G = nx.from_edgelist(self.edges_labels[:, 1:])
        e_BC = nx.edge_betweenness_centrality(G, seed=0)
        self.BC_net = np.zeros_like(self.distances)
        for v, k in e_BC.items():
            v_bool = np.logical_or(self.vertex_table[:, 2] == v[0], self.vertex_table[:, 2] == v[1])
            full_coord = self.vertex_table[v_bool, :2]
            edges_bool = np.logical_or(np.all(self.edges_labels[:, 1:] == v[::-1], axis=1),
                                       np.all(self.edges_labels[:, 1:] == v, axis=1))
            edge_labs = self.edges_labels[edges_bool, 0]
            if len(edge_labs) == 1:
                edge_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_labs, :2]
                full_coord = np.concatenate((full_coord, edge_coord))
                self.BC_net[full_coord[:, 0], full_coord[:, 1]] = k
                self.edge_table[self.edge_table[:, 0] == edge_labs, 6] = k
            elif len(edge_labs) > 1:
                edge_coord0 = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_labs[0], :2]
                for edge_i in range(len(edge_labs)):
                    edge_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_labs[edge_i], :2]
                    self.edge_table[self.edge_table[:, 0] == edge_labs[edge_i], 6] = k
                    full_coord = np.concatenate((full_coord, edge_coord))
                    self.BC_net[full_coord[:, 0], full_coord[:, 1]] = k
                    if edge_i > 0 and np.array_equal(edge_coord0, edge_coord):
                        logging.error(f"There are still two identical edges: {edge_labs} of len: {len(edge_coord)} linking vertices {v}")
                        break
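The optional betweenness step builds a graph from the `(vertex1, vertex2)` label columns and scores each edge by the fraction of shortest paths crossing it. A minimal sketch on a hypothetical 4-vertex graph, using the same `networkx` call as the source:

```python
import networkx as nx

# Hypothetical edge list: (vertex1, vertex2) label pairs, cf. edges_labels[:, 1:]
G = nx.from_edgelist([(1, 2), (2, 3), (3, 4), (2, 4)])
e_BC = nx.edge_betweenness_centrality(G, seed=0)
# Edge (1, 2) carries every shortest path to and from vertex 1,
# so it receives the highest centrality in this toy graph
```

Scores are normalized by the number of vertex pairs, so they are comparable across graphs of different sizes.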

make_vertex_table(origin_contours=None, growing_areas=None)

Generate a table for the vertices.

This method constructs and returns a 2D NumPy array holding information about all vertices. Each row corresponds to one vertex identified either by its coordinates in self.tips_coord or self.non_tip_vertices. The array includes additional information about each vertex: whether it is a tip, lies within food (origin) contours or growing areas, and whether it belongs to a cluster of connected vertices.

Parameters:

`origin_contours` (ndarray of uint8, optional): Binary map to identify food vertices. Default is `None`.

`growing_areas` (ndarray, optional): Binary map to identify growing regions. Default is `None`.
Notes
The method updates the instance attribute `self.vertex_table` with
the generated vertex information.
Source code in src/cellects/image_analysis/network_functions.py
def make_vertex_table(self, origin_contours: NDArray[np.uint8]=None, growing_areas: NDArray=None):
    """
    Generate a table for the vertices.

    This method constructs and returns a 2D NumPy array holding information
    about all vertices. Each row corresponds to one vertex identified either
    by its coordinates in `self.tips_coord` or `self.non_tip_vertices`. The
    array includes additional information about each vertex: whether it is a
    tip, lies within food (origin) contours or growing areas, and whether it
    belongs to a cluster of connected vertices.

    Parameters
    ----------
    origin_contours : ndarray of uint8, optional
        Binary map to identify food vertices. Default is `None`.
    growing_areas : ndarray, optional
        Binary map to identify growing regions. Default is `None`.

    Notes
    -----
    The method updates the instance attribute `self.vertex_table` with
    the generated vertex information.
    """
    if self.vertices is None:
        self._remove_padding()
    self.vertex_table = np.zeros((self.tips_coord.shape[0] + self.non_tip_vertices.shape[0], 6), dtype=self.vertices.dtype)
    self.vertex_table[:self.tips_coord.shape[0], :2] = self.tips_coord
    self.vertex_table[self.tips_coord.shape[0]:, :2] = self.non_tip_vertices
    self.vertex_table[:self.tips_coord.shape[0], 2] = self.vertices[self.tips_coord[:, 0], self.tips_coord[:, 1]]
    self.vertex_table[self.tips_coord.shape[0]:, 2] = self.vertices[self.non_tip_vertices[:, 0], self.non_tip_vertices[:, 1]]
    self.vertex_table[:self.tips_coord.shape[0], 3] = 1
    if origin_contours is not None:
        food_vertices = self.vertices[origin_contours > 0]
        food_vertices = food_vertices[food_vertices > 0]
        self.vertex_table[np.isin(self.vertex_table[:, 2], food_vertices), 4] = 1

    if growing_areas is not None and growing_areas.shape[1] > 0:
        # growing = np.unique(self.vertices * growing_areas)[1:]
        growing = np.unique(self.vertices[growing_areas[0], growing_areas[1]])
        growing = growing[growing > 0]
        if len(growing) > 0:
            growing = np.isin(self.vertex_table[:, 2], growing)
            self.vertex_table[growing, 4] = 2

    nb, sh, stats, cent = cv2.connectedComponentsWithStats((self.vertices > 0).astype(np.uint8))
    for i, v_i in enumerate(np.nonzero(stats[:, 4] > 1)[0][1:]):
        v_labs = self.vertices[sh == v_i]
        for v_lab in v_labs:
            self.vertex_table[self.vertex_table[:, 2] == v_lab, 5] = 1
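The food- and growing-vertex flags are set by matching a label column with `np.isin`. A toy sketch of the same row-flagging pattern on a hypothetical table (column indices follow the source: column 2 holds vertex labels, column 4 the food flag):

```python
import numpy as np

# Hypothetical vertex table: column 2 holds vertex labels, column 4 a flag
table = np.zeros((4, 6), dtype=np.int64)
table[:, 2] = [5, 7, 9, 11]
food_labels = np.array([7, 11])  # labels found under the origin contours
# Flag every row whose vertex label appears in food_labels
table[np.isin(table[:, 2], food_labels), 4] = 1
```

`np.isin` returns a boolean mask over rows, so the assignment touches only matching vertices.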

remove_tipped_edge_smaller_than_branch_width()

Remove very short edges from the skeleton.

This method focuses on edges connecting tips. When too short, they are considered noise and removed from the skeleton and distance matrices. An edge is considered too short when its length is smaller than the width of the nearest network branch (information encoded in pad_distances). This method also updates internal data structures (skeleton, edge coordinates, vertex/tip positions) accordingly through pixel-wise analysis and connectivity checks.

Source code in src/cellects/image_analysis/network_functions.py
def remove_tipped_edge_smaller_than_branch_width(self):
    """Remove very short edges from the skeleton.

    This method focuses on edges connecting tips. When too short, they are considered noise and
    removed from the skeleton and distance matrices. An edge is considered too short when its length is
    smaller than the width of the nearest network branch (information encoded in pad_distances).
    This method also updates internal data structures (skeleton, edge coordinates, vertex/tip positions)
    accordingly through pixel-wise analysis and connectivity checks.
    """
    # Identify edges that are smaller than the width of the branch it is attached to
    tipped_edges_to_remove = np.zeros(self.edge_lengths.shape[0], dtype=bool)
    # connecting_vertices_to_remove = np.zeros(self.vertices_branching_tips.shape[0], dtype=bool)
    branches_to_remove = np.zeros(self.non_tip_vertices.shape[0], dtype=bool)
    new_edge_pix_coord = []
    remaining_tipped_edges_nb = 0
    for i in range(len(self.edge_lengths)):
        Y, X = self.vertices_branching_tips[i, 0], self.vertices_branching_tips[i, 1]
        edge_bool = self.edge_pix_coord[:, 2] == i + 1
        eY, eX = self.edge_pix_coord[edge_bool, 0], self.edge_pix_coord[edge_bool, 1]
        if np.nanmax(self.pad_distances[(Y - 1): (Y + 2), (X - 1): (X + 2)]) >= self.edge_lengths[i]:
            tipped_edges_to_remove[i] = True
            # Remove the edge
            self.pad_skeleton[eY, eX] = 0
            # Remove the tip
            self.pad_skeleton[self.tips_coord[i, 0], self.tips_coord[i, 1]] = 0

            # Remove the coordinates corresponding to that edge
            self.edge_pix_coord = np.delete(self.edge_pix_coord, edge_bool, 0)

            # Check whether the connecting vertex remains a vertex or not
            pad_sub_skeleton = np.pad(self.pad_skeleton[(Y - 2): (Y + 3), (X - 2): (X + 3)], [(1,), (1,)],
                                      mode='constant')
            sub_vertices, sub_tips = get_vertices_and_tips_from_skeleton(pad_sub_skeleton)
            # If the vertex does not connect at least 3 edges anymore, remove its vertex label
            if sub_vertices[3, 3] == 0:
                vertex_to_remove = np.nonzero(np.logical_and(self.non_tip_vertices[:, 0] == Y, self.non_tip_vertices[:, 1] == X))[0]
                branches_to_remove[vertex_to_remove] = True
            # If that pixel became a tip connected to another vertex, remove it from the skeleton
            if sub_tips[3, 3]:
                if sub_vertices[2:5, 2:5].sum() > 1:
                    self.pad_skeleton[Y, X] = 0
                    self.edge_pix_coord = np.delete(self.edge_pix_coord, np.all(self.edge_pix_coord[:, :2] == [Y, X], axis=1), 0)
                    vertex_to_remove = np.nonzero(np.logical_and(self.non_tip_vertices[:, 0] == Y, self.non_tip_vertices[:, 1] == X))[0]
                    branches_to_remove[vertex_to_remove] = True
        else:
            remaining_tipped_edges_nb += 1
            new_edge_pix_coord.append(np.stack((eY, eX, np.repeat(remaining_tipped_edges_nb, len(eY))), axis=1))

    # Check whether excess connected components are 1 pixel in size; if so,
    # they were neighbors of removed tips and are not necessary for the skeleton
    nb, sh = cv2.connectedComponents(self.pad_skeleton)
    if nb > 2:
        logging.error("Removing small tipped edges split the skeleton")
        # for i in range(2, nb):
        #     excedent = sh == i
        #     if (excedent).sum() == 1:
        #         self.pad_skeleton[excedent] = 0

    # Remove in distances the pixels removed in skeleton:
    self.pad_distances *= self.pad_skeleton

    # update edge_pix_coord
    if len(new_edge_pix_coord) > 0:
        self.edge_pix_coord = np.vstack(new_edge_pix_coord)


    self.get_vertices_and_tips_coord()
    self.get_tipped_edges()
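The removal criterion boils down to comparing an edge's length with the maximum of `pad_distances` over the 3x3 neighborhood of its branching vertex. A toy sketch with hypothetical values:

```python
import numpy as np

# Hypothetical distance-transform values (branch widths) around a vertex
distances = np.array([[0., 0., 0., 0.],
                      [0., 3., 2., 0.],
                      [0., 1., 0., 0.],
                      [0., 0., 0., 0.]])
Y, X = 2, 2            # branching vertex position
edge_length = 2        # length of the tipped edge under test
# Widest branch touching the vertex (3x3 neighborhood, as in the source)
local_width = np.nanmax(distances[Y - 1:Y + 2, X - 1:X + 2])
remove_edge = local_width >= edge_length  # True: the edge is shorter than the branch is wide
```

An edge shorter than the branch it sprouts from is almost certainly a skeletonization artifact rather than a real filament.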

run_edge_identification()

Run the edge identification process.

This method orchestrates a series of steps to identify and label edges within the graph structure. Each step handles a specific aspect of edge identification, ultimately leading to a clearer and more refined edge network.

Steps involved:

1. Get vertices and tips coordinates.
2. Identify tipped edges.
3. Remove tipped edges smaller than branch width.
4. Label tipped edges and their vertices.
5. Label edges connected with vertex clusters.
6. Label edges connecting vertex clusters.
7. Label edges from known vertices iteratively.
8. Label edges looping on 1 vertex.
9. Clear areas with 1 or 2 unidentified pixels.
10. Clear edge duplicates.
11. Clear vertices connecting 2 edges.

Source code in src/cellects/image_analysis/network_functions.py
def run_edge_identification(self):
    """
    Run the edge identification process.

    This method orchestrates a series of steps to identify and label edges
    within the graph structure. Each step handles a specific aspect of edge
    identification, ultimately leading to a clearer and more refined edge network.

    Steps involved:
    1. Get vertices and tips coordinates.
    2. Identify tipped edges.
    3. Remove tipped edges smaller than branch width.
    4. Label tipped edges and their vertices.
    5. Label edges connected with vertex clusters.
    6. Label edges connecting vertex clusters.
    7. Label edges from known vertices iteratively.
    8. Label edges looping on 1 vertex.
    9. Clear areas with 1 or 2 unidentified pixels.
    10. Clear edge duplicates.
    11. Clear vertices connecting 2 edges.
    """
    self.get_vertices_and_tips_coord()
    self.get_tipped_edges()
    self.remove_tipped_edge_smaller_than_branch_width()
    self.label_tipped_edges_and_their_vertices()
    self.check_vertex_existence()
    self.label_edges_connected_with_vertex_clusters()
    self.label_edges_connecting_vertex_clusters()
    self.label_edges_from_known_vertices_iteratively()
    self.label_edges_looping_on_1_vertex()
    self.clear_areas_of_1_or_2_unidentified_pixels()
    self.clear_edge_duplicates()
    self.clear_vertices_connecting_2_edges()

MotionAnalysis

Source code in src/cellects/core/motion_analysis.py
class MotionAnalysis:

    def __init__(self, l: list):

        """
        Analyzes motion in a given arena using video data.

        This class processes video frames to analyze motion within a specified area,
        detecting shapes, covering durations, and generating descriptors for further
        analysis.

        Args:
            l (list): A list containing the parameters and flags required for the motion
                analysis, unpacked as follows:
                l[0] (int): Arena index.
                l[1] (str): Arena identifier or name, stored in one_descriptor_per_arena['arena'].
                l[2] (dict): Variables required for the analysis, stored in vars.
                l[3] (bool): Flag to detect shape.
                l[4] (bool): Flag to analyze shape.
                l[5] (bool): Flag to show segmentation.
                l[6] (None or list): Videos already in RAM.

        Attributes:
            vars (dict): Variables required for the analysis.
            visu (None): Placeholder for visualization data.
            binary (None): Placeholder for binary segmentation data.
            origin_idx (None): Placeholder for the index of the first frame.
            smoothing_flag (bool): Flag to indicate if smoothing should be applied.
            dims (tuple): Dimensions of the converted video.
            segmentation (ndarray): Array to store segmentation data.
            covering_intensity (ndarray): Intensity values for covering analysis.
            mean_intensity_per_frame (ndarray): Mean intensity per frame.
            borders (object): Borders of the arena.
            pixel_ring_depth (int): Depth of the pixel ring for analysis, default is 9.
            step (int): Step size for processing, default is 10.
            lost_frames (int): Number of lost frames to account for, default is 10.
            start (None or int): Starting frame index for the analysis.

        Methods:
            load_images_and_videos(videos_already_in_ram, arena_idx): Loads images and videos
                for the specified arena index.
            update_ring_width(): Updates the width of the pixel ring for analysis.
            get_origin_shape(): Detects the origin shape in the video frames.
            get_covering_duration(step): Calculates the covering duration based on a step size.
            detection(): Performs motion detection within the arena.
            initialize_post_processing(): Initializes post-processing steps.
            update_shape(show_seg): Updates the shape based on segmentation and visualization flags.
            get_descriptors_from_binary(): Extracts descriptors from binary data.
            detect_growth_transitions(): Detects growth transitions in the data.
            networks_analysis(show_seg): Detects networks within the arena, optionally
                showing the segmentation.
            study_cytoscillations(show_seg): Studies cytoscillations within the arena,
                optionally showing the segmentation.
            fractal_descriptions(): Generates fractal descriptions of the analyzed data.
            get_descriptors_summary(): Summarizes the descriptors obtained from the analysis.
            save_results(): Saves the results of the analysis.

        """
        self.one_descriptor_per_arena = {}
        self.one_descriptor_per_arena['arena'] = l[1]
        vars = l[2]
        detect_shape = l[3]
        analyse_shape = l[4]
        show_seg = l[5]
        videos_already_in_ram = l[6]
        self.visu = None
        self.binary = None
        self.origin_idx = None
        self.smoothing_flag: bool = False
        self.drift_mask_coord = None
        self.substantial_image = None
        self.segmented = None
        self.covering_intensity = None
        self.gravity_field = None
        self.rays = None
        self.sun = None
        self.holes = None
        self.coord_network = None
        self.vars = vars
        if 'contour_color' not in self.vars:
            self.vars['contour_color']: np.uint8 = 0
        self.load_images_and_videos(videos_already_in_ram, l[0])

        self.segmented = np.zeros(self.dims, dtype=np.uint8)

        self.covering_intensity = np.zeros(self.dims[1:], dtype=np.float64)
        self.mean_intensity_per_frame = np.mean(self.converted_video, (1, 2))

        self.borders = image_borders(self.dims[1:], shape=self.vars['arena_shape'])
        self.pixel_ring_depth = 9
        self.step: int = 10
        self.lost_frames = 10
        self.update_ring_width()

        self.start = None
        if detect_shape:
            self.assess_motion_detection()
            if self.start is not None:
                self.detection()
                self.initialize_post_processing()
                self.t = self.start
                while self.t < self.dims[0]:
                    self.update_shape(show_seg)

            if analyse_shape:
                self.get_descriptors_from_binary()
                self.detect_growth_transitions()
                self.networks_analysis(show_seg)
                self.study_cytoscillations(show_seg)
                self.fractal_descriptions()
                if videos_already_in_ram is None:
                    self.save_results()

    def load_images_and_videos(self, videos_already_in_ram, i: int):
        """

        Load images and videos from disk or RAM.

        Parameters
        ----------
        videos_already_in_ram : numpy.ndarray or None
            Video data that is already loaded into RAM. If `None`, videos will be
            loaded from disk.
        i : int
            Index used to select the origin and background data.

        Notes
        -----
        This method logs information about the arena number and loads necessary data
        from disk or RAM based on whether videos are already in memory. It sets various
        attributes like `self.origin`, `self.background`, and `self.converted_video`.

        """
        if 'crop_coord' in self.vars and self.vars['crop_coord'] is not None:
            crop_top, crop_bot, crop_left, crop_right = self.vars['crop_coord']
        else:
            crop_top, crop_bot, crop_left, crop_right = 0, -1, 0, -1
        if 'arenas_coord' in self.vars:
            top, bot, left, right = self.vars['arenas_coord']
        else:
            top, bot, left, right = [0], [crop_bot], [0], [crop_right]
        frame_height = bot[i] - top[i]
        true_frame_width = right[i] - left[i]

        if true_frame_width == -1:
            true_frame_width = None
        vid_name = None
        if self.vars['video_list'] is not None:
            vid_name = self.vars['video_list'][i]
        self.background = read_h5(f"ind_{self.one_descriptor_per_arena['arena']}.h5", 'background')
        self.background2 = read_h5(f"ind_{self.one_descriptor_per_arena['arena']}.h5", 'background2')
        vids = read_one_arena(self.one_descriptor_per_arena['arena'], self.vars['already_greyscale'],
                              self.vars['convert_for_motion'], videos_already_in_ram, true_frame_width, vid_name,
                              self.background, self.background2)
        self.visu, self.converted_video, self.converted_video2 = vids

        if self.visu is not None:
            self.dims = self.visu.shape[:3]
        else:
            self.dims = self.converted_video.shape[:3]

        # When video(s) need to be sliced (only occurs with datasets originally stored as videos):
        if frame_height != -1 and (self.dims[1] != frame_height or self.dims[2] != true_frame_width):
            if self.visu is not None:
                self.visu = self.visu[:, crop_top:crop_bot, crop_left:crop_right, ...]
                self.visu = self.visu[:, top[i]:bot[i], left[i]:right[i], ...]
            if self.converted_video is not None:
                self.converted_video = self.converted_video[:, crop_top:crop_bot, crop_left:crop_right]
                self.converted_video = self.converted_video[:, top[i]:bot[i], left[i]:right[i]]

        if self.converted_video is None:
            logging.info(
                f"Arena n°{self.one_descriptor_per_arena['arena']}. Convert the RGB visu video into a greyscale image using the color space combination: {self.vars['convert_for_motion']}")
            vids = convert_subtract_and_filter_video(self.visu, self.vars['convert_for_motion'],
                                                     self.background, self.background2,
                                                     self.vars['lose_accuracy_to_save_memory'],
                                                     self.vars['filter_spec'])
            self.converted_video, self.converted_video2 = vids

        self.origin = np.zeros((self.dims[1], self.dims[2]), dtype=np.uint8)
        self.origin_idx = read_h5(f"ind_{self.one_descriptor_per_arena['arena']}.h5", 'origin_coord')
        self.origin[self.origin_idx[0], self.origin_idx[1]] = 1
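The channel-combination conversion is delegated to `convert_subtract_and_filter_video`, which is defined elsewhere. A minimal, self-contained sketch of the underlying idea — collapse each RGB frame with one weight per channel, then stretch the result to the uint8 range — could look like this (the helper name and the luminance-style weights are illustrative, not the library's actual API):

```python
import numpy as np

def convert_video_to_grayscale(video_rgb, weights):
    """Collapse an RGB video of shape (T, H, W, 3) to a grayscale video
    of shape (T, H, W) using one weight per color channel."""
    w = np.asarray(weights, dtype=np.float64)
    # Weighted sum over the last (channel) axis
    gray = np.tensordot(video_rgb.astype(np.float64), w, axes=([3], [0]))
    # Stretch to the full uint8 range to preserve contrast
    gray -= gray.min()
    peak = gray.max()
    if peak == 0:
        peak = 1.0
    return (255 * gray / peak).astype(np.uint8)
```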

    def assess_motion_detection(self):
        """
        Assess whether motion can be detected using the current parameters.

        Validate the specimen(s) detected in the first frame and evaluate roughly how growth occurs during the video.
        """
        # Two conditional layers detect whether an expansion/exploration occurred.
        self.get_origin_shape()
        # The first, user-defined, is 'first_move_threshold'; the second is the detection of the
        # substantial image: if either of them fails, the program considers there is no motion.
        if self.dims[0] >= 40:
            step = self.dims[0] // 20
        else:
            step = 1
        if self.dims[0] == 1 or self.start >= (self.dims[0] - step - 1):
            self.start = None
            self.binary = np.repeat(np.expand_dims(self.origin, 0), self.converted_video.shape[0], axis=0)
        else:
            self.get_covering_duration(step)
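The probing step chosen above is one probe every 5% of the video when there are at least 40 frames, otherwise every frame. As a standalone function (the name `sampling_step` is mine):

```python
def sampling_step(n_frames: int) -> int:
    """Frame step used when probing for substantial growth:
    5% of the video length, floored, but at least one frame."""
    return n_frames // 20 if n_frames >= 40 else 1
```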

    def get_origin_shape(self):
        """
        Determine the origin shape and initialize variables based on the state of the current analysis.

        This method analyzes the initial frame or frames to determine the origin shape
        of an object in a video, initializing necessary variables and matrices for
        further processing.


        Attributes Modified:
            start: (int) Indicates the starting frame index.
            origin_idx: (np.ndarray) The indices of non-zero values in the origin matrix.
            covering_intensity: (np.ndarray) Matrix used for pixel fading intensity.
            substantial_growth: (int) Represents a significant growth measure based on the origin.

        Notes:
            - The method behavior varies if 'origin_state' is set to "constant" or not.
            - If the background is lighter, 'covering_intensity' matrix is initialized.
            - Uses connected components to determine which shape is closest to the center
              or largest, based on 'appearance_detection_method'.
        """
        logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Make sure of origin shape")
        if self.vars['drift_already_corrected']:
            self.drift_mask_coord = np.zeros((self.dims[0], 4), dtype=np.uint32)
            for frame_i in np.arange(self.dims[0]):
                true_pixels = np.nonzero(self.converted_video[frame_i, ...])
                self.drift_mask_coord[frame_i, :] = np.min(true_pixels[0]), np.max(true_pixels[0]) + 1, np.min(true_pixels[1]), np.max(true_pixels[1]) + 1
            if np.all(self.drift_mask_coord[:, 0] == 0) and np.all(self.drift_mask_coord[:, 1] == self.dims[1] - 1) and np.all(
                    self.drift_mask_coord[:, 2] == 0) and np.all(self.drift_mask_coord[:, 3] == self.dims[2] - 1):
                logging.error("Drift correction has been wrongly detected. Images do not contain zero-valued pixels")
                self.vars['drift_already_corrected'] = False
        self.start = 1
        if self.vars['origin_state'] == "invisible":
            self.start += self.vars['first_detection_frame']
            analysisi = self.frame_by_frame_segmentation(self.start, self.origin)
            # Use connected components to find which shape is the nearest to the image center.
            if self.vars['several_blob_per_arena']:
                self.origin = analysisi.binary_image
            else:
                if self.vars['appearance_detection_method'] == 'largest':
                    self.origin = keep_one_connected_component(analysisi.binary_image)
                elif self.vars['appearance_detection_method'] == 'most_central':
                    nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(analysisi.binary_image,
                                                                                               connectivity=8)
                    center = np.array((self.dims[2] // 2, self.dims[1] // 2))
                    center_dists = np.zeros(nb_components - 1)
                    for shape_i in np.arange(1, nb_components):
                        center_dists[shape_i - 1] = eudist(center, centroids[shape_i, :])
                    # The shape with the minimal Euclidean distance to the center is the original shape
                    self.origin = np.zeros((self.dims[1], self.dims[2]), dtype=np.uint8)
                    self.origin[output == (np.argmin(center_dists) + 1)] = 1
        self.origin_idx = np.nonzero(self.origin)
        if self.vars['origin_state'] == "constant":
            if self.vars['lighter_background']:
                # Initialize the covering_intensity matrix as a reference for pixel fading
                self.covering_intensity[self.origin_idx[0], self.origin_idx[1]] = 200
        self.substantial_growth = np.min((1.2 * self.origin.sum(), self.origin.sum() + 250))
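Once OpenCV has labeled the components and returned their centroids, the 'most_central' choice above reduces to an argmin over Euclidean distances to the image center. A sketch with the labeling step factored out (the helper name is mine; the (x, y) centroid convention is assumed from `connectedComponentsWithStats`):

```python
import numpy as np

def most_central_component(centroids, image_shape):
    """Return the index (into `centroids`) of the component whose
    centroid is closest to the image center. `centroids` is an (N, 2)
    array of (x, y) pairs with the background row already removed."""
    height, width = image_shape
    center = np.array([width / 2, height / 2], dtype=np.float64)
    dists = np.linalg.norm(np.asarray(centroids, dtype=np.float64) - center, axis=1)
    return int(np.argmin(dists))
```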

    def get_covering_duration(self, step: int):
        """
        Determine the number of frames necessary for a pixel to get covered.

        This function identifies the time when significant growth or motion occurs
        in a video and calculates the number of frames needed for a pixel to be
        completely covered. It also handles noise and ensures that the calculated
        step value is reasonable.

        Parameters
        ----------
        step : int
            The initial step size for frame analysis.

        Raises
        ------
        Exception
            If an error occurs during the calculation process.

        Notes
        -----
        This function may modify several instance attributes including
        `substantial_time`, `step`, and `start`.
        """
        logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Find a frame with a significant growth/motion and determine the number of frames necessary for a pixel to get covered")
        ## Find the time at which growth reached a substantial growth.
        self.substantial_time = self.start
        # To keep noisy images from having deleterious effects, make sure that the area reaches the threshold three times.
        occurrence = 0
        allowed_window = None
        if self.vars['drift_already_corrected']:
            allowed_window = self.drift_mask_coord[:, 0].max(), self.drift_mask_coord[:, 1].min(), self.drift_mask_coord[:, 2].max(), self.drift_mask_coord[:, 3].min()
        prev_bin_im = self.origin
        while np.logical_and(occurrence < 3, self.substantial_time < (self.dims[0] - step - 1)):
            self.substantial_time += step
            growth_vision = self.frame_by_frame_segmentation(self.substantial_time, prev_bin_im)
            prev_bin_im = growth_vision.binary_image * self.borders
            surfarea = np.sum(prev_bin_im)
            prev_bin_im = np.logical_or(prev_bin_im, self.origin).astype(np.uint8)
            if surfarea > self.substantial_growth:
                occurrence += 1
        # get a rough idea of the area covered during this time
        if (self.substantial_time - self.start) > 20:
            if self.vars['lighter_background']:
                growth = (np.sum(self.converted_video[self.start:(self.start + 10), :, :], 0) / 10) - (np.sum(self.converted_video[(self.substantial_time - 10):self.substantial_time, :, :], 0) / 10)
            else:
                growth = (np.sum(self.converted_video[(self.substantial_time - 10):self.substantial_time, :, :], 0) / 10) - (
                            np.sum(self.converted_video[self.start:(self.start + 10), :, :], 0) / 10)
        else:
            if self.vars['lighter_background']:
                growth = self.converted_video[self.start, ...] - self.converted_video[self.substantial_time, ...]
            else:
                growth = self.converted_video[self.substantial_time, ...] - self.converted_video[self.start, ...]
        intensity_extent = np.ptp(self.converted_video[self.start:self.substantial_time, :, :], axis=0)
        growth[np.logical_or(growth < 0, intensity_extent < np.median(intensity_extent))] = 0
        growth = bracket_to_uint8_image_contrast(growth)
        growth *= self.borders
        growth_vision = OneImageAnalysis(growth)
        growth_vision.segmentation(allowed_window=allowed_window)
        if self.vars['several_blob_per_arena']:
            _, _, stats, _ = cv2.connectedComponentsWithStats(self.origin)
            do_erode = np.any(stats[1:, 4] > 50)
        else:
            do_erode = self.origin.sum() > 50
        if do_erode:
            self.substantial_image = cv2.erode(growth_vision.binary_image, cross_33, iterations=2)
        else:
            self.substantial_image = growth_vision.binary_image

        if np.any(self.substantial_image):
            natural_noise = np.nonzero(intensity_extent == np.min(intensity_extent))
            natural_noise = self.converted_video[self.start:self.substantial_time, natural_noise[0][0], natural_noise[1][0]]
            natural_noise = moving_average(natural_noise, 5)
            natural_noise = np.ptp(natural_noise)
            subst_idx = np.nonzero(self.substantial_image)
            cover_lengths = np.zeros(len(subst_idx[0]), dtype=np.uint32)
            for index in np.arange(len(subst_idx[0])):
                vector = self.converted_video[self.start:self.substantial_time, subst_idx[0][index], subst_idx[1][index]]
                left, right = find_major_incline(vector, natural_noise)
                # If find_major_incline did find a major incline: (otherwise it put 0 to left and 1 to right)
                if not np.logical_and(left == 0, right == 1):
                    cover_lengths[index] = len(vector[left:-right])
            # If this analysis fails put a deterministic step
            if len(cover_lengths[cover_lengths > 0]) > 0:
                self.step = (np.round(np.mean(cover_lengths[cover_lengths > 0])).astype(int) // 2) + 1
                logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Pre-processing detection: the time for a pixel to get covered is set to {self.step}")
            else:
                logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Pre-processing detection: could not automatically find the time for a pixel to get covered. Default value is 1 for video length < 40 and 10 otherwise")
            # Make sure to avoid a step overestimation
            if self.step > self.dims[0] // 20:
                self.step: int = self.dims[0] // 20

            if self.step == 0:
                self.step: int = 1
        # When the first_move_threshold is not stringent enough the program may detect a movement due to noise
        # In that case, the substantial_image is empty and there is no reason to proceed further
        else:
            self.start = None
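The per-pixel covering time above is estimated from each pixel's intensity trace: smooth it, then count the frames spent between the baseline and the plateau, ignoring excursions smaller than the measured noise. A simplified sketch of that idea (the module's own `moving_average` and `find_major_incline` helpers do the real work; these stand-ins only illustrate the principle):

```python
import numpy as np

def moving_average(signal, window):
    """Boxcar smoothing; returns len(signal) - window + 1 samples."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode='valid')

def covered_frames(pixel_series, noise):
    """Count the frames a smoothed pixel trace spends climbing from just
    above its baseline to just below its plateau, treating fluctuations
    smaller than `noise` as flat."""
    smooth = moving_average(np.asarray(pixel_series, dtype=np.float64), 5)
    low, high = smooth.min() + noise, smooth.max() - noise
    rising = np.nonzero((smooth > low) & (smooth < high))[0]
    return 0 if rising.size == 0 else int(rising[-1] - rising[0] + 1)
```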

    def detection(self, compute_all_possibilities: bool=False):
        """
        Perform frame-by-frame or luminosity-based segmentation on video data to detect cell motion and growth.

        This function processes video frames using either frame-by-frame segmentation or luminosity-based
        segmentation algorithms to detect cell motion and growth. It handles drift correction, adjusts parameters
        based on configuration settings, and applies logical operations to combine results from different segmentation
        methods.

        Parameters
        ----------
        compute_all_possibilities : bool, optional
            Flag to determine if all segmentation possibilities should be computed, by default False

        Returns
        -------
        None

        Notes
        -----
        This function modifies the instance variables `self.segmented`, `self.converted_video`,
        and potentially `self.luminosity_segmentation` and `self.gradient_segmentation`.
        Depending on the configuration settings, it performs various segmentation algorithms and updates
        the instance variables accordingly.

        """
        if self.start is None:
            self.start = 1
        else:
            self.start = np.max((self.start, 1))
        self.lost_frames = np.min((self.step, self.dims[0] // 4))
        # I/ Image by image segmentation algorithms
        # If images contain a drift correction (zeros at borders of the image,
        # Replace these 0 by normal background values before segmenting
        if self.vars['frame_by_frame_segmentation'] or compute_all_possibilities:
            logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Detect cell motion and growth using the frame by frame segmentation algorithm")
            self.segmented = np.zeros(self.dims, dtype=np.uint8)
            for t in np.arange(self.dims[0]):
                analysisi = self.frame_by_frame_segmentation(t, self.segmented[t - 1, ...])
                self.segmented[t, ...] = analysisi.binary_image

                if self.vars['lose_accuracy_to_save_memory']:
                    self.converted_video[t, ...] = bracket_to_uint8_image_contrast(analysisi.image)
                else:
                    self.converted_video[t, ...] = analysisi.image
                if self.vars['convert_for_motion']['logical'] != 'None':
                    if self.vars['lose_accuracy_to_save_memory']:
                        self.converted_video2[t, ...] = bracket_to_uint8_image_contrast(analysisi.image2)
                    else:
                        self.converted_video2[t, ...] = analysisi.image2

        if self.vars['color_number'] == 2:
            luminosity_segmentation, l_threshold_over_time = self.lum_value_segmentation(self.converted_video, do_threshold_segmentation=self.vars['do_threshold_segmentation'] or compute_all_possibilities)
            self.converted_video = self.smooth_pixel_slopes(self.converted_video)
            gradient_segmentation = None
            if self.vars['do_slope_segmentation'] or compute_all_possibilities:
                gradient_segmentation = self.lum_slope_segmentation(self.converted_video)
                if gradient_segmentation is not None:
                    gradient_segmentation[-self.lost_frames:, ...] = np.repeat(gradient_segmentation[-self.lost_frames, :, :][np.newaxis, :, :], self.lost_frames, axis=0)
            if self.vars['convert_for_motion']['logical'] != 'None':
                if self.vars['do_threshold_segmentation'] or compute_all_possibilities:
                    luminosity_segmentation2, l_threshold_over_time2 = self.lum_value_segmentation(self.converted_video2, do_threshold_segmentation=True)
                    if luminosity_segmentation is None:
                        luminosity_segmentation = luminosity_segmentation2
                    if luminosity_segmentation is not None:
                        if self.vars['convert_for_motion']['logical'] == 'Or':
                            luminosity_segmentation = np.logical_or(luminosity_segmentation, luminosity_segmentation2)
                        elif self.vars['convert_for_motion']['logical'] == 'And':
                            luminosity_segmentation = np.logical_and(luminosity_segmentation, luminosity_segmentation2)
                        elif self.vars['convert_for_motion']['logical'] == 'Xor':
                            luminosity_segmentation = np.logical_xor(luminosity_segmentation, luminosity_segmentation2)
                self.converted_video2 = self.smooth_pixel_slopes(self.converted_video2)
                if self.vars['do_slope_segmentation'] or compute_all_possibilities:
                    gradient_segmentation2 = self.lum_slope_segmentation(self.converted_video2)
                    if gradient_segmentation2 is not None:
                        gradient_segmentation2[-self.lost_frames:, ...] = np.repeat(gradient_segmentation2[-self.lost_frames, :, :][np.newaxis, :, :], self.lost_frames, axis=0)
                    if gradient_segmentation is None:
                        gradient_segmentation = gradient_segmentation2
                    if gradient_segmentation is not None:
                        if self.vars['convert_for_motion']['logical'] == 'Or':
                            gradient_segmentation = np.logical_or(gradient_segmentation, gradient_segmentation2)
                        elif self.vars['convert_for_motion']['logical'] == 'And':
                            gradient_segmentation = np.logical_and(gradient_segmentation, gradient_segmentation2)
                        elif self.vars['convert_for_motion']['logical'] == 'Xor':
                            gradient_segmentation = np.logical_xor(gradient_segmentation, gradient_segmentation2)

            if compute_all_possibilities:
                logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Compute all options to detect cell motion and growth. Maximal growth per frame: {self.vars['maximal_growth_factor']}")
                if luminosity_segmentation is not None:
                    self.luminosity_segmentation = np.nonzero(luminosity_segmentation)
                if gradient_segmentation is not None:
                    self.gradient_segmentation = np.nonzero(gradient_segmentation)
                if luminosity_segmentation is not None and gradient_segmentation is not None:
                    self.logical_and = np.nonzero(np.logical_and(luminosity_segmentation, gradient_segmentation))
                    self.logical_or = np.nonzero(np.logical_or(luminosity_segmentation, gradient_segmentation))
            elif not self.vars['frame_by_frame_segmentation']:
                if self.vars['do_threshold_segmentation'] and not self.vars['do_slope_segmentation']:
                    logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Detect with luminosity threshold segmentation algorithm")
                    self.segmented = luminosity_segmentation
                if self.vars['do_slope_segmentation']:
                    logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Detect with luminosity slope segmentation algorithm")
                    self.segmented = gradient_segmentation
                if np.logical_and(self.vars['do_threshold_segmentation'], self.vars['do_slope_segmentation']):
                    if self.vars['true_if_use_light_AND_slope_else_OR']:
                        logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Detection results from threshold AND slope segmentation algorithms")
                        if luminosity_segmentation is not None and gradient_segmentation is not None:
                            self.segmented = np.logical_and(luminosity_segmentation, gradient_segmentation)
                    else:
                        logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Detection results from threshold OR slope segmentation algorithms")
                        if luminosity_segmentation is not None and gradient_segmentation is not None:
                            self.segmented = np.logical_or(luminosity_segmentation, gradient_segmentation)
                self.segmented = self.segmented.astype(np.uint8)
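The Or/And/Xor dispatch used above to merge the two channel-combination results appears twice in this method; factored out, it is a lookup into NumPy's logical operators (the helper name is mine):

```python
import numpy as np

def combine_masks(mask1, mask2, op):
    """Merge two binary masks according to the 'logical' setting
    ('Or', 'And', 'Xor', or 'None' to keep the first mask only)."""
    if op == 'None' or mask2 is None:
        return np.asarray(mask1, dtype=bool)
    ops = {'Or': np.logical_or, 'And': np.logical_and, 'Xor': np.logical_xor}
    return ops[op](mask1, mask2)
```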


    def frame_by_frame_segmentation(self, t: int, previous_binary_image: NDArray=None):
        """

        Frame-by-frame segmentation of a video.

        Parameters
        ----------
        t : int
            The time index of the frame to process.
        previous_binary_image : NDArray, optional
            The binary image from the previous frame. Default is `None`.

        Returns
        -------
        OneImageAnalysis
            An object containing the analysis of the current frame.
        """
        contrasted_im = bracket_to_uint8_image_contrast(self.converted_video[t, :, :])
        # 1. Get the mask valid for a number of images around it (step).
        allowed_window = None
        if self.vars['drift_already_corrected']:
            half_step = np.ceil(self.step / 2).astype(int)
            t_start = t - half_step
            t_end = t + half_step
            t_start = np.max((t_start, 0))
            t_end = np.min((t_end, self.dims[0]))
            min_y, max_y = np.max(self.drift_mask_coord[t_start:t_end, 0]), np.min(self.drift_mask_coord[t_start:t_end, 1])
            min_x, max_x = np.max(self.drift_mask_coord[t_start:t_end, 2]), np.min(self.drift_mask_coord[t_start:t_end, 3])
            allowed_window = min_y, max_y, min_x, max_x

        analysisi = OneImageAnalysis(contrasted_im)
        if self.vars['convert_for_motion']['logical'] != 'None':
            contrasted_im2 = bracket_to_uint8_image_contrast(self.converted_video2[t, :, :])
            analysisi.image2 = contrasted_im2

        if previous_binary_image is None or t == 0:
            analysisi.previous_binary_image = self.origin
        else:
            analysisi.previous_binary_image = previous_binary_image

        analysisi.segmentation(self.vars['convert_for_motion']['logical'], self.vars['color_number'],
                               bio_label=self.vars["bio_label"], bio_label2=self.vars["bio_label2"],
                               rolling_window_segmentation=self.vars['rolling_window_segmentation'],
                               lighter_background=self.vars['lighter_background'],
                               allowed_window=allowed_window, filter_spec=self.vars['filter_spec']) # filtering already done when creating converted_video

        return analysisi
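The drift-mask window computed at the top of this method is the intersection of the per-frame valid bounding boxes over a half-step neighborhood of `t`. As a standalone function (the name is mine; rows are (min_y, max_y, min_x, max_x) as built in `get_origin_shape`):

```python
import numpy as np

def allowed_window_over(drift_mask_coord, t, step, n_frames):
    """Intersect per-frame bounding boxes (min_y, max_y, min_x, max_x)
    over the frames within half a step of t, clipped to the video."""
    half = int(np.ceil(step / 2))
    t_start, t_end = max(t - half, 0), min(t + half, n_frames)
    window = drift_mask_coord[t_start:t_end]
    # Tightest box valid for every frame in the neighborhood
    return (int(window[:, 0].max()), int(window[:, 1].min()),
            int(window[:, 2].max()), int(window[:, 3].min()))
```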

    def lum_value_segmentation(self, converted_video: NDArray, do_threshold_segmentation: bool) -> Tuple[NDArray, NDArray]:
        """
        Perform segmentation based on luminosity values from a video.

        Parameters
        ----------
        converted_video : NDArray
            The input video data in a NumPy array format.
        do_threshold_segmentation : bool
            Flag to determine whether threshold segmentation should be applied.

        Returns
        -------
        Tuple[NDArray, NDArray]
            A tuple containing two NumPy arrays:
            - The first array is the luminosity segmentation of the video.
            - The second array represents the luminosity threshold over time.

        Notes
        -----
        This function operates under the assumption that there is sufficient motion in the video data.
        If no valid thresholds are found for segmentation, the function returns None for
        `luminosity_segmentation`.
        """
        shape_motion_failed: bool = False
        if self.vars['lighter_background']:
            covering_l_values = np.min(converted_video[:self.substantial_time, :, :],
                                             0) * self.substantial_image
        else:
            covering_l_values = np.max(converted_video[:self.substantial_time, :, :],
                                             0) * self.substantial_image
        # Avoid errors by checking whether the covering values are nonzero
        covering_l_values = covering_l_values[covering_l_values != 0]
        if len(covering_l_values) == 0:
            shape_motion_failed = True

        luminosity_segmentation = None
        l_threshold_over_time = None
        if not shape_motion_failed:
            value_segmentation_thresholds = np.arange(0.8, -0.7, -0.1)
            validated_thresholds = np.zeros(value_segmentation_thresholds.shape, dtype=bool)
            counter = 0
            max_motion_per_frame = (self.dims[1] * self.dims[2]) * self.vars['maximal_growth_factor'] * 2
            if self.vars['lighter_background']:
                basic_bckgrnd_values = np.quantile(converted_video[:(self.lost_frames + 1), ...], 0.9, axis=(1, 2))
            else:
                basic_bckgrnd_values = np.quantile(converted_video[:(self.lost_frames + 1), ...], 0.1, axis=(1, 2))
            # Try different threshold values and keep those that do not
            # segment more than a set fraction of the image
            while counter <= 14:
                value_threshold = value_segmentation_thresholds[counter]
                if self.vars['lighter_background']:
                    l_threshold = (1 + value_threshold) * np.max(covering_l_values)
                else:
                    l_threshold = (1 - value_threshold) * np.min(covering_l_values)
                starting_segmentation, l_threshold_over_time = segment_with_lum_value(converted_video[:(self.lost_frames + 1), ...],
                                                               basic_bckgrnd_values, l_threshold,
                                                               self.vars['lighter_background'])

                changing_pixel_number = np.sum(np.absolute(np.diff(starting_segmentation.astype(np.int8), 1, 0)), (1, 2))
                validation = np.max(np.sum(starting_segmentation, (1, 2))) < max_motion_per_frame and (
                        np.max(changing_pixel_number) < max_motion_per_frame)
                validated_thresholds[counter] = validation
                if np.any(validated_thresholds):
                    if not validation:
                        break
                counter += 1
            # If any threshold is accepted, use one of them to perform the final thresholding
            valid_number = validated_thresholds.sum()
            if valid_number > 0:
                if valid_number > 2:
                    index_to_keep = 2
                else:
                    index_to_keep = valid_number - 1
                value_threshold = value_segmentation_thresholds[
                    np.uint8(np.floor(np.mean(np.nonzero(validated_thresholds)[0][index_to_keep])))]
            else:
                value_threshold = 0

            if self.vars['lighter_background']:
                l_threshold = (1 + value_threshold) * np.max(covering_l_values)
            else:
                l_threshold = (1 - value_threshold) * np.min(covering_l_values)
            if do_threshold_segmentation:
                if self.vars['lighter_background']:
                    basic_bckgrnd_values = np.quantile(converted_video, 0.9, axis=(1, 2))
                else:
                    basic_bckgrnd_values = np.quantile(converted_video, 0.1, axis=(1, 2))
                luminosity_segmentation, l_threshold_over_time = segment_with_lum_value(converted_video, basic_bckgrnd_values,
                                                                 l_threshold, self.vars['lighter_background'])
            else:
                luminosity_segmentation, l_threshold_over_time = segment_with_lum_value(converted_video[:(self.lost_frames + 1), ...],
                                                               basic_bckgrnd_values, l_threshold,
                                                               self.vars['lighter_background'])
        return luminosity_segmentation, l_threshold_over_time

    def smooth_pixel_slopes(self, converted_video: NDArray) -> NDArray:
        """
        Apply smoothing to pixel slopes in a video by convolving with a moving average kernel.

        Parameters
        ----------
        converted_video : NDArray
            The input video array to be smoothed.

        Returns
        -------
        NDArray
            Smoothed video array with pixel slopes averaged using a moving average kernel.

        Raises
        ------
        MemoryError
            If there is not enough RAM available to perform the smoothing operation.

        Notes
        -----
        This function applies a moving average kernel to each pixel across the frames of
        the input video. The smoothing operation can be repeated based on user-defined settings.
        The precision of the output array is controlled by a flag that determines whether to
        save memory at the cost of accuracy.

        Examples
        --------
        >>> # Assumes `ma` is an initialized MotionAnalysis instance
        >>> smoothed = ma.smooth_pixel_slopes(converted_video)  # doctest: +SKIP
        >>> print(smoothed.shape)  # matches the input video shape
        """
        try:
            # Accumulate in float32 when saving memory, float64 otherwise
            if self.vars['lose_accuracy_to_save_memory']:
                out_dtype = np.float32
            else:
                out_dtype = np.float64
            smoothed_video = np.zeros(self.dims, dtype=out_dtype)
            smooth_kernel = np.ones(self.step, dtype=np.float64) / self.step
            pad_width = (self.step // 2, self.step - 1 - self.step // 2)
            for i in np.arange(converted_video.shape[1]):
                for j in np.arange(converted_video.shape[2]):
                    padded = np.pad(converted_video[:, i, j] / self.mean_intensity_per_frame,
                                    pad_width, mode='edge')
                    moving_average = np.convolve(padded, smooth_kernel, mode='valid')
                    # Optionally repeat the smoothing
                    for _ in np.arange(1, self.vars['repeat_video_smoothing']):
                        padded = np.pad(moving_average, pad_width, mode='edge')
                        moving_average = np.convolve(padded, smooth_kernel, mode='valid')
                    smoothed_video[:, i, j] = moving_average.astype(out_dtype)
            return smoothed_video

        except MemoryError:
            logging.error("Not enough RAM available to smooth pixel curves. Detection may fail.")
            smoothed_video = converted_video
            return smoothed_video

    def lum_slope_segmentation(self, converted_video: NDArray) -> NDArray:
        """
        Perform luminosity-slope segmentation on the given video.

        Parameters
        ----------
        converted_video : NDArray
            The input video array for segmentation processing.

        Returns
        -------
        NDArray
            Segmented gradient array of the video, or `None` if segmentation
            fails.

        Notes
        -----
        This function may consume significant memory and adjusts
        data types (float32 or float64) based on available RAM.

        Examples
        --------
        >>> # Assumes `ma` is an initialized MotionAnalysis instance
        >>> result = ma.lum_slope_segmentation(converted_video)  # doctest: +SKIP
        """
        shape_motion_failed : bool = False
        # 2) Contrast increase
        oridx = np.nonzero(self.origin)
        notoridx = np.nonzero(1 - self.origin)
        do_increase_contrast = np.mean(converted_video[0, oridx[0], oridx[1]]) * 10 > np.mean(
                converted_video[0, notoridx[0], notoridx[1]])
        necessary_memory = self.dims[0] * self.dims[1] * self.dims[2] * 64 * 2 * 1.16415e-10
        available_memory = (virtual_memory().available >> 30) - self.vars['min_ram_free']
        if self.vars['lose_accuracy_to_save_memory']:
            derive = converted_video.astype(np.float32)
        else:
            derive = converted_video.astype(np.float64)
        if necessary_memory > available_memory:
            converted_video = None

        if do_increase_contrast:
            derive = np.square(derive)

        # 3) Get the gradient
        necessary_memory = derive.size * 64 * 4 * 1.16415e-10
        available_memory = (virtual_memory().available >> 30) - self.vars['min_ram_free']
        if necessary_memory > available_memory:
            for cy in np.arange(self.dims[1]):
                for cx in np.arange(self.dims[2]):
                    if self.vars['lose_accuracy_to_save_memory']:
                        derive[:, cy, cx] = np.gradient(derive[:, cy, cx], self.step).astype(np.float32)
                    else:
                        derive[:, cy, cx] = np.gradient(derive[:, cy, cx], self.step)
        else:
            if self.vars['lose_accuracy_to_save_memory']:
                derive = np.gradient(derive, self.step, axis=0).astype(np.float32)
            else:
                derive = np.gradient(derive, self.step, axis=0)

        # 4) Segment
        if self.vars['lighter_background']:
            covering_slopes = np.min(derive[:self.substantial_time, :, :], 0) * self.substantial_image
        else:
            covering_slopes = np.max(derive[:self.substantial_time, :, :], 0) * self.substantial_image
        covering_slopes = covering_slopes[covering_slopes != 0]
        if len(covering_slopes) == 0:
            shape_motion_failed = True

        gradient_segmentation = None
        if not shape_motion_failed:
            gradient_segmentation = np.zeros(self.dims, np.uint8)
            value_segmentation_thresholds = np.arange(0.8, -0.7, -0.1)
            validated_thresholds = np.zeros(value_segmentation_thresholds.shape, dtype=bool)
            counter = 0
            max_motion_per_frame = (self.dims[1] * self.dims[2]) * self.vars['maximal_growth_factor']
            # Try different threshold values and keep those that do not
            # segment more than a set fraction of the image
            while counter < value_segmentation_thresholds.shape[0]:
                ease_slope_segmentation = value_segmentation_thresholds[counter]
                if self.vars['lighter_background']:
                    gradient_threshold = (1 + ease_slope_segmentation) * np.max(covering_slopes)
                    sample = np.less(derive[:self.substantial_time], gradient_threshold)
                else:
                    gradient_threshold = (1 - ease_slope_segmentation) * np.min(covering_slopes)
                    sample = np.greater(derive[:self.substantial_time], gradient_threshold)
                changing_pixel_number = np.sum(np.absolute(np.diff(sample.astype(np.int8), 1, 0)), (1, 2))
                validation = np.max(np.sum(sample, (1, 2))) < max_motion_per_frame and (
                        np.max(changing_pixel_number) < max_motion_per_frame)
                validated_thresholds[counter] = validation
                if np.any(validated_thresholds):
                    if not validation:
                        break
                counter += 1
            # If any threshold is accepted, use one of them to perform the final thresholding
            valid_number = validated_thresholds.sum()
            if valid_number > 0:
                if valid_number > 2:
                    index_to_keep = 2
                else:
                    index_to_keep = valid_number - 1
                ease_slope_segmentation = value_segmentation_thresholds[
                    np.uint8(np.floor(np.mean(np.nonzero(validated_thresholds)[0][index_to_keep])))]

                if self.vars['lighter_background']:
                    gradient_threshold = (1 + ease_slope_segmentation) * np.max(covering_slopes)
                    gradient_segmentation[:-self.lost_frames, :, :] = np.less(derive, gradient_threshold)[
                        self.lost_frames:, :, :]
                else:
                    gradient_threshold = (1 - ease_slope_segmentation) * np.min(covering_slopes)
                    gradient_segmentation[:-self.lost_frames, :, :] = np.greater(derive, gradient_threshold)[
                        self.lost_frames:, :, :]
            else:
                if self.vars['lighter_background']:
                    gradient_segmentation[:-self.lost_frames, :, :] = (derive < (np.min(derive, (1, 2)) * 1.1)[:, None, None])[self.lost_frames:, :, :]
                else:
                    gradient_segmentation[:-self.lost_frames, :, :] = (derive > (np.max(derive, (1, 2)) * 0.1)[:, None, None])[self.lost_frames:, :, :]
        return gradient_segmentation

    def update_ring_width(self):
        """

        Update the `pixel_ring_depth` and create an erodila disk.

        This method ensures that the pixel ring depth is odd and at least 3,
        then creates an erodila disk of that size.
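
        Examples
        --------
        >>> # Illustrative only; assumes `ma` is an initialized MotionAnalysis.
        >>> # An even depth of 4 is rounded up to the next odd value:
        >>> ma.pixel_ring_depth = 4  # doctest: +SKIP
        >>> ma.update_ring_width()  # doctest: +SKIP
        >>> ma.pixel_ring_depth  # doctest: +SKIP
        5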
        """
        # Make sure that self.pixel_ring_depth is odd and at least 3
        if self.pixel_ring_depth <= 3:
            self.pixel_ring_depth = 3
        if self.pixel_ring_depth % 2 == 0:
            self.pixel_ring_depth = self.pixel_ring_depth + 1
        self.erodila_disk = create_ellipse(self.pixel_ring_depth, self.pixel_ring_depth, min_size=3).astype(np.uint8)
        self.max_distance = self.pixel_ring_depth * self.vars['detection_range_factor']

    def initialize_post_processing(self):
        """

        Initialize post-processing for video analysis.

        This function initializes various parameters and prepares the binary
        representation used in post-processing of video data. It logs information about
        the settings, handles initial origin states, sets up segmentation data,
        calculates surface areas, and optionally corrects errors around the initial
        shape or prevents fast growth near the periphery.

        Notes
        -----
        This function performs several initialization steps and logs relevant information,
        including handling different origin states, updating segmentation data, and
        calculating the gravity field based on binary representation.

        """
        ## Initialization
        logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Starting Post_processing. Specimen activity: {self.vars['specimen_activity']}. Fading coef: {self.vars['fading']}, Subtract background: {self.vars['subtract_background']}, Correct errors around initial shape: {self.vars['correct_errors_around_initial']}, Connect distant shapes: {self.vars['detection_range_factor'] > 0}, How to select appearing cell(s): {self.vars['appearance_detection_method']}")
        self.binary = np.zeros(self.dims[:3], dtype=np.uint8)
        if self.origin.shape[0] != self.binary[self.start - 1, :, :].shape[0] or self.origin.shape[1] != self.binary[self.start - 1, :, :].shape[1]:
            logging.error("Unaltered videos deprecated, they have been created with different settings.\nDelete .h5 videos and cellects_settings.json and re-run")

        if self.vars['origin_state'] == "invisible":
            self.binary[self.start - 1, :, :] = self.origin.copy()
            self.covering_intensity[self.origin_idx[0], self.origin_idx[1]] = self.converted_video[self.start, self.origin_idx[0], self.origin_idx[1]]
        else:
            if self.vars['origin_state'] == "fluctuating":
                self.covering_intensity[self.origin_idx[0], self.origin_idx[1]] = np.median(self.converted_video[:self.start, self.origin_idx[0], self.origin_idx[1]], axis=0)

            self.binary[:self.start, :, :] = np.repeat(np.expand_dims(self.origin, 0), self.start, axis=0)
            if self.start < self.step:
                frames_to_assess = self.step
                self.segmented[self.start - 1, ...] = self.binary[self.start - 1, :, :]
                for t in np.arange(self.start, self.lost_frames):
                    # Only keep pixels that are always detected
                    always_found = np.sum(self.segmented[t:(t + frames_to_assess), ...], 0)
                    always_found = always_found == frames_to_assess
                    # Remove too small shapes
                    without_small, stats, centro = cc(always_found.astype(np.uint8))
                    large_enough = np.nonzero(stats[1:, 4] > ((self.vars['first_move_threshold'] + 1) // 2))[0]
                    if len(large_enough) > 0:
                        always_found = np.isin(without_small, large_enough + 1)
                        always_found = np.logical_or(always_found, self.segmented[t - 1, ...])
                        self.segmented[t, ...] *= always_found
                    else:
                        self.segmented[t, ...] = 0
                    self.segmented[t, ...] = np.logical_or(self.segmented[t - 1, ...], self.segmented[t, ...])
        self.mean_distance_per_frame = None
        self.surfarea = np.zeros(self.dims[0], dtype=np.uint64)
        self.surfarea[:self.start] = np.sum(self.binary[:self.start, :, :], (1, 2))
        self.gravity_field = inverted_distance_transform(self.binary[(self.start - 1), :, :],
                                           np.sqrt(np.sum(self.binary[(self.start - 1), :, :])))
        if self.vars['correct_errors_around_initial']:
            self.rays, self.sun = draw_me_a_sun(self.binary[(self.start - 1), :, :], ray_length_coef=1)  # plt.imshow(sun)
            self.holes = np.zeros(self.dims[1:], dtype=np.uint8)
            self.pixel_ring_depth += 2
            self.update_ring_width()

        if self.vars['prevent_fast_growth_near_periphery']:
            self.near_periphery = np.zeros(self.dims[1:])
            if self.vars['arena_shape'] == 'circle':
                periphery_width = self.vars['periphery_width'] * 2
                elliperiphery = create_ellipse(self.dims[1] - periphery_width, self.dims[2] - periphery_width, min_size=3)
                half_width = periphery_width // 2
                if periphery_width % 2 == 0:
                    self.near_periphery[half_width:-half_width, half_width:-half_width] = elliperiphery
                else:
                    self.near_periphery[half_width:-half_width - 1, half_width:-half_width - 1] = elliperiphery
                self.near_periphery = 1 - self.near_periphery
            else:
                self.near_periphery[:self.vars['periphery_width'], :] = 1
                self.near_periphery[-self.vars['periphery_width']:, :] = 1
                self.near_periphery[:, :self.vars['periphery_width']] = 1
                self.near_periphery[:, -self.vars['periphery_width']:] = 1
            self.near_periphery = np.nonzero(self.near_periphery)

    def update_shape(self, show_seg: bool):
        """
        Update the shape of detected objects in the current frame by analyzing
        segmentation potentials and applying morphological operations.

        Parameters
        ----------
        show_seg : bool
            Flag indicating whether to display segmentation results.

        Notes
        -----
        This function performs several operations to update the shape of detected objects:
        - Analyzes segmentation potentials from previous frames.
        - Applies morphological operations to refine the shape.
        - Updates internal state variables such as `binary` and `covering_intensity`.

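        Examples
        --------
        >>> # Illustrative only; assumes `ma` is mid-analysis at frame `ma.t`:
        >>> ma.update_shape(show_seg=False)  # doctest: +SKIP
        >>> # `ma.binary[ma.t - 1]` now holds the refined shape; `ma.t` advanced by one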
        """
        # Get from gradients, a 2D matrix of potentially covered pixels
        # I/ dilate the shape made with covered pixels to assess for covering
        # I/ 1) Only keep pixels that have been detected at least two times in the three previous frames
        if self.vars['sliding_window_segmentation']:
            if self.t > 1:
                new_potentials = np.sum(self.segmented[(self.t - 2): (self.t + 1), :, :], 0, dtype=np.uint8)
            else:
                new_potentials = np.sum(self.segmented[: (self.t + 1), :, :], 0, dtype=np.uint8)
            new_potentials[new_potentials == 1] = 0
            new_potentials[new_potentials > 1] = 1
        else:
            new_potentials = self.segmented[self.t, :, :]

        # I/ 2) If a frame displays more new potential pixels than 50% of the image,
        # it is considered noisy and we fall back to using single frames.
        frame_counter = -1
        maximal_size = 0.5 * new_potentials.size
        if (self.vars["do_threshold_segmentation"] or self.vars["frame_by_frame_segmentation"]) and self.t > np.max((self.start + self.step, 6)):
           maximal_size = np.min((np.max(self.binary[:self.t].sum((1, 2))) * (1 + self.vars['maximal_growth_factor']), self.borders.sum()))
        while np.logical_and(np.sum(new_potentials) > maximal_size,
                             frame_counter <= 5):
            frame_counter += 1
            if frame_counter > self.t:
                break
            else:
                if frame_counter < 5:
                    new_potentials = self.segmented[self.t - frame_counter, :, :]
                else:
                    # If taking only one image is not enough, only keep the pixels
                    # detected in all six previous frames. Since they have not been
                    # processed by any slope calculation, they should be less noisy.
                    new_potentials = np.sum(self.segmented[(self.t - 5): (self.t + 1), :, :], 0, dtype=np.uint8)
                    new_potentials[new_potentials < 6] = 0
                    new_potentials[new_potentials == 6] = 1

        if self.vars['morphological_opening']:
            new_potentials = cv2.morphologyEx(new_potentials, cv2.MORPH_OPEN, cross_33)
        if self.vars['morphological_closing']:
            new_potentials = cv2.morphologyEx(new_potentials, cv2.MORPH_CLOSE, cross_33)
        new_potentials *= self.borders

        if self.vars['specimen_activity'] in ['grow', 'move and grow']:
            new_shape = self.binary[self.t - 1, :, :].copy()
            new_shape = np.logical_or(new_shape, new_potentials).astype(np.uint8)
            # Add distant shapes within a radius, score every added pixels according to their distance
            if not self.vars['several_blob_per_arena']:
                if new_shape.sum() == 0:
                    new_shape = new_potentials.copy()
                else:
                    # If max_distance is nonzero, look for distant shapes
                    pads = ProgressivelyAddDistantShapes(new_potentials, new_shape, self.max_distance)
                    pads.consider_shapes_sizes(self.vars['min_size_for_connection'],
                                               self.vars['max_size_for_connection'])
                    pads.connect_shapes(only_keep_connected_shapes=True, rank_connecting_pixels=True)

                    new_shape = pads.expanded_shape.copy()
                    new_shape[new_shape > 1] = 1
                    if np.logical_and(self.t > self.step, self.t < self.dims[0]):
                        if np.any(pads.expanded_shape > 5):
                            # Add distant shapes back in time at the covering speed of neighbors
                            self.binary[self.t][np.nonzero(new_shape)] = 1
                            self.binary[(self.step):(self.t + 1), :, :] = \
                                pads.modify_past_analysis(self.binary[(self.step):(self.t + 1), :, :],
                                                          self.segmented[(self.step):(self.t + 1), :, :])
                            new_shape = self.binary[self.t, :, :].copy()
                    del pads

                # Fill holes
                new_shape = cv2.morphologyEx(new_shape, cv2.MORPH_CLOSE, cross_33)
            del new_potentials

            if self.vars['specimen_activity'] == 'move and grow' and (self.t > self.step + self.lost_frames):
                # Shape Erosion
                # I/ After a substantial growth, erode the shape made with covered pixels to assess for fading
                # Use the newly covered pixels to calculate their mean covering intensity
                new_idx = np.nonzero(np.logical_xor(new_shape, self.binary[self.t - 1, :, :]))
                start_intensity_monitoring = self.t - self.lost_frames - self.step
                end_intensity_monitoring = self.t - self.lost_frames
                self.covering_intensity[new_idx[0], new_idx[1]] = np.median(self.converted_video[start_intensity_monitoring:end_intensity_monitoring, new_idx[0], new_idx[1]], axis=0)
                previous_binary = self.binary[self.t - 1, :, :]
                greyscale_image = self.converted_video[self.t - self.lost_frames, :, :]
                protect_from_fading = None
                if self.vars['origin_state'] == 'constant':
                    protect_from_fading = self.origin
                new_shape, self.covering_intensity = cell_leaving_detection(new_shape, self.covering_intensity, previous_binary, greyscale_image, self.vars['fading'], self.vars['lighter_background'], self.vars['several_blob_per_arena'], self.erodila_disk, protect_from_fading)
                del new_idx
        else:
            new_shape = new_potentials

        self.covering_intensity *= new_shape
        self.binary[self.t, :, :] = new_shape * self.borders
        self.surfarea[self.t] = np.sum(self.binary[self.t, :, :])

        if self.vars['specimen_activity'] in ['grow', 'move and grow']:
            # Calculate the mean distance covered per frame and correct for a ring of not really fading pixels
            if self.mean_distance_per_frame is None:
                if self.vars['correct_errors_around_initial'] and not self.vars['several_blob_per_arena']:
                    if np.logical_and((self.t % 20) == 0,
                                      np.logical_and(self.surfarea[self.t] > self.substantial_growth,
                                                     self.surfarea[self.t] < self.substantial_growth * 2)):
                        shape = self.binary[self.t, :, :] * self.sun
                        back = (1 - self.binary[self.t, :, :]) * self.sun
                        for ray in self.rays:
                            # For each sun's ray, see how they cross the shape/back and
                            # store the gravity_field value of these pixels (distance to the original shape).
                            ray_through_shape = (shape == ray) * self.gravity_field
                            ray_through_back = (back == ray) * self.gravity_field
                            if np.any(ray_through_shape):
                                if np.any(ray_through_back):
                                    # If at least one back pixel is nearer to the original shape than a shape pixel,
                                    # there is a hole to fill.
                                    if np.any(ray_through_back > np.min(ray_through_shape[ray_through_shape > 0])):
                                        # If the nearest pixels belong to the shape, suppress them
                                        # until the nearest remaining pixel belongs to the background
                                        while np.max(ray_through_back) <= np.max(ray_through_shape):
                                            ray_through_shape[ray_through_shape == np.max(ray_through_shape)] = 0
                                        # Now, all back pixels that are nearer than the closest shape pixel should get filled
                                        # To do so, replace back pixels further than the nearest shape pixel by 0
                                        ray_through_back[ray_through_back < np.max(ray_through_shape)] = 0
                                        self.holes[np.nonzero(ray_through_back)] = 1
                                else:
                                    self.rays = self.rays[self.rays != ray]
                            del ray_through_shape
                            del ray_through_back
                        del shape
                        del back
                if np.any(self.surfarea[:self.t] > self.substantial_growth * 2):

                    if self.vars['correct_errors_around_initial'] and not self.vars['several_blob_per_arena']:
                        # Apply the hole correction
                        self.holes = cv2.morphologyEx(self.holes, cv2.MORPH_CLOSE, cross_33, iterations=10)
                        # If some holes are not covered by now
                        if np.any(self.holes * (1 - self.binary[self.t, :, :])):
                            self.binary[:(self.t + 1), :, :], holes_time_end, distance_against_time = \
                                dynamically_expand_to_fill_holes(self.binary[:(self.t + 1), :, :], self.holes)
                            if holes_time_end is not None:
                                self.binary[holes_time_end:(self.t + 1), :, :] += self.binary[holes_time_end, :, :]
                                self.binary[holes_time_end:(self.t + 1), :, :][
                                    self.binary[holes_time_end:(self.t + 1), :, :] > 1] = 1
                                self.surfarea[:(self.t + 1)] = np.sum(self.binary[:(self.t + 1), :, :], (1, 2))

                        else:
                            distance_against_time = [1, 2]
                    else:
                        distance_against_time = [1, 2]
                    distance_against_time = np.diff(distance_against_time)
                    if len(distance_against_time) > 0:
                        self.mean_distance_per_frame = np.mean(- distance_against_time)
                    else:
                        self.mean_distance_per_frame = 1

            if self.vars['prevent_fast_growth_near_periphery']:
                growth_near_periphery = np.diff(self.binary[self.t-1:self.t+1, self.near_periphery[0], self.near_periphery[1]], axis=0)
                if (growth_near_periphery == 1).sum() > self.vars['max_periphery_growth']:
                    periphery_to_remove = np.zeros(self.dims[1:], dtype=np.uint8)
                    periphery_to_remove[self.near_periphery[0], self.near_periphery[1]] = self.binary[self.t, self.near_periphery[0], self.near_periphery[1]]
                    shapes, stats, centers = cc(periphery_to_remove)
                    periphery_to_remove = np.nonzero(np.isin(shapes, np.nonzero(stats[:, 4] > self.vars['max_periphery_growth'])[0][1:]))
                    self.binary[self.t, periphery_to_remove[0], periphery_to_remove[1]] = self.binary[self.t - 1, periphery_to_remove[0], periphery_to_remove[1]]
                    if not self.vars['several_blob_per_arena']:
                        shapes, stats, centers = cc(self.binary[self.t, ...])
                        shapes[shapes != 1] = 0
                        self.binary[self.t, ...] = shapes
                    del periphery_to_remove
                    del shapes
                    del stats
                    del centers
                del growth_near_periphery

        # Display
        if show_seg:
            if self.visu is not None:
                im_to_display = self.visu[self.t, ...].copy()
                contours = np.nonzero(cv2.morphologyEx(self.binary[self.t, :, :], cv2.MORPH_GRADIENT, cross_33))
                if self.vars['lighter_background']:
                    im_to_display[contours[0], contours[1]] = 0
                else:
                    im_to_display[contours[0], contours[1]] = 255
            else:
                im_to_display = self.binary[self.t, :, :] * 255
            imtoshow = cv2.resize(im_to_display, (540, 540))
            cv2.imshow("shape_motion", imtoshow)
            cv2.waitKey(1)
        self.t += 1

    def get_descriptors_from_binary(self, release_memory: bool=True):
        """

        Methods: get_descriptors_from_binary

        Summary
        -------
        Generates shape descriptors for binary images, computes these descriptors for each frame and handles colony
        tracking. This method can optionally release memory to reduce usage, apply scaling factors to descriptors
        in millimeters and computes solidity separately if requested.

        Parameters
        ----------
        release_memory : bool, optional
            Flag to determine whether memory should be released after computation. Default is True.

        Other Parameters
        ----------------
        **self.one_row_per_frame**
            DataFrame to store one row of descriptors per frame.
            - **'arena'**: Arena identifier, repeated for each frame.
            - **'time'**: Array of time values corresponding to frames.

        **self.binary**
            3D array representing binary images over time.
            - **t,x,y**: Time index, x-coordinate, and y-coordinate.

        **self.dims**
            Tuple containing image dimensions.
            - **0**: Number of time frames.
            - **1,2**: Image width and height respectively.

        **self.surfarea**
            Array containing surface areas for each frame.

        **self.time_interval**
            Time interval between frames, calculated only if provided timings are non-zero.

        Notes
        -----
        This method uses various helper methods and classes like `ShapeDescriptors` for computing shape descriptors,
        `PercentAndTimeTracker` for progress tracking, and other image processing techniques such as connected components analysis.

        """
        logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Computing and saving specimen(s) coordinates and required descriptors")
        if release_memory:
            del self.substantial_image
            del self.covering_intensity
            del self.segmented
            del self.gravity_field
            del self.sun
            del self.rays
            del self.holes
            collect()
        self.surfarea = self.binary.sum((1, 2))
        timings = np.array(self.vars['exif'])
        if len(timings) < self.dims[0]:
            timings = np.arange(self.dims[0])
        if np.any(timings > 0):
            self.time_interval = np.mean(np.diff(timings))
        else:
            self.time_interval = 1.
        timings = timings[:self.dims[0]]

        # Detect first motion
        self.one_descriptor_per_arena['first_move'] = detect_first_move(self.surfarea, self.vars['first_move_threshold'])

        self.compute_solidity_separately: bool = self.vars['iso_digi_analysis'] and not self.vars['several_blob_per_arena'] and not self.vars['descriptors']['solidity']
        if self.compute_solidity_separately:
            self.solidity = np.zeros(self.dims[0], dtype=np.float64)
        if not self.vars['several_blob_per_arena']:
            # solidity must be added if the growth-transition detection is computed
            if self.compute_solidity_separately:
                for t in np.arange(self.dims[0]):
                    solidity = ShapeDescriptors(self.binary[t, :, :], ["solidity"])
                    self.solidity[t] = solidity.descriptors["solidity"]
            self.one_row_per_frame = compute_one_descriptor_per_frame(self.binary,
                                                                      self.one_descriptor_per_arena['arena'], timings,
                                                                      self.vars['descriptors'],
                                                                      self.vars['output_in_mm'],
                                                                      self.vars['average_pixel_size'],
                                                                      self.vars['specimen_activity'],
                                                                      self.vars['save_coord_specimen'])
        else:
            self.one_row_per_frame = compute_one_descriptor_per_colony(self.binary,
                                                                       self.one_descriptor_per_arena['arena'], timings,
                                                                       self.vars['descriptors'],
                                                                       self.vars['output_in_mm'],
                                                                       self.vars['average_pixel_size'],
                                                                       self.vars['specimen_activity'],
                                                                       self.vars['first_move_threshold'],
                                                                       self.vars['save_coord_specimen'])
        self.one_descriptor_per_arena["final_area"] = self.binary[-1, :, :].sum()
        if self.vars['output_in_mm']:
            self.one_descriptor_per_arena = scale_descriptors(self.one_descriptor_per_arena, self.vars['average_pixel_size'])
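The per-frame surface area and the frame interval used above are simple array reductions; a toy sketch of the same computations (the timestamps are invented stand-ins for the EXIF timings):

```python
import numpy as np

# Toy binary stack: 3 frames of a growing 5x5 arena.
binary = np.zeros((3, 5, 5), dtype=np.uint8)
binary[0, 2, 2] = 1
binary[1, 1:4, 1:4] = 1
binary[2, :, :] = 1

surfarea = binary.sum((1, 2))        # pixels covered in each frame
timings = np.array([0., 2., 4.])     # e.g. timestamps read from image metadata
# Mean inter-frame interval, falling back to 1 when no timing is available.
time_interval = np.mean(np.diff(timings)) if np.any(timings > 0) else 1.
```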

    def detect_growth_transitions(self):
        """
        Detect growth transitions in a biological image processing context.

        Analyzes the growth transitions of a shape within an arena, determining
        whether growth is isotropic and identifying any breaking points.

        Notes:
            This method modifies the `one_descriptor_per_arena` dictionary in place
            to include growth transition information.

        """
        if self.vars['iso_digi_analysis'] and not self.vars['several_blob_per_arena']:
            self.one_descriptor_per_arena['iso_digi_transi'] = pd.NA
            if not pd.isna(self.one_descriptor_per_arena['first_move']):
                logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Starting growth transition analysis.")

                # II) Once a pseudopod is deployed, look for a disk around the original shape
                growth_beginning = self.surfarea < ((self.surfarea[0] * 1.2) + ((self.dims[1] / 4) * (self.dims[2] / 4)))
                dilated_origin = cv2.dilate(self.binary[self.one_descriptor_per_arena['first_move'], :, :], kernel=cross_33, iterations=10, borderType=cv2.BORDER_CONSTANT, borderValue=0)
                isisotropic = np.sum(self.binary[:, :, :] * dilated_origin, (1, 2))
                isisotropic *= growth_beginning
                # Check whether the dilated origin area is at least 90% covered during the growth beginning
                isisotropic = isisotropic > 0.9 * dilated_origin.sum()
                if np.any(isisotropic):
                    self.one_descriptor_per_arena['is_growth_isotropic'] = 1
                    # Determine a solidity reference to look for a potential breaking of the isotropic growth
                    if self.compute_solidity_separately:
                        solidity_reference = np.mean(self.solidity[:self.one_descriptor_per_arena['first_move']])
                        different_solidity = self.solidity < (0.9 * solidity_reference)
                        del self.solidity
                    else:
                        solidity_reference = np.mean(
                            self.one_row_per_frame.iloc[:(self.one_descriptor_per_arena['first_move']), :]["solidity"])
                        different_solidity = self.one_row_per_frame["solidity"].values < (0.9 * solidity_reference)
                    # Make sure that the break in isotropic growth does not occur before the isotropic growth itself
                    if np.any(different_solidity):
                        self.one_descriptor_per_arena["iso_digi_transi"] = np.nonzero(different_solidity)[0][0] * self.time_interval
                else:
                    self.one_descriptor_per_arena['is_growth_isotropic'] = 0
            else:
                self.one_descriptor_per_arena['is_growth_isotropic'] = pd.NA


    def check_converted_video_type(self):
        """
        Check if the converted video type is uint8 and normalize it if necessary.
        """
        if self.converted_video.dtype != "uint8":
            self.converted_video = bracket_to_uint8_image_contrast(self.converted_video)
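`bracket_to_uint8_image_contrast` is assumed to stretch the video into the uint8 range; a naive stand-in illustrating the normalization (the function name and exact stretching rule are hypothetical):

```python
import numpy as np

def to_uint8_contrast(frame: np.ndarray) -> np.ndarray:
    """Stretch any numeric image to the full 0-255 uint8 range
    (a simplified stand-in for bracket_to_uint8_image_contrast)."""
    frame = frame.astype(np.float64)
    span = frame.max() - frame.min()
    if span == 0:
        return np.zeros(frame.shape, dtype=np.uint8)
    return np.round(255 * (frame - frame.min()) / span).astype(np.uint8)

img = np.array([[0.2, 0.4], [0.6, 1.0]])
out = to_uint8_contrast(img)
```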

    def networks_analysis(self, show_seg: bool=False):
        """
        Perform network detection within a given arena.

        This function carries out the task of detecting networks in an arena
        based on several parameters and variables. It involves checking video
        type, performing network detection over time, potentially detecting
        pseudopods, and smoothing segmentation. The results can be visualized or saved.
Extract and analyze graphs from a binary representation of network dynamics, producing vertex
        and edge tables that represent the graph structure over time.

        Args:
            None

        Attributes:
            vars (dict): Dictionary of variables that control the graph extraction process.
                - 'save_graph': Boolean indicating if graph extraction should be performed.
                - 'save_coord_network': Boolean indicating if the coordinate network should be saved.

            one_descriptor_per_arena (dict): Dictionary containing descriptors for each arena.

            dims (tuple): Tuple containing dimension information.
                - [0]: Integer representing the number of time steps.
                - [1]: Integer representing the y-dimension size.
                - [2]: Integer representing the x-dimension size.

            origin (np.ndarray): Binary image representing the origin of the network.

            binary (np.ndarray): Binary representation of network dynamics over time.
                Shape: (time_steps, y_dimension, x_dimension).

            converted_video (np.ndarray): Converted video data.
                Shape: (y_dimension, x_dimension, time_steps).

            network_dynamics (np.ndarray): Network dynamics representation.
                Shape: (time_steps, y_dimension, x_dimension).

        Notes:
            - This method performs graph extraction and saves the vertex and edge tables to CSV files.
            - The CSV files are named according to the arena, time steps, and dimensions.

        Args:
            show_seg: bool = False
                A flag that determines whether to display the segmentation visually.
        """
        coord_pseudopods = None
        if not self.vars['several_blob_per_arena'] and self.vars['save_coord_network']:
            self.check_converted_video_type()

            if self.vars['origin_state'] == "constant":
                self.coord_network, coord_pseudopods = detect_network_dynamics(self.converted_video, self.binary,
                                                           self.one_descriptor_per_arena['arena'], 0,
                                                           self.visu, self.origin, self.vars['sliding_window_segmentation'], 5, True,
                                                           self.vars['save_coord_network'], show_seg)
            else:
                self.coord_network, coord_pseudopods = detect_network_dynamics(self.converted_video, self.binary,
                                                           self.one_descriptor_per_arena['arena'], 0,
                                                           self.visu, None, self.vars['sliding_window_segmentation'], 5, True,
                                                           self.vars['save_coord_network'], show_seg)

        if not self.vars['several_blob_per_arena'] and self.vars['save_graph']:
            if self.coord_network is None:
                self.coord_network = np.array(np.nonzero(self.binary))
            if self.vars['origin_state'] == "constant":
                extract_graph_dynamics(self.converted_video, self.coord_network, self.one_descriptor_per_arena['arena'],
                                       0, self.origin, coord_pseudopods)
            else:
                extract_graph_dynamics(self.converted_video, self.coord_network, self.one_descriptor_per_arena['arena'],
                                       0, None, coord_pseudopods)
        del coord_pseudopods
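Converting a network into vertex and edge tables typically starts from a one-pixel-wide skeleton, where endpoints and junctions become graph vertices. A small illustration of that classification by neighbour count (a toy example, not the actual `extract_graph_dynamics` logic):

```python
import numpy as np

# A tiny Y-shaped, one-pixel-wide network skeleton.
skel = np.zeros((7, 7), dtype=np.uint8)
skel[0:4, 3] = 1                    # stem
skel[4, 2] = 1; skel[5, 1] = 1      # left branch
skel[4, 4] = 1; skel[5, 5] = 1      # right branch

# Count the 8-connected neighbours of every pixel.
padded = np.pad(skel, 1)
neigh = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))[1:-1, 1:-1]

# Graph vertices are endpoints (1 neighbour) and junctions (3+ neighbours);
# pixels with exactly 2 neighbours lie along edges.
vertices = np.argwhere((skel == 1) & ((neigh == 1) | (neigh >= 3)))
```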

    def study_cytoscillations(self, show_seg: bool=False):
        """

            Study the cytoskeletal oscillations within a video frame by frame.

            This method performs an analysis of cytoskeletal oscillations in the video,
            identifying regions of influx and efflux based on pixel connectivity.
            It also handles memory allocation for the oscillations video, computes
            connected components, and optionally displays the segmented regions.

            Args:
                show_seg (bool): If True, display the segmentation results.
        """
        if self.vars['save_coord_thickening_slimming'] or self.vars['oscilacyto_analysis']:
            oscillations_video = detect_oscillations_dynamics(self.converted_video, self.binary,
                                                              self.one_descriptor_per_arena['arena'], self.start,
                                                              self.vars['expected_oscillation_period'],
                                                              self.time_interval,
                                                              self.vars['minimal_oscillating_cluster_size'],
                                                              self.vars['min_ram_free'],
                                                              self.vars['lose_accuracy_to_save_memory'],
                                                              self.vars['save_coord_thickening_slimming'])
            del oscillations_video
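Oscillation analysis hinges on recovering a dominant period from intensity traces; `detect_oscillations_dynamics` is assumed to do something far more elaborate per cluster, but the core idea can be sketched with a Fourier transform on a synthetic trace:

```python
import numpy as np

time_interval = 1.0                       # assumed time between frames
t = np.arange(200)
signal = np.sin(2 * np.pi * t / 10.)      # synthetic 10-frame oscillation

# Dominant period from the discrete Fourier transform of the demeaned trace.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(len(signal), d=time_interval)
period = 1. / freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC component
```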


    def fractal_descriptions(self):
        """

        Method for analyzing fractal patterns in binary data.

        Fractal analysis is performed on the binary representation of the data,
        optionally considering network dynamics if specified. The results
        include fractal dimensions, R-values, and box counts for the data.

        If network analysis is enabled, additional fractal dimensions,
        R-values, and box counts are calculated for the inner network.
        If 'output_in_mm' is True, then values in mm can be obtained.

        """
        if self.vars['fractal_analysis']:
            logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Starting fractal analysis.")

            if self.vars['save_coord_network']:
                box_counting_dimensions = np.zeros((self.dims[0], 7), dtype=np.float64)
            else:
                box_counting_dimensions = np.zeros((self.dims[0], 3), dtype=np.float64)

            for t in np.arange(self.dims[0]):
                if self.vars['save_coord_network']:
                    current_network = np.zeros(self.dims[1:], dtype=np.uint8)
                    net_t = self.coord_network[1:, self.coord_network[0, :] == t]
                    current_network[net_t[0], net_t[1]] = 1
                    box_counting_dimensions[t, 0] = current_network.sum()
                    zoomed_binary, side_lengths = prepare_box_counting(self.binary[t, ...], min_mesh_side=self.vars[
                        'fractal_box_side_threshold'], zoom_step=self.vars['fractal_zoom_step'], contours=True)
                    box_counting_dimensions[t, 1], box_counting_dimensions[t, 2], box_counting_dimensions[
                        t, 3] = box_counting_dimension(zoomed_binary, side_lengths)
                    zoomed_binary, side_lengths = prepare_box_counting(current_network,
                                                                       min_mesh_side=self.vars[
                                                                           'fractal_box_side_threshold'],
                                                                       zoom_step=self.vars['fractal_zoom_step'],
                                                                       contours=False)
                    box_counting_dimensions[t, 4], box_counting_dimensions[t, 5], box_counting_dimensions[
                        t, 6] = box_counting_dimension(zoomed_binary, side_lengths)
                else:
                    zoomed_binary, side_lengths = prepare_box_counting(self.binary[t, ...],
                                                                       min_mesh_side=self.vars['fractal_box_side_threshold'],
                                                                       zoom_step=self.vars['fractal_zoom_step'], contours=True)
                    box_counting_dimensions[t, :] = box_counting_dimension(zoomed_binary, side_lengths)

            if self.vars['save_coord_network']:
                self.one_row_per_frame["inner_network_size"] = box_counting_dimensions[:, 0]
                self.one_row_per_frame["fractal_dimension"] = box_counting_dimensions[:, 1]
                self.one_row_per_frame["fractal_r_value"] = box_counting_dimensions[:, 2]
                self.one_row_per_frame["fractal_box_nb"] = box_counting_dimensions[:, 3]
                self.one_row_per_frame["inner_network_fractal_dimension"] = box_counting_dimensions[:, 4]
                self.one_row_per_frame["inner_network_fractal_r_value"] = box_counting_dimensions[:, 5]
                self.one_row_per_frame["inner_network_fractal_box_nb"] = box_counting_dimensions[:, 6]
                if self.vars['output_in_mm']:
                    self.one_row_per_frame["inner_network_size"] *= self.vars['average_pixel_size']
            else:
                self.one_row_per_frame["fractal_dimension"] = box_counting_dimensions[:, 0]
                self.one_row_per_frame["fractal_box_nb"] = box_counting_dimensions[:, 1]
                self.one_row_per_frame["fractal_r_value"] = box_counting_dimensions[:, 2]

            if self.vars['save_coord_network']:
                del self.coord_network
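`prepare_box_counting` and `box_counting_dimension` are assumed to implement a standard box-counting estimate: count occupied boxes at several grid sizes and fit a line in log-log space. A sketch of the principle, where a filled square should yield a dimension near 2 (grid sizes and image are arbitrary):

```python
import numpy as np

def box_count(binary: np.ndarray, side: int) -> int:
    """Number of side x side boxes containing at least one foreground pixel."""
    h, w = binary.shape
    trimmed = binary[:h - h % side, :w - w % side]
    boxes = trimmed.reshape(h // side, side, w // side, side).max((1, 3))
    return int(boxes.sum())

# A filled square: its box-counting dimension should approach 2.
img = np.zeros((64, 64), dtype=np.uint8)
img[8:56, 8:56] = 1

sides = np.array([2, 4, 8])
counts = np.array([box_count(img, s) for s in sides])
# Fit log(count) = -D * log(side) + c; D estimates the fractal dimension.
slope, c = np.polyfit(np.log(sides), np.log(counts), 1)
D = -slope
```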

    def save_efficiency_tests(self):
        """
        Provide images allowing to assess the analysis efficiency

        This method generates two test images used for assessing
        the efficiency of the analysis. It performs various operations on
        video frames to create these images, including copying and manipulating
        frames from the video, detecting contours on binary images,
        and drawing the arena label on the left of the frames.
        """
        # Provide images allowing to assess the analysis efficiency
        if self.dims[0] > 1:
            after_one_tenth_of_time = np.ceil(self.dims[0] / 10).astype(np.uint64)
        else:
            after_one_tenth_of_time = 0

        last_good_detection = self.dims[0] - 1
        if self.dims[0] > self.lost_frames:
            if self.vars['do_threshold_segmentation']:
                last_good_detection -= self.lost_frames
        else:
            last_good_detection = 0
        if self.visu is None:
            if len(self.converted_video.shape) == 3:
                self.converted_video = np.stack((self.converted_video, self.converted_video, self.converted_video),
                                             axis=3)
            self.efficiency_test_1 = self.converted_video[after_one_tenth_of_time, ...].copy()
            self.efficiency_test_2 = self.converted_video[last_good_detection, ...].copy()
        else:
            self.efficiency_test_1 = self.visu[after_one_tenth_of_time, :, :, :].copy()
            self.efficiency_test_2 = self.visu[last_good_detection, :, :, :].copy()

        position = (25, self.dims[1] // 2)
        text = str(self.one_descriptor_per_arena['arena'])
        contours = np.nonzero(get_contours(self.binary[after_one_tenth_of_time, :, :]))
        self.efficiency_test_1[contours[0], contours[1], :] = self.vars['contour_color']
        self.efficiency_test_1 = cv2.putText(self.efficiency_test_1, text, position, cv2.FONT_HERSHEY_SIMPLEX, 1,
                                        (self.vars["contour_color"], self.vars["contour_color"],
                                         self.vars["contour_color"], 255), 3)

        eroded_binary = cv2.erode(self.binary[last_good_detection, :, :], cross_33)
        contours = np.nonzero(self.binary[last_good_detection, :, :] - eroded_binary)
        self.efficiency_test_2[contours[0], contours[1], :] = self.vars['contour_color']
        self.efficiency_test_2 = cv2.putText(self.efficiency_test_2, text, position, cv2.FONT_HERSHEY_SIMPLEX, 1,
                                         (self.vars["contour_color"], self.vars["contour_color"],
                                          self.vars["contour_color"], 255), 3)

    def save_video(self):
        """
        Save processed video with contours and other annotations.

        This method processes the binary image to extract contours, overlay them
        on a video, and save the resulting video file.

        Notes:
            - This method uses OpenCV for image processing and contour extraction.
            - The processed video includes contours colored according to the
              `contour_color` specified in the variables.
            - Additional annotations such as time in minutes are added to each
              frame if applicable.

        """
        if self.vars['save_processed_videos']:
            self.check_converted_video_type()
            if len(self.converted_video.shape) == 3:
                self.converted_video = np.stack((self.converted_video, self.converted_video, self.converted_video),
                                                axis=3)
            for t in np.arange(self.dims[0]):

                eroded_binary = cv2.erode(self.binary[t, :, :], cross_33)
                contours = np.nonzero(self.binary[t, :, :] - eroded_binary)
                self.converted_video[t, contours[0], contours[1], :] = self.vars['contour_color']
                if "iso_digi_transi" in self.one_descriptor_per_arena.keys():
                    if self.vars['iso_digi_analysis']  and not self.vars['several_blob_per_arena'] and not pd.isna(self.one_descriptor_per_arena["iso_digi_transi"]):
                        if self.one_descriptor_per_arena['is_growth_isotropic'] == 1:
                            if t < self.one_descriptor_per_arena["iso_digi_transi"]:
                                self.converted_video[t, contours[0], contours[1], :] = 0, 0, 255
            del self.binary
            del self.surfarea
            del self.borders
            del self.origin
            del self.origin_idx
            del self.mean_intensity_per_frame
            del self.erodila_disk
            collect()
            if self.visu is None:
                true_frame_width = self.dims[2]
                h5_keys = []
                if os.path.isfile(f"ind_{self.one_descriptor_per_arena['arena']}.h5"):
                    h5_keys = get_h5_keys(f"ind_{self.one_descriptor_per_arena['arena']}.h5")
                if 'video' in h5_keys:
                    self.visu = video2numpy(f"ind_{self.one_descriptor_per_arena['arena']}.h5",
                              None, true_frame_width=true_frame_width)
                else:
                    self.visu = self.converted_video
                if len(self.visu.shape) == 3:
                    self.visu = np.stack((self.visu, self.visu, self.visu), axis=3)
            self.converted_video = np.concatenate((self.visu, self.converted_video), axis=2)

            if np.any(self.one_row_per_frame['time'] > 0):
                position = (5, self.dims[1] - 5)
                if self.vars['time_step_is_arbitrary']:
                    time_unit = ""
                else:
                    time_unit = " min"
                for t in np.arange(self.dims[0]):
                    image = self.converted_video[t, ...]
                    text = str(self.one_row_per_frame['time'][t]) + time_unit
                    image = cv2.putText(image,  # numpy array on which text is written
                                    text,  # text
                                    position,  # position at which writing has to start
                                    cv2.FONT_HERSHEY_SIMPLEX,  # font family
                                    1,  # font size
                                    (self.vars["contour_color"], self.vars["contour_color"], self.vars["contour_color"], 255),  #(209, 80, 0, 255),  
                                    2)  # font stroke
                    self.converted_video[t, ...] = image
            vid_name = f"ind_{self.one_descriptor_per_arena['arena']}{self.vars['videos_extension']}"
            write_video(self.converted_video, vid_name, is_color=True, fps=self.vars['video_fps'])

    def save_results(self, with_efficiency_tests: bool = True, with_video:bool = True):
        """
        Save the results of testing and video processing.

        This method handles the saving of efficiency tests, video files,
        and CSV data related to test results. It checks for existing files before writing new data.
        Additionally, it cleans up temporary files if configured to do so.

        Parameters
        ----------
        with_efficiency_tests : bool, optional
            Also save two images showing the analysis efficiency.
        with_video : bool, optional
            Also save a video showing the analysis efficiency.
        """
        if with_efficiency_tests:
            self.save_efficiency_tests()
        if with_video:
            self.save_video()
        if self.vars['several_blob_per_arena']:
            try:
                with open(f"one_row_per_frame_arena{self.one_descriptor_per_arena['arena']}.csv", 'w') as file:
                    self.one_row_per_frame.to_csv(file, sep=';', index=False, lineterminator='\n')
            except PermissionError:
                logging.error(f"Never let one_row_per_frame_arena{self.one_descriptor_per_arena['arena']}.csv open when Cellects runs")

            create_new_csv: bool = False
            if os.path.isfile("one_row_per_arena.csv"):
                try:
                    with open(f"one_row_per_arena.csv", 'r') as file:
                        stats = pd.read_csv(file, header=0, sep=";")
                except PermissionError:
                    logging.error("Never let one_row_per_arena.csv open when Cellects runs")

                if len(self.one_descriptor_per_arena) == len(stats.columns) - 1:
                    try:
                        with open(f"one_row_per_arena.csv", 'w') as file:
                            stats.iloc[(self.one_descriptor_per_arena['arena'] - 1), 1:] = self.one_descriptor_per_arena.values()
                            stats.to_csv(file, sep=';', index=False, lineterminator='\n')
                    except PermissionError:
                        logging.error("Never let one_row_per_arena.csv open when Cellects runs")
                else:
                    create_new_csv = True
            else:
                create_new_csv = True
            if create_new_csv:
                with open(f"one_row_per_arena.csv", 'w') as file:
                    stats = pd.DataFrame(np.zeros((len(self.vars['analyzed_individuals']), len(self.one_descriptor_per_arena))),
                               columns=list(self.one_descriptor_per_arena.keys()))
                    stats.iloc[(self.one_descriptor_per_arena['arena'] - 1), :] = self.one_descriptor_per_arena.values()
                    stats.to_csv(file, sep=';', index=False, lineterminator='\n')
        if not self.vars['keep_unaltered_videos'] and os.path.isfile(f"ind_{self.one_descriptor_per_arena['arena']}.h5"):
            remove_h5_key(f"ind_{self.one_descriptor_per_arena['arena']}.h5", 'video')

    def change_results_of_one_arena(self, save_video: bool = True):
        """
        Manages the saving and updating of CSV files based on data extracted from analyzed
        one arena. Specifically handles three CSV files: "one_row_per_arena.csv",
        "one_row_per_frame.csv".
        Each file is updated or created based on the presence of existing data.
        The method ensures that each CSV file contains the relevant information for
        the given arena, frame, and oscillator cluster data.
        """
        if save_video:
            self.save_video()
        # I/ Update/Create one_row_per_arena.csv
        create_new_csv: bool = False
        new_stats = pd.DataFrame(np.zeros((len(self.vars['analyzed_individuals']), len(self.one_descriptor_per_arena))),
                             columns=list(self.one_descriptor_per_arena.keys()))
        new_stats.iloc[(self.one_descriptor_per_arena['arena'] - 1), :] = self.one_descriptor_per_arena.values()
        if os.path.isfile("one_row_per_arena.csv"):
            try:
                with open(f"one_row_per_arena.csv", 'r') as file:
                    stats = pd.read_csv(file, header=0, sep=";")
                for stat_name, stat_value in new_stats.items():
                    if stat_name in stats.columns:
                        stat_type = stats[stat_name].dtypes
                        if new_stats[stat_name].dtype != stat_type:
                            new_stats[stat_name] = new_stats[stat_name].astype(stat_type)
                        stats.loc[(new_stats['arena'] - 1).astype(int), stat_name] = new_stats[stat_name].values
                with open(f"one_row_per_arena.csv", 'w') as file:
                    stats.to_csv(file, sep=';', index=False, lineterminator='\n')
            except PermissionError:
                logging.error("Never let one_row_per_arena.csv open when Cellects runs")
            except Exception as e:
                logging.error(f"{e}")
                create_new_csv = True
        else:
            create_new_csv = True
        if create_new_csv:
            logging.info("Create a new one_row_per_arena.csv file")
            try:
                with open(f"one_row_per_arena.csv", 'w') as file:
                    new_stats.to_csv(file, sep=';', index=False, lineterminator='\n')
            except PermissionError:
                logging.error("Never let one_row_per_arena.csv open when Cellects runs")

        # II/ Update/Create one_row_per_frame.csv
        create_new_csv = False
        if os.path.isfile("one_row_per_frame.csv"):
            try:
                with open(f"one_row_per_frame.csv", 'r') as file:
                    descriptors = pd.read_csv(file, header=0, sep=";")
                for stat_name, stat_value in self.one_row_per_frame.items():
                    if stat_name in descriptors.columns:
                        stat_type = descriptors[stat_name].dtypes
                        if self.one_row_per_frame[stat_name].dtypes != stat_type:
                            self.one_row_per_frame[stat_name] = self.one_row_per_frame[stat_name].astype(stat_type)
                        descriptors.loc[((self.one_descriptor_per_arena['arena'] - 1) * self.dims[0]):((self.one_descriptor_per_arena['arena']) * self.dims[0] - 1), stat_name] = self.one_row_per_frame.loc[:, stat_name].values[:]
                with open(f"one_row_per_frame.csv", 'w') as file:
                    descriptors.to_csv(file, sep=';', index=False, lineterminator='\n')
            except PermissionError:
                logging.error("Never let one_row_per_frame.csv open when Cellects runs")
            except Exception as e:
                logging.error(f"{e}")
                create_new_csv = True
        else:
            create_new_csv = True
        if create_new_csv:
            logging.info("Create a new one_row_per_frame.csv file")
            try:
                with open(f"one_row_per_frame.csv", 'w') as file:
                    descriptors = pd.DataFrame(np.zeros((len(self.vars['analyzed_individuals']) * self.dims[0], len(self.one_row_per_frame.columns))),
                               columns=list(self.one_row_per_frame.keys()))
                    descriptors.iloc[((self.one_descriptor_per_arena['arena'] - 1) * self.dims[0]):((self.one_descriptor_per_arena['arena']) * self.dims[0]), :] = self.one_row_per_frame
                    descriptors.to_csv(file, sep=';', index=False, lineterminator='\n')
            except PermissionError:
                logging.error("Never let one_row_per_frame.csv open when Cellects runs")

__init__(l)

Analyzes motion in a given arena using video data.

This class processes video frames to analyze motion within a specified area, detecting shapes, measuring covering durations, and generating descriptors for further analysis.

Args:
    l (list): A list containing the parameters and flags necessary for the motion analysis.
        l[0] (int): Arena index.
        l[1] (str): Arena identifier or name, stored in one_descriptor_per_arena['arena'].
        l[2] (dict): Variables required for the analysis, stored in vars.
        l[3] (bool): Flag to detect shape.
        l[4] (bool): Flag to analyze shape.
        l[5] (bool): Flag to show segmentation.
        l[6] (None or list): Videos already in RAM.

Attributes:
    vars (dict): Variables required for the analysis.
    visu (None): Placeholder for visualization data.
    binary (None): Placeholder for binary segmentation data.
    origin_idx (None): Placeholder for the index of the first frame.
    smoothing_flag (bool): Flag indicating whether smoothing should be applied.
    dims (tuple): Dimensions of the converted video.
    segmentation (ndarray): Array storing segmentation data.
    covering_intensity (ndarray): Intensity values for covering analysis.
    mean_intensity_per_frame (ndarray): Mean intensity per frame.
    borders (object): Borders of the arena.
    pixel_ring_depth (int): Depth of the pixel ring for analysis, default 9.
    step (int): Step size for processing, default 10.
    lost_frames (int): Number of lost frames to account for, default 10.
    start (None or int): Starting frame index for the analysis.

Methods:
    load_images_and_videos(videos_already_in_ram, arena_idx): Loads images and videos for the specified arena index.
    update_ring_width(): Updates the width of the pixel ring for analysis.
    get_origin_shape(): Detects the origin shape in the video frames.
    get_covering_duration(step): Calculates the covering duration based on a step size.
    detection(): Performs motion detection within the arena.
    initialize_post_processing(): Initializes post-processing steps.
    update_shape(show_seg): Updates the shape based on segmentation and visualization flags.
    get_descriptors_from_binary(): Extracts descriptors from binary data.
    detect_growth_transitions(): Detects growth transitions in the data.
    networks_analysis(show_seg): Detects networks within the arena based on segmentation visualization.
    study_cytoscillations(show_seg): Studies cytoscillations within the arena with segmentation visualization.
    fractal_descriptions(): Generates fractal descriptions of the analyzed data.
    get_descriptors_summary(): Summarizes the descriptors obtained from the analysis.
    save_results(): Saves the results of the analysis.
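Since the constructor takes a single positional list rather than named arguments, the `l` argument has to be packed in the exact order documented above. A minimal sketch (the variable values and the content of the vars dictionary are placeholders, not the full set of keys the analysis requires):

```python
# Hypothetical packing of the positional list `l` expected by
# MotionAnalysis.__init__; every value below is illustrative.
arena_idx = 0                              # l[0]: arena index
arena_name = 1                             # l[1]: arena identifier
vars_dict = {"arena_shape": "circle"}      # l[2]: analysis variables (truncated)
detect_shape = True                        # l[3]: run detection
analyse_shape = False                      # l[4]: skip descriptor extraction
show_seg = False                           # l[5]: no live visualization
videos_in_ram = None                       # l[6]: load videos from disk

l = [arena_idx, arena_name, vars_dict, detect_shape,
     analyse_shape, show_seg, videos_in_ram]
# analysis = MotionAnalysis(l)  # would run the full pipeline
```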

Source code in src/cellects/core/motion_analysis.py
def __init__(self, l: list):

    """
    Analyzes motion in a given arena using video data.

    This class processes video frames to analyze motion within a specified area,
    detecting shapes, covering durations, and generating descriptors for further
    analysis.

    Args:
        l (list): A list containing various parameters and flags necessary for the motion
            analysis.

    Args:
        l[0] (int): Arena index.
        l[1] (str): Arena identifier or name, stored in one_descriptor_per_arena['arena'].
        l[2] (dict): Variables required for the analysis, stored in vars.
        l[3] (bool): Flag to detect shape.
        l[4] (bool): Flag to analyze shape.
        l[5] (bool): Flag to show segmentation.
        l[6] (None or list): Videos already in RAM.

    Attributes:
        vars (dict): Variables required for the analysis.
        visu (None): Placeholder for visualization data.
        binary (None): Placeholder for binary segmentation data.
        origin_idx (None): Placeholder for the index of the first frame.
        smoothing_flag (bool): Flag to indicate if smoothing should be applied.
        dims (tuple): Dimensions of the converted video.
        segmentation (ndarray): Array to store segmentation data.
        covering_intensity (ndarray): Intensity values for covering analysis.
        mean_intensity_per_frame (ndarray): Mean intensity per frame.
        borders (object): Borders of the arena.
        pixel_ring_depth (int): Depth of the pixel ring for analysis, default is 9.
        step (int): Step size for processing, default is 10.
        lost_frames (int): Number of lost frames to account for, default is 10.
        start (None or int): Starting frame index for the analysis.

    Methods:
        load_images_and_videos(videos_already_in_ram, arena_idx): Loads images and videos
            for the specified arena index.
        update_ring_width(): Updates the width of the pixel ring for analysis.
        get_origin_shape(): Detects the origin shape in the video frames.
        get_covering_duration(step): Calculates the covering duration based on a step size.
        detection(): Performs motion detection within the arena.
        initialize_post_processing(): Initializes post-processing steps.
        update_shape(show_seg): Updates the shape based on segmentation and visualization flags.
        get_descriptors_from_binary(): Extracts descriptors from binary data.
        detect_growth_transitions(): Detects growth transitions in the data.
        networks_analysis(show_seg): Detects networks within the arena based on segmentation
            visualization.
        study_cytoscillations(show_seg): Studies cytoscillations within the arena with
            segmentation visualization.
        fractal_descriptions(): Generates fractal descriptions of the analyzed data.
        get_descriptors_summary(): Summarizes the descriptors obtained from the analysis.
        save_results(): Saves the results of the analysis.

    """
    self.one_descriptor_per_arena = {}
    self.one_descriptor_per_arena['arena'] = l[1]
    vars = l[2]
    detect_shape = l[3]
    analyse_shape = l[4]
    show_seg = l[5]
    videos_already_in_ram = l[6]
    self.visu = None
    self.binary = None
    self.origin_idx = None
    self.smoothing_flag: bool = False
    self.drift_mask_coord = None
    self.substantial_image = None
    self.segmented = None
    self.covering_intensity = None
    self.gravity_field = None
    self.rays = None
    self.sun = None
    self.holes = None
    self.coord_network = None
    self.vars = vars
    if not 'contour_color' in self.vars:
        self.vars['contour_color']: np.uint8 = 0
    self.load_images_and_videos(videos_already_in_ram, l[0])

    self.segmented = np.zeros(self.dims, dtype=np.uint8)

    self.covering_intensity = np.zeros(self.dims[1:], dtype=np.float64)
    self.mean_intensity_per_frame = np.mean(self.converted_video, (1, 2))

    self.borders = image_borders(self.dims[1:], shape=self.vars['arena_shape'])
    self.pixel_ring_depth = 9
    self.step: int = 10
    self.lost_frames = 10
    self.update_ring_width()

    self.start = None
    if detect_shape:
        self.assess_motion_detection()
        if self.start is not None:
            self.detection()
            self.initialize_post_processing()
            self.t = self.start
            while self.t < self.dims[0]:  #200:
                self.update_shape(show_seg)
            #

        if analyse_shape:
            self.get_descriptors_from_binary()
            self.detect_growth_transitions()
            self.networks_analysis(show_seg)
            self.study_cytoscillations(show_seg)
            self.fractal_descriptions()
            if videos_already_in_ram is None:
                self.save_results()

assess_motion_detection()

Assess if a motion can be detected using the current parameters.

Validate the specimen(s) detected in the first frame and evaluate roughly how growth occurs during the video.

Source code in src/cellects/core/motion_analysis.py
def assess_motion_detection(self):
    """
    Assess if a motion can be detected using the current parameters.

    Validate the specimen(s) detected in the first frame and evaluate roughly how growth occurs during the video.
    """
    # Here, two conditional layers allow detecting whether an expansion/exploration occurred.
    self.get_origin_shape()
    # The first, user-defined, is the 'first_move_threshold'; the second is the detection of the
    # substantial image: if either of them is not detected, the program considers there is no motion.
    if self.dims[0] >= 40:
        step = self.dims[0] // 20
    else:
        step = 1
    if self.dims[0] == 1 or self.start >= (self.dims[0] - step - 1):
        self.start = None
        self.binary = np.repeat(np.expand_dims(self.origin, 0), self.converted_video.shape[0], axis=0)
    else:
        self.get_covering_duration(step)

change_results_of_one_arena(save_video=True)

Manages the saving and updating of CSV files based on data extracted from one analyzed arena. Specifically handles two CSV files: "one_row_per_arena.csv" and "one_row_per_frame.csv". Each file is updated or created depending on whether existing data is present. The method ensures that each CSV file contains the relevant information for the given arena, frame, and oscillator cluster data.

Source code in src/cellects/core/motion_analysis.py
def change_results_of_one_arena(self, save_video: bool = True):
    """
    Manages the saving and updating of CSV files based on data extracted from one
    analyzed arena. Specifically handles two CSV files: "one_row_per_arena.csv"
    and "one_row_per_frame.csv".
    Each file is updated or created based on the presence of existing data.
    The method ensures that each CSV file contains the relevant information for
    the given arena, frame, and oscillator cluster data.
    """
    if save_video:
        self.save_video()
    # I/ Update/Create one_row_per_arena.csv
    create_new_csv: bool = False
    new_stats = pd.DataFrame(np.zeros((len(self.vars['analyzed_individuals']), len(self.one_descriptor_per_arena))),
                         columns=list(self.one_descriptor_per_arena.keys()))
    new_stats.iloc[(self.one_descriptor_per_arena['arena'] - 1), :] = self.one_descriptor_per_arena.values()
    if os.path.isfile("one_row_per_arena.csv"):
        try:
            with open(f"one_row_per_arena.csv", 'r') as file:
                stats = pd.read_csv(file, header=0, sep=";")
            for stat_name, stat_value in new_stats.items():
                if stat_name in stats.columns:
                    stat_type = stats[stat_name].dtypes
                    if new_stats[stat_name].dtype != stat_type:
                        new_stats[stat_name] = new_stats[stat_name].astype(stat_type)
                    stats.loc[(new_stats['arena'] - 1).astype(int), stat_name] = new_stats[stat_name].values
            with open(f"one_row_per_arena.csv", 'w') as file:
                stats.to_csv(file, sep=';', index=False, lineterminator='\n')
        except PermissionError:
            logging.error("Never let one_row_per_arena.csv open when Cellects runs")
        except Exception as e:
            logging.error(f"{e}")
            create_new_csv = True
    else:
        create_new_csv = True
    if create_new_csv:
        logging.info("Create a new one_row_per_arena.csv file")
        try:
            with open(f"one_row_per_arena.csv", 'w') as file:
                new_stats.to_csv(file, sep=';', index=False, lineterminator='\n')
        except PermissionError:
            logging.error("Never let one_row_per_arena.csv open when Cellects runs")

    # II/ Update/Create one_row_per_frame.csv
    create_new_csv = False
    if os.path.isfile("one_row_per_frame.csv"):
        try:
            with open(f"one_row_per_frame.csv", 'r') as file:
                descriptors = pd.read_csv(file, header=0, sep=";")
            for stat_name, stat_value in self.one_row_per_frame.items():
                if stat_name in descriptors.columns:
                    stat_type = descriptors[stat_name].dtypes
                    if self.one_row_per_frame[stat_name].dtypes != stat_type:
                        self.one_row_per_frame[stat_name] = self.one_row_per_frame[stat_name].astype(stat_type)
                    descriptors.loc[((self.one_descriptor_per_arena['arena'] - 1) * self.dims[0]):((self.one_descriptor_per_arena['arena']) * self.dims[0] - 1), stat_name] = self.one_row_per_frame.loc[:, stat_name].values[:]
            with open(f"one_row_per_frame.csv", 'w') as file:
                descriptors.to_csv(file, sep=';', index=False, lineterminator='\n')
        except PermissionError:
            logging.error("Never let one_row_per_frame.csv open when Cellects runs")
        except Exception as e:
            logging.error(f"{e}")
            create_new_csv = True
    else:
        create_new_csv = True
    if create_new_csv:
        logging.info("Create a new one_row_per_frame.csv file")
        try:
            with open(f"one_row_per_frame.csv", 'w') as file:
                descriptors = pd.DataFrame(np.zeros((len(self.vars['analyzed_individuals']) * self.dims[0], len(self.one_row_per_frame.columns))),
                           columns=list(self.one_row_per_frame.keys()))
                descriptors.iloc[((self.one_descriptor_per_arena['arena'] - 1) * self.dims[0]):((self.one_descriptor_per_arena['arena']) * self.dims[0]), :] = self.one_row_per_frame
                descriptors.to_csv(file, sep=';', index=False, lineterminator='\n')
        except PermissionError:
            logging.error("Never let one_row_per_frame.csv open when Cellects runs")
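The update branch above follows a dtype-aligned row-update pattern: read the existing table, coerce each incoming column to the stored dtype, then overwrite the row at index `arena - 1`. A self-contained sketch of that pattern (using an in-memory CSV; column names are illustrative):

```python
import io

import pandas as pd


def update_arena_row(existing_csv: str, new_stats: pd.DataFrame) -> pd.DataFrame:
    """Illustrative re-implementation of the CSV update branch: align dtypes
    column by column, then write the new values into row (arena - 1)."""
    stats = pd.read_csv(io.StringIO(existing_csv), sep=";")
    for name in new_stats.columns:
        if name in stats.columns:
            target_dtype = stats[name].dtypes
            col = new_stats[name]
            if col.dtype != target_dtype:
                col = col.astype(target_dtype)
            # Arena numbering is 1-based; row positions are 0-based.
            stats.loc[(new_stats["arena"] - 1).astype(int), name] = col.values
    return stats


csv_text = "arena;area\n1;0.0\n2;0.0\n"
new = pd.DataFrame({"arena": [2], "area": [3.5]})
updated = update_arena_row(csv_text, new)
```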

check_converted_video_type()

Check if the converted video type is uint8 and normalize it if necessary.

Source code in src/cellects/core/motion_analysis.py
def check_converted_video_type(self):
    """
    Check if the converted video type is uint8 and normalize it if necessary.
    """
    if self.converted_video.dtype != "uint8":
        self.converted_video = bracket_to_uint8_image_contrast(self.converted_video)
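A plausible sketch of such a normalization, assuming `bracket_to_uint8_image_contrast` performs a min-max rescale to the full 0-255 range (an assumption about its behavior, not the library's exact implementation):

```python
import numpy as np


def to_uint8_contrast(video: np.ndarray) -> np.ndarray:
    """Assumed behavior: stretch the value range to [0, 255] and cast to
    uint8. Sketch only; the real bracket_to_uint8_image_contrast may clip
    outliers differently."""
    v = video.astype(np.float64)
    v -= v.min()
    peak = v.max()
    if peak > 0:
        v *= 255.0 / peak
    return v.astype(np.uint8)


frames = np.array([[0.2, 0.4], [0.6, 1.0]])
out = to_uint8_contrast(frames)
```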

detect_growth_transitions()

Detect growth transitions in a biological image processing context.

Analyzes the growth transitions of a shape within an arena, determining whether growth is isotropic and identifying any breaking points.

Notes: This method modifies the one_descriptor_per_arena dictionary in place to include growth transition information.

Source code in src/cellects/core/motion_analysis.py
def detect_growth_transitions(self):
    """
    Detect growth transitions in a biological image processing context.

    Analyzes the growth transitions of a shape within an arena, determining
    whether growth is isotropic and identifying any breaking points.

    Notes:
        This method modifies the `one_descriptor_per_arena` dictionary in place
        to include growth transition information.

    """
    if self.vars['iso_digi_analysis'] and not self.vars['several_blob_per_arena']:
        self.one_descriptor_per_arena['iso_digi_transi'] = pd.NA
        if not pd.isna(self.one_descriptor_per_arena['first_move']):
            logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Starting growth transition analysis.")

            # II) Once a pseudopod is deployed, look for a disk around the original shape
            growth_begining = self.surfarea < ((self.surfarea[0] * 1.2) + ((self.dims[1] / 4) * (self.dims[2] / 4)))
            dilated_origin = cv2.dilate(self.binary[self.one_descriptor_per_arena['first_move'], :, :], kernel=cross_33, iterations=10, borderType=cv2.BORDER_CONSTANT, borderValue=0)
            isisotropic = np.sum(self.binary[:, :, :] * dilated_origin, (1, 2))
            isisotropic *= growth_begining
            # Ask if the dilated origin area is 90% covered during the growth beginning
            isisotropic = isisotropic > 0.9 * dilated_origin.sum()
            if np.any(isisotropic):
                self.one_descriptor_per_arena['is_growth_isotropic'] = 1
                # Determine a solidity reference to look for a potential breaking of the isotropic growth
                if self.compute_solidity_separately:
                    solidity_reference = np.mean(self.solidity[:self.one_descriptor_per_arena['first_move']])
                    different_solidity = self.solidity < (0.9 * solidity_reference)
                    del self.solidity
                else:
                    solidity_reference = np.mean(
                        self.one_row_per_frame.iloc[:(self.one_descriptor_per_arena['first_move']), :]["solidity"])
                    different_solidity = self.one_row_per_frame["solidity"].values < (0.9 * solidity_reference)
                # Make sure that isotropic breaking does not occur before isotropic growth
                if np.any(different_solidity):
                    self.one_descriptor_per_arena["iso_digi_transi"] = np.nonzero(different_solidity)[0][0] * self.time_interval
            else:
                self.one_descriptor_per_arena['is_growth_isotropic'] = 0
        else:
            self.one_descriptor_per_arena['is_growth_isotropic'] = pd.NA
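The 90% coverage criterion used above can be distilled into a standalone sketch. The dilation step is omitted here: `dilated_origin` is assumed to be supplied precomputed (in the source it comes from `cv2.dilate` on the first-move frame), and the function names are illustrative:

```python
import numpy as np


def is_isotropic(binary_video: np.ndarray, dilated_origin: np.ndarray,
                 growth_beginning: np.ndarray) -> bool:
    """Sketch of the isotropy test: at some frame during the beginning of
    growth, the shape must cover more than 90% of the dilated origin mask."""
    # Per-frame overlap between the shape and the dilated origin.
    coverage = np.sum(binary_video * dilated_origin, axis=(1, 2))
    coverage = coverage * growth_beginning  # keep only early-growth frames
    return bool(np.any(coverage > 0.9 * dilated_origin.sum()))


origin = np.zeros((5, 5), dtype=np.uint8)
origin[1:4, 1:4] = 1                       # 9-pixel dilated origin
video = np.zeros((3, 5, 5), dtype=np.uint8)
video[2, 1:4, 1:4] = 1                     # fully covered at t = 2
early = np.array([1, 1, 1])                # all frames count as "early"
```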

detection(compute_all_possibilities=False)

Perform frame-by-frame or luminosity-based segmentation on video data to detect cell motion and growth.

This function processes video frames using either frame-by-frame segmentation or luminosity-based segmentation algorithms to detect cell motion and growth. It handles drift correction, adjusts parameters based on configuration settings, and applies logical operations to combine results from different segmentation methods.

Parameters:

    compute_all_possibilities (bool, optional): Flag to determine whether all segmentation possibilities should be computed. Default is False.

Returns:

    None

Notes

This function modifies the instance variables self.segmented, self.converted_video, and potentially self.luminosity_segmentation and self.gradient_segmentation. Depending on the configuration settings, it performs various segmentation algorithms and updates the instance variables accordingly.

Source code in src/cellects/core/motion_analysis.py
def detection(self, compute_all_possibilities: bool=False):
    """
    Perform frame-by-frame or luminosity-based segmentation on video data to detect cell motion and growth.

    This function processes video frames using either frame-by-frame segmentation or luminosity-based
    segmentation algorithms to detect cell motion and growth. It handles drift correction, adjusts parameters
    based on configuration settings, and applies logical operations to combine results from different segmentation
    methods.

    Parameters
    ----------
    compute_all_possibilities : bool, optional
        Flag to determine if all segmentation possibilities should be computed, by default False

    Returns
    -------
    None

    Notes
    -----
    This function modifies the instance variables `self.segmented`, `self.converted_video`,
    and potentially `self.luminosity_segmentation` and `self.gradient_segmentation`.
    Depending on the configuration settings, it performs various segmentation algorithms and updates
    the instance variables accordingly.

    """
    if self.start is None:
        self.start = 1
    else:
        self.start = np.max((self.start, 1))
    self.lost_frames = np.min((self.step, self.dims[0] // 4))
    # I/ Image by image segmentation algorithms
    # If images contain a drift correction (zeros at borders of the image,
    # Replace these 0 by normal background values before segmenting
    if self.vars['frame_by_frame_segmentation'] or compute_all_possibilities:
        logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Detect cell motion and growth using the frame by frame segmentation algorithm")
        self.segmented = np.zeros(self.dims, dtype=np.uint8)
        for t in np.arange(self.dims[0]):#20):#
            analysisi = self.frame_by_frame_segmentation(t, self.segmented[t - 1, ...])
            self.segmented[t, ...] = analysisi.binary_image

            if self.vars['lose_accuracy_to_save_memory']:
                self.converted_video[t, ...] = bracket_to_uint8_image_contrast(analysisi.image)
            else:
                self.converted_video[t, ...] = analysisi.image
            if self.vars['convert_for_motion']['logical'] != 'None':
                if self.vars['lose_accuracy_to_save_memory']:
                    self.converted_video2[t, ...] = bracket_to_uint8_image_contrast(analysisi.image2)
                else:
                    self.converted_video2[t, ...] = analysisi.image2

    if self.vars['color_number'] == 2:
        luminosity_segmentation, l_threshold_over_time = self.lum_value_segmentation(self.converted_video, do_threshold_segmentation=self.vars['do_threshold_segmentation'] or compute_all_possibilities)
        self.converted_video = self.smooth_pixel_slopes(self.converted_video)
        gradient_segmentation = None
        if self.vars['do_slope_segmentation'] or compute_all_possibilities:
            gradient_segmentation = self.lum_slope_segmentation(self.converted_video)
            if gradient_segmentation is not None:
                gradient_segmentation[-self.lost_frames:, ...] = np.repeat(gradient_segmentation[-self.lost_frames, :, :][np.newaxis, :, :], self.lost_frames, axis=0)
        if self.vars['convert_for_motion']['logical'] != 'None':
            if self.vars['do_threshold_segmentation'] or compute_all_possibilities:
                luminosity_segmentation2, l_threshold_over_time2 = self.lum_value_segmentation(self.converted_video2, do_threshold_segmentation=True)
                if luminosity_segmentation is None:
                    luminosity_segmentation = luminosity_segmentation2
                if luminosity_segmentation is not None:
                    if self.vars['convert_for_motion']['logical'] == 'Or':
                        luminosity_segmentation = np.logical_or(luminosity_segmentation, luminosity_segmentation2)
                    elif self.vars['convert_for_motion']['logical'] == 'And':
                        luminosity_segmentation = np.logical_and(luminosity_segmentation, luminosity_segmentation2)
                    elif self.vars['convert_for_motion']['logical'] == 'Xor':
                        luminosity_segmentation = np.logical_xor(luminosity_segmentation, luminosity_segmentation2)
            self.converted_video2 = self.smooth_pixel_slopes(self.converted_video2)
            if self.vars['do_slope_segmentation'] or compute_all_possibilities:
                gradient_segmentation2 = self.lum_slope_segmentation(self.converted_video2)
                if gradient_segmentation2 is not None:
                    gradient_segmentation2[-self.lost_frames:, ...] = np.repeat(gradient_segmentation2[-self.lost_frames, :, :][np.newaxis, :, :], self.lost_frames, axis=0)
                if gradient_segmentation is None:
                    gradient_segmentation = gradient_segmentation2
                if gradient_segmentation is not None:
                    if self.vars['convert_for_motion']['logical'] == 'Or':
                        gradient_segmentation = np.logical_or(gradient_segmentation, gradient_segmentation2)
                    elif self.vars['convert_for_motion']['logical'] == 'And':
                        gradient_segmentation = np.logical_and(gradient_segmentation, gradient_segmentation2)
                    elif self.vars['convert_for_motion']['logical'] == 'Xor':
                        gradient_segmentation = np.logical_xor(gradient_segmentation, gradient_segmentation2)

        if compute_all_possibilities:
            logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Compute all options to detect cell motion and growth. Maximal growth per frame: {self.vars['maximal_growth_factor']}")
            if luminosity_segmentation is not None:
                self.luminosity_segmentation = np.nonzero(luminosity_segmentation)
            if gradient_segmentation is not None:
                self.gradient_segmentation = np.nonzero(gradient_segmentation)
            if luminosity_segmentation is not None and gradient_segmentation is not None:
                self.logical_and = np.nonzero(np.logical_and(luminosity_segmentation, gradient_segmentation))
                self.logical_or = np.nonzero(np.logical_or(luminosity_segmentation, gradient_segmentation))
        elif not self.vars['frame_by_frame_segmentation']:
            if self.vars['do_threshold_segmentation'] and not self.vars['do_slope_segmentation']:
                logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Detect with luminosity threshold segmentation algorithm")
                self.segmented = luminosity_segmentation
            if self.vars['do_slope_segmentation']:
                logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Detect with luminosity slope segmentation algorithm")
                self.segmented = gradient_segmentation
            if np.logical_and(self.vars['do_threshold_segmentation'], self.vars['do_slope_segmentation']):
                if self.vars['true_if_use_light_AND_slope_else_OR']:
                        logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Detection results from threshold AND slope segmentation algorithms")
                    if luminosity_segmentation is not None and gradient_segmentation is not None:
                        self.segmented = np.logical_and(luminosity_segmentation, gradient_segmentation)
                else:
                        logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Detection results from threshold OR slope segmentation algorithms")
                    if luminosity_segmentation is not None and gradient_segmentation is not None:
                        self.segmented = np.logical_or(luminosity_segmentation, gradient_segmentation)
            self.segmented = self.segmented.astype(np.uint8)

fractal_descriptions()

Method for analyzing fractal patterns in binary data.

Fractal analysis is performed on the binary representation of the data, optionally considering network dynamics if specified. The results include fractal dimensions, R-values, and box counts for the data.

If network analysis is enabled, additional fractal dimensions, R-values, and box counts are calculated for the inner network. If 'output_in_mm' is True, then values in mm can be obtained.

Source code in src/cellects/core/motion_analysis.py
def fractal_descriptions(self):
    """

    Method for analyzing fractal patterns in binary data.

    Fractal analysis is performed on the binary representation of the data,
    optionally considering network dynamics if specified. The results
    include fractal dimensions, R-values, and box counts for the data.

    If network analysis is enabled, additional fractal dimensions,
    R-values, and box counts are calculated for the inner network.
    If 'output_in_mm' is True, then values in mm can be obtained.

    """
    if self.vars['fractal_analysis']:
        logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Starting fractal analysis.")

        if self.vars['save_coord_network']:
            box_counting_dimensions = np.zeros((self.dims[0], 7), dtype=np.float64)
        else:
            box_counting_dimensions = np.zeros((self.dims[0], 3), dtype=np.float64)

        for t in np.arange(self.dims[0]):
            if self.vars['save_coord_network']:
                current_network = np.zeros(self.dims[1:], dtype=np.uint8)
                net_t = self.coord_network[1:, self.coord_network[0, :] == t]
                current_network[net_t[0], net_t[1]] = 1
                box_counting_dimensions[t, 0] = current_network.sum()
                zoomed_binary, side_lengths = prepare_box_counting(self.binary[t, ...], min_mesh_side=self.vars[
                    'fractal_box_side_threshold'], zoom_step=self.vars['fractal_zoom_step'], contours=True)
                box_counting_dimensions[t, 1], box_counting_dimensions[t, 2], box_counting_dimensions[
                    t, 3] = box_counting_dimension(zoomed_binary, side_lengths)
                zoomed_binary, side_lengths = prepare_box_counting(current_network,
                                                                   min_mesh_side=self.vars[
                                                                       'fractal_box_side_threshold'],
                                                                   zoom_step=self.vars['fractal_zoom_step'],
                                                                   contours=False)
                box_counting_dimensions[t, 4], box_counting_dimensions[t, 5], box_counting_dimensions[
                    t, 6] = box_counting_dimension(zoomed_binary, side_lengths)
            else:
                zoomed_binary, side_lengths = prepare_box_counting(self.binary[t, ...],
                                                                   min_mesh_side=self.vars['fractal_box_side_threshold'],
                                                                   zoom_step=self.vars['fractal_zoom_step'], contours=True)
                box_counting_dimensions[t, :] = box_counting_dimension(zoomed_binary, side_lengths)

        if self.vars['save_coord_network']:
            self.one_row_per_frame["inner_network_size"] = box_counting_dimensions[:, 0]
            self.one_row_per_frame["fractal_dimension"] = box_counting_dimensions[:, 1]
            self.one_row_per_frame["fractal_r_value"] = box_counting_dimensions[:, 2]
            self.one_row_per_frame["fractal_box_nb"] = box_counting_dimensions[:, 3]
            self.one_row_per_frame["inner_network_fractal_dimension"] = box_counting_dimensions[:, 4]
            self.one_row_per_frame["inner_network_fractal_r_value"] = box_counting_dimensions[:, 5]
            self.one_row_per_frame["inner_network_fractal_box_nb"] = box_counting_dimensions[:, 6]
            if self.vars['output_in_mm']:
                self.one_row_per_frame["inner_network_size"] *= self.vars['average_pixel_size']
        else:
            self.one_row_per_frame["fractal_dimension"] = box_counting_dimensions[:, 0]
            self.one_row_per_frame["fractal_box_nb"] = box_counting_dimensions[:, 1]
            self.one_row_per_frame["fractal_r_value"] = box_counting_dimensions[:, 2]

        if self.vars['save_coord_network']:
            del self.coord_network

frame_by_frame_segmentation(t, previous_binary_image=None)

Frame-by-frame segmentation of a video.

Parameters:

    t (int): The time index of the frame to process.
    previous_binary_image (NDArray, optional): The binary image from the previous frame. Default is None.

Returns:

    OneImageAnalysis: An object containing the analysis of the current frame.

Source code in src/cellects/core/motion_analysis.py
def frame_by_frame_segmentation(self, t: int, previous_binary_image: NDArray=None):
    """

    Frame-by-frame segmentation of a video.

    Parameters
    ----------
    t : int
        The time index of the frame to process.
    previous_binary_image : NDArray, optional
        The binary image from the previous frame. Default is `None`.

    Returns
    -------
    OneImageAnalysis
        An object containing the analysis of the current frame.
    """
    contrasted_im = bracket_to_uint8_image_contrast(self.converted_video[t, :, :])
    # 1. Get the mask valid for a number of images around it (step).
    allowed_window = None
    if self.vars['drift_already_corrected']:
        half_step = np.ceil(self.step / 2).astype(int)
        t_start = t - half_step
        t_end = t + half_step
        t_start = np.max((t_start, 0))
        t_end = np.min((t_end, self.dims[0]))
        min_y, max_y = np.max(self.drift_mask_coord[t_start:t_end, 0]), np.min(self.drift_mask_coord[t_start:t_end, 1])
        min_x, max_x = np.max(self.drift_mask_coord[t_start:t_end, 2]), np.min(self.drift_mask_coord[t_start:t_end, 3])
        allowed_window = min_y, max_y, min_x, max_x

    analysisi = OneImageAnalysis(contrasted_im)
    if self.vars['convert_for_motion']['logical'] != 'None':
        contrasted_im2 = bracket_to_uint8_image_contrast(self.converted_video2[t, :, :])
        analysisi.image2 = contrasted_im2

    if previous_binary_image is None or t == 0:
        analysisi.previous_binary_image = self.origin
    else:
        analysisi.previous_binary_image = previous_binary_image

    analysisi.segmentation(self.vars['convert_for_motion']['logical'], self.vars['color_number'],
                           bio_label=self.vars["bio_label"], bio_label2=self.vars["bio_label2"],
                           rolling_window_segmentation=self.vars['rolling_window_segmentation'],
                           lighter_background=self.vars['lighter_background'],
                           allowed_window=allowed_window, filter_spec=self.vars['filter_spec']) # filtering already done when creating converted_video

    return analysisi
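The drift-corrected branch above clamps a symmetric window of roughly `step` frames around `t` before intersecting the per-frame drift masks. A minimal sketch of that clamping logic, with `window_bounds` being a hypothetical helper name:

```python
import numpy as np

def window_bounds(t, step, n_frames):
    # Symmetric window of about `step` frames around t, clamped to the video range
    half_step = int(np.ceil(step / 2))
    t_start = max(t - half_step, 0)
    t_end = min(t + half_step, n_frames)
    return t_start, t_end
```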

get_covering_duration(step)

Determine the number of frames necessary for a pixel to get covered.

This function identifies the time when significant growth or motion occurs in a video and calculates the number of frames needed for a pixel to be completely covered. It also handles noise and ensures that the calculated step value is reasonable.

Parameters:

Name Type Description Default
step int

The initial step size for frame analysis.

required

Raises:

Type Description
Exception

If an error occurs during the calculation process.

Notes

This function may modify several instance attributes including substantial_time, step, and start.

Source code in src/cellects/core/motion_analysis.py
def get_covering_duration(self, step: int):
    """
    Determine the number of frames necessary for a pixel to get covered.

    This function identifies the time when significant growth or motion occurs
    in a video and calculates the number of frames needed for a pixel to be
    completely covered. It also handles noise and ensures that the calculated
    step value is reasonable.

    Parameters
    ----------
    step : int
        The initial step size for frame analysis.

    Raises
    ------
    Exception
        If an error occurs during the calculation process.

    Notes
    -----
    This function may modify several instance attributes including
    `substantial_time`, `step`, and `start`.
    """
    logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Find a frame with a significant growth/motion and determine the number of frames necessary for a pixel to get covered")
    ## Find the time at which growth reached a substantial growth.
    self.substantial_time = self.start
    # To keep noisy images from having deleterious effects, make sure that the area reaches the threshold thrice.
    occurrence = 0
    allowed_window = None
    if self.vars['drift_already_corrected']:
        allowed_window = self.drift_mask_coord[:, 0].max(), self.drift_mask_coord[:, 1].min(), self.drift_mask_coord[:, 2].max(), self.drift_mask_coord[:, 3].min()
    prev_bin_im = self.origin
    while np.logical_and(occurrence < 3, self.substantial_time < (self.dims[0] - step - 1)):
        self.substantial_time += step
        growth_vision = self.frame_by_frame_segmentation(self.substantial_time, prev_bin_im)
        prev_bin_im = growth_vision.binary_image * self.borders
        surfarea = np.sum(prev_bin_im)
        prev_bin_im = np.logical_or(prev_bin_im, self.origin).astype(np.uint8)
        if surfarea > self.substantial_growth:
            occurrence += 1
    # get a rough idea of the area covered during this time
    if (self.substantial_time - self.start) > 20:
        if self.vars['lighter_background']:
            growth = (np.sum(self.converted_video[self.start:(self.start + 10), :, :], 0) / 10) - (np.sum(self.converted_video[(self.substantial_time - 10):self.substantial_time, :, :], 0) / 10)
        else:
            growth = (np.sum(self.converted_video[(self.substantial_time - 10):self.substantial_time, :, :], 0) / 10) - (
                        np.sum(self.converted_video[self.start:(self.start + 10), :, :], 0) / 10)
    else:
        if self.vars['lighter_background']:
            growth = self.converted_video[self.start, ...] - self.converted_video[self.substantial_time, ...]
        else:
            growth = self.converted_video[self.substantial_time, ...] - self.converted_video[self.start, ...]
    intensity_extent = np.ptp(self.converted_video[self.start:self.substantial_time, :, :], axis=0)
    growth[np.logical_or(growth < 0, intensity_extent < np.median(intensity_extent))] = 0
    growth = bracket_to_uint8_image_contrast(growth)
    growth *= self.borders
    growth_vision = OneImageAnalysis(growth)
    growth_vision.segmentation(allowed_window=allowed_window)
    if self.vars['several_blob_per_arena']:
        _, _, stats, _ = cv2.connectedComponentsWithStats(self.origin)
        do_erode = np.any(stats[1:, 4] > 50)
    else:
        do_erode = self.origin.sum() > 50
    if do_erode:
        self.substantial_image = cv2.erode(growth_vision.binary_image, cross_33, iterations=2)
    else:
        self.substantial_image = growth_vision.binary_image

    if np.any(self.substantial_image):
        natural_noise = np.nonzero(intensity_extent == np.min(intensity_extent))
        natural_noise = self.converted_video[self.start:self.substantial_time, natural_noise[0][0], natural_noise[1][0]]
        natural_noise = moving_average(natural_noise, 5)
        natural_noise = np.ptp(natural_noise)
        subst_idx = np.nonzero(self.substantial_image)
        cover_lengths = np.zeros(len(subst_idx[0]), dtype=np.uint32)
        for index in np.arange(len(subst_idx[0])):
            vector = self.converted_video[self.start:self.substantial_time, subst_idx[0][index], subst_idx[1][index]]
            left, right = find_major_incline(vector, natural_noise)
            # If find_major_incline did find a major incline: (otherwise it put 0 to left and 1 to right)
            if not np.logical_and(left == 0, right == 1):
                cover_lengths[index] = len(vector[left:-right])
        # If this analysis fails put a deterministic step
        if len(cover_lengths[cover_lengths > 0]) > 0:
            self.step = (np.round(np.mean(cover_lengths[cover_lengths > 0])).astype(int) // 2) + 1
            logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Pre-processing detection: the time for a pixel to get covered is set to {self.step}")
        else:
            logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Pre-processing detection: could not automatically find the time for a pixel to get covered. Default value is 1 for video length < 40 and 10 otherwise")
        # Make sure to avoid a step overestimation
        if self.step > self.dims[0] // 20:
            self.step: int = self.dims[0] // 20

        if self.step == 0:
            self.step: int = 1
    # When the first_move_threshold is not stringent enough the program may detect a movement due to noise
    # In that case, the substantial_image is empty and there is no reason to proceed further
    else:
        self.start = None
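In the routine above, `moving_average` smooths the intensity trace of the least-varying pixel so that its peak-to-peak range (`np.ptp`) can serve as a natural-noise floor. A minimal sketch of both steps, assuming a simple valid-mode convolution (hypothetical helper names):

```python
import numpy as np

def moving_average(x, window):
    # Valid-mode moving average: output has len(x) - window + 1 samples
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

def noise_floor(trace, window=5):
    # Peak-to-peak range of the smoothed trace approximates the natural noise
    return np.ptp(moving_average(trace, window))
```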

get_descriptors_from_binary(release_memory=True)

Methods: get_descriptors_from_binary

Summary

Generates shape descriptors for binary images, computing them for each frame and handling colony tracking. This method can optionally release memory to reduce usage, apply scaling factors to express descriptors in millimeters, and compute solidity separately when requested.

Parameters:

Name Type Description Default
release_memory bool

Flag to determine whether memory should be released after computation. Default is True.

True

Other Parameters:

Name Type Description
self.one_row_per_frame

DataFrame to store one row of descriptors per frame. - 'arena': Arena identifier, repeated for each frame. - 'time': Array of time values corresponding to frames.

self.binary

3D array representing binary images over time. - t,x,y: Time index, x-coordinate, and y-coordinate.

self.dims

Tuple containing image dimensions. - 0: Number of time frames. - 1,2: Image width and height respectively.

self.surfarea

Array containing surface areas for each frame.

self.time_interval

Time interval between frames, calculated only if provided timings are non-zero.

Notes

This method uses various helper methods and classes like ShapeDescriptors for computing shape descriptors, PercentAndTimeTracker for progress tracking, and other image processing techniques such as connected components analysis.

Source code in src/cellects/core/motion_analysis.py
def get_descriptors_from_binary(self, release_memory: bool=True):
    """

    Methods: get_descriptors_from_binary

    Summary
    -------
    Generates shape descriptors for binary images, computing them for each frame and handling colony
    tracking. This method can optionally release memory to reduce usage, apply scaling factors to express
    descriptors in millimeters, and compute solidity separately when requested.

    Parameters
    ----------
    release_memory : bool, optional
        Flag to determine whether memory should be released after computation. Default is True.

    Other Parameters
    ----------------
    **self.one_row_per_frame**
        DataFrame to store one row of descriptors per frame.
        - **'arena'**: Arena identifier, repeated for each frame.
        - **'time'**: Array of time values corresponding to frames.

    **self.binary**
        3D array representing binary images over time.
        - **t,x,y**: Time index, x-coordinate, and y-coordinate.

    **self.dims**
        Tuple containing image dimensions.
        - **0**: Number of time frames.
        - **1,2**: Image width and height respectively.

    **self.surfarea**
        Array containing surface areas for each frame.

    **self.time_interval**
        Time interval between frames, calculated only if provided timings are non-zero.

    Notes
    -----
    This method uses various helper methods and classes like `ShapeDescriptors` for computing shape descriptors,
    `PercentAndTimeTracker` for progress tracking, and other image processing techniques such as connected components analysis.

    """
    logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Computing and saving specimen(s) coordinates and required descriptors")
    if release_memory:
        del self.substantial_image
        del self.covering_intensity
        del self.segmented
        del self.gravity_field
        del self.sun
        del self.rays
        del self.holes
        collect()
    self.surfarea = self.binary.sum((1, 2))
    timings = np.array(self.vars['exif'])
    if len(timings) < self.dims[0]:
        timings = np.arange(self.dims[0])
    if np.any(timings > 0):
        self.time_interval = np.mean(np.diff(timings))
    else:
        self.time_interval = 1.
    timings = timings[:self.dims[0]]

    # Detect first motion
    self.one_descriptor_per_arena['first_move'] = detect_first_move(self.surfarea, self.vars['first_move_threshold'])

    self.compute_solidity_separately: bool = self.vars['iso_digi_analysis'] and not self.vars['several_blob_per_arena'] and not self.vars['descriptors']['solidity']
    if self.compute_solidity_separately:
        self.solidity = np.zeros(self.dims[0], dtype=np.float64)
    if not self.vars['several_blob_per_arena']:
        # solidity must be added if detect growth transition is computed
        if self.compute_solidity_separately:
            for t in np.arange(self.dims[0]):
                solidity = ShapeDescriptors(self.binary[t, :, :], ["solidity"])
                self.solidity[t] = solidity.descriptors["solidity"]
        self.one_row_per_frame = compute_one_descriptor_per_frame(self.binary,
                                                                  self.one_descriptor_per_arena['arena'], timings,
                                                                  self.vars['descriptors'],
                                                                  self.vars['output_in_mm'],
                                                                  self.vars['average_pixel_size'],
                                                                  self.vars['specimen_activity'],
                                                                  self.vars['save_coord_specimen'])
    else:
        self.one_row_per_frame = compute_one_descriptor_per_colony(self.binary,
                                                                   self.one_descriptor_per_arena['arena'], timings,
                                                                   self.vars['descriptors'],
                                                                   self.vars['output_in_mm'],
                                                                   self.vars['average_pixel_size'],
                                                                   self.vars['specimen_activity'],
                                                                   self.vars['first_move_threshold'],
                                                                   self.vars['save_coord_specimen'])
    self.one_descriptor_per_arena["final_area"] = self.binary[-1, :, :].sum()
    if self.vars['output_in_mm']:
        self.one_descriptor_per_arena = scale_descriptors(self.one_descriptor_per_arena, self.vars['average_pixel_size'])
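The timing logic at the top of this method, falling back to frame indices when EXIF timings are missing and to an interval of 1 when they are all zero, can be isolated as follows; `infer_time_interval` is a hypothetical name for illustration:

```python
import numpy as np

def infer_time_interval(timings, n_frames):
    timings = np.asarray(timings, dtype=float)
    if len(timings) < n_frames:
        # Missing or incomplete EXIF timings: fall back to frame indices
        timings = np.arange(n_frames, dtype=float)
    if np.any(timings > 0):
        return float(np.mean(np.diff(timings)))
    return 1.0
```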

get_origin_shape()

Determine the origin shape and initialize variables based on the state of the current analysis.

This method analyzes the initial frame or frames to determine the origin shape of an object in a video, initializing necessary variables and matrices for further processing.

Attributes Modified:
    start: (int) Indicates the starting frame index.
    origin_idx: (np.ndarray) The indices of non-zero values in the origin matrix.
    covering_intensity: (np.ndarray) Matrix used for pixel fading intensity.
    substantial_growth: (int) Represents a significant growth measure based on the origin.

Notes:
- The method behavior varies if 'origin_state' is set to "constant" or not.
- If the background is lighter, the 'covering_intensity' matrix is initialized.
- Uses connected components to determine which shape is closest to the center or largest, based on 'appearance_detection_method'.

Source code in src/cellects/core/motion_analysis.py
def get_origin_shape(self):
    """
    Determine the origin shape and initialize variables based on the state of the current analysis.

    This method analyzes the initial frame or frames to determine the origin shape
    of an object in a video, initializing necessary variables and matrices for
    further processing.


    Attributes Modified:
        start: (int) Indicates the starting frame index.
        origin_idx: (np.ndarray) The indices of non-zero values in the origin matrix.
        covering_intensity: (np.ndarray) Matrix used for pixel fading intensity.
        substantial_growth: (int) Represents a significant growth measure based on the origin.

    Notes:
        - The method behavior varies if 'origin_state' is set to "constant" or not.
        - If the background is lighter, 'covering_intensity' matrix is initialized.
        - Uses connected components to determine which shape is closest to the center
          or largest, based on 'appearance_detection_method'.
    """
    logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Make sure of origin shape")
    if self.vars['drift_already_corrected']:
        self.drift_mask_coord = np.zeros((self.dims[0], 4), dtype=np.uint32)
        for frame_i in np.arange(self.dims[0]):  # 100):#
            true_pixels = np.nonzero(self.converted_video[frame_i, ...])
            self.drift_mask_coord[frame_i, :] = np.min(true_pixels[0]), np.max(true_pixels[0]) + 1, np.min(true_pixels[1]), np.max(true_pixels[1]) + 1
        if np.all(self.drift_mask_coord[:, 0] == 0) and np.all(self.drift_mask_coord[:, 1] == self.dims[1] - 1) and np.all(
                self.drift_mask_coord[:, 2] == 0) and np.all(self.drift_mask_coord[:, 3] == self.dims[2] - 1):
            logging.error(f"Drift correction has been wrongly detected. Images do not contain zero-valued pixels")
            self.vars['drift_already_corrected'] = False
    self.start = 1
    if self.vars['origin_state'] == "invisible":
        self.start += self.vars['first_detection_frame']
        analysisi = self.frame_by_frame_segmentation(self.start, self.origin)
        # Use connected components to find which shape is the nearest from the image center.
        if self.vars['several_blob_per_arena']:
            self.origin = analysisi.binary_image
        else:
            if self.vars['appearance_detection_method'] == 'largest':
                self.origin = keep_one_connected_component(analysisi.binary_image)
            elif self.vars['appearance_detection_method'] == 'most_central':
                nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(analysisi.binary_image,
                                                                                           connectivity=8)
                center = np.array((self.dims[2] // 2, self.dims[1] // 2))
                stats = np.zeros(nb_components - 1)
                for shape_i in np.arange(1, nb_components):
                    stats[shape_i - 1] = eudist(center, centroids[shape_i, :])
                # The shape having the minimal euclidean distance from the center will be the original shape
                self.origin = np.zeros((self.dims[1], self.dims[2]), dtype=np.uint8)
                self.origin[output == (np.argmin(stats) + 1)] = 1
    self.origin_idx = np.nonzero(self.origin)
    if self.vars['origin_state'] == "constant":
        if self.vars['lighter_background']:
            # Initialize the covering_intensity matrix as a reference for pixel fading
            self.covering_intensity[self.origin_idx[0], self.origin_idx[1]] = 200
    self.substantial_growth = np.min((1.2 * self.origin.sum(), self.origin.sum() + 250))
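The 'most_central' branch can be sketched as below. `most_central_component` is a hypothetical helper mimicking the distance-to-center selection; `labels` and `centroids` follow the layout returned by `cv2.connectedComponentsWithStats` (label 0 is the background, centroids are (x, y)):

```python
import numpy as np

def most_central_component(labels, centroids, shape):
    """Keep only the connected component whose centroid is nearest the image center."""
    center = np.array((shape[1] // 2, shape[0] // 2), dtype=float)  # (x, y)
    # Skip row 0, which describes the background component
    distances = np.linalg.norm(np.asarray(centroids[1:], dtype=float) - center, axis=1)
    keep = np.argmin(distances) + 1
    return (labels == keep).astype(np.uint8)
```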

initialize_post_processing()

Initialize post-processing for video analysis.

This function initializes various parameters and prepares the binary representation used in post-processing of video data. It logs information about the settings, handles initial origin states, sets up segmentation data, calculates surface areas, and optionally corrects errors around the initial shape or prevents fast growth near the periphery.

Notes

This function performs several initialization steps and logs relevant information, including handling different origin states, updating segmentation data, and calculating the gravity field based on binary representation.

Source code in src/cellects/core/motion_analysis.py
def initialize_post_processing(self):
    """

    Initialize post-processing for video analysis.

    This function initializes various parameters and prepares the binary
    representation used in post-processing of video data. It logs information about
    the settings, handles initial origin states, sets up segmentation data,
    calculates surface areas, and optionally corrects errors around the initial
    shape or prevents fast growth near the periphery.

    Notes
    -----
    This function performs several initialization steps and logs relevant information,
    including handling different origin states, updating segmentation data, and
    calculating the gravity field based on binary representation.

    """
    ## Initialization
    logging.info(f"Arena n°{self.one_descriptor_per_arena['arena']}. Starting Post_processing. Specimen activity: {self.vars['specimen_activity']}. Fading coef: {self.vars['fading']}, Subtract background: {self.vars['subtract_background']}, Correct errors around initial shape: {self.vars['correct_errors_around_initial']}, Connect distant shapes: {self.vars['detection_range_factor'] > 0}, How to select appearing cell(s): {self.vars['appearance_detection_method']}")
    self.binary = np.zeros(self.dims[:3], dtype=np.uint8)
    if self.origin.shape[0] != self.binary[self.start - 1, :, :].shape[0] or self.origin.shape[1] != self.binary[self.start - 1, :, :].shape[1]:
        logging.error("Unaltered videos deprecated, they have been created with different settings.\nDelete .h5 videos and cellects_settings.json and re-run")

    if self.vars['origin_state'] == "invisible":
        self.binary[self.start - 1, :, :] = self.origin.copy()
        self.covering_intensity[self.origin_idx[0], self.origin_idx[1]] = self.converted_video[self.start, self.origin_idx[0], self.origin_idx[1]]
    else:
        if self.vars['origin_state'] == "fluctuating":
            self.covering_intensity[self.origin_idx[0], self.origin_idx[1]] = np.median(self.converted_video[:self.start, self.origin_idx[0], self.origin_idx[1]], axis=0)

        self.binary[:self.start, :, :] = np.repeat(np.expand_dims(self.origin, 0), self.start, axis=0)
        if self.start < self.step:
            frames_to_assess = self.step
            self.segmented[self.start - 1, ...] = self.binary[self.start - 1, :, :]
            for t in np.arange(self.start, self.lost_frames):
                # Only keep pixels that are always detected
                always_found = np.sum(self.segmented[t:(t + frames_to_assess), ...], 0)
                always_found = always_found == frames_to_assess
                # Remove too small shapes
                without_small, stats, centro = cc(always_found.astype(np.uint8))
                large_enough = np.nonzero(stats[1:, 4] > ((self.vars['first_move_threshold'] + 1) // 2))[0]
                if len(large_enough) > 0:
                    always_found *= np.isin(always_found, large_enough + 1)
                    always_found = np.logical_or(always_found, self.segmented[t - 1, ...])
                    self.segmented[t, ...] *= always_found
                else:
                    self.segmented[t, ...] = 0
                self.segmented[t, ...] = np.logical_or(self.segmented[t - 1, ...], self.segmented[t, ...])
    self.mean_distance_per_frame = None
    self.surfarea = np.zeros(self.dims[0], dtype =np.uint64)
    self.surfarea[:self.start] = np.sum(self.binary[:self.start, :, :], (1, 2))
    self.gravity_field = inverted_distance_transform(self.binary[(self.start - 1), :, :],
                                       np.sqrt(np.sum(self.binary[(self.start - 1), :, :])))
    if self.vars['correct_errors_around_initial']:
        self.rays, self.sun = draw_me_a_sun(self.binary[(self.start - 1), :, :], ray_length_coef=1)  # plt.imshow(sun)
        self.holes = np.zeros(self.dims[1:], dtype=np.uint8)
        self.pixel_ring_depth += 2
        self.update_ring_width()

    if self.vars['prevent_fast_growth_near_periphery']:
        self.near_periphery = np.zeros(self.dims[1:])
        if self.vars['arena_shape'] == 'circle':
            periphery_width = self.vars['periphery_width'] * 2
            elliperiphery = create_ellipse(self.dims[1] - periphery_width, self.dims[2] - periphery_width, min_size=3)
            half_width = periphery_width // 2
            if periphery_width % 2 == 0:
                self.near_periphery[half_width:-half_width, half_width:-half_width] = elliperiphery
            else:
                self.near_periphery[half_width:-half_width - 1, half_width:-half_width - 1] = elliperiphery
            self.near_periphery = 1 - self.near_periphery
        else:
            self.near_periphery[:self.vars['periphery_width'], :] = 1
            self.near_periphery[-self.vars['periphery_width']:, :] = 1
            self.near_periphery[:, :self.vars['periphery_width']] = 1
            self.near_periphery[:, -self.vars['periphery_width']:] = 1
        self.near_periphery = np.nonzero(self.near_periphery)
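For rectangular arenas, the near-periphery mask marks a band of `periphery_width` pixels along each image border. A minimal NumPy sketch (hypothetical helper name):

```python
import numpy as np

def rectangular_periphery_mask(height, width, band):
    # Mark a band of `band` pixels along each image border
    mask = np.zeros((height, width), dtype=np.uint8)
    mask[:band, :] = 1
    mask[-band:, :] = 1
    mask[:, :band] = 1
    mask[:, -band:] = 1
    return mask
```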

load_images_and_videos(videos_already_in_ram, i)

Load images and videos from disk or RAM.

Parameters:

Name Type Description Default
videos_already_in_ram ndarray or None

Video data that is already loaded into RAM. If None, videos will be loaded from disk.

required
i int

Index used to select the origin and background data.

required
Notes

This method logs information about the arena number and loads necessary data from disk or RAM based on whether videos are already in memory. It sets various attributes like self.origin, self.background, and self.converted_video.

Source code in src/cellects/core/motion_analysis.py
def load_images_and_videos(self, videos_already_in_ram, i: int):
    """

    Load images and videos from disk or RAM.

    Parameters
    ----------
    videos_already_in_ram : numpy.ndarray or None
        Video data that is already loaded into RAM. If `None`, videos will be
        loaded from disk.
    i : int
        Index used to select the origin and background data.

    Notes
    -----
    This method logs information about the arena number and loads necessary data
    from disk or RAM based on whether videos are already in memory. It sets various
    attributes like `self.origin`, `self.background`, and `self.converted_video`.

    """
    if 'crop_coord' in self.vars and self.vars['crop_coord'] is not None:
        crop_top, crop_bot, crop_left, crop_right = self.vars['crop_coord']
    else:
        crop_top, crop_bot, crop_left, crop_right = 0, -1, 0, -1
    if 'arenas_coord' in self.vars:
        top, bot, left, right = self.vars['arenas_coord']
    else:
        top, bot, left, right = [0], [crop_bot], [0], [crop_right]
    frame_height = bot[i] - top[i]
    true_frame_width = right[i] - left[i]

    if true_frame_width == -1:
        true_frame_width = None
    vid_name = None
    if self.vars['video_list'] is not None:
        vid_name = self.vars['video_list'][i]
    self.background = read_h5(f'ind_{self.one_descriptor_per_arena['arena']}.h5', 'background')
    self.background2 = read_h5(f'ind_{self.one_descriptor_per_arena['arena']}.h5', 'background2')
    vids = read_one_arena(self.one_descriptor_per_arena['arena'], self.vars['already_greyscale'],
                          self.vars['convert_for_motion'], videos_already_in_ram, true_frame_width, vid_name,
                          self.background, self.background2)
    self.visu, self.converted_video, self.converted_video2 = vids

    if self.visu is not None:
        self.dims = self.visu.shape[:3]
    else:
        self.dims = self.converted_video.shape[:3]

    # When video(s) need to be sliced (only occurs with datasets originally stored as videos):
    if frame_height != - 1 and (self.dims[1] != frame_height or self.dims[2] != true_frame_width):
        if self.visu is not None:
            self.visu = self.visu[:, crop_top:crop_bot, crop_left:crop_right, ...]
            self.visu = self.visu[:, top[i]:bot[i], left[i]:right[i], ...]
        if self.converted_video is not None:
            self.converted_video = self.converted_video[:, crop_top:crop_bot, crop_left:crop_right]
            self.converted_video = self.converted_video[:, top[i]:bot[i], left[i]:right[i]]

    if self.converted_video is None:
        logging.info(
            f"Arena n°{self.one_descriptor_per_arena['arena']}. Convert the RGB visu video into a greyscale image using the color space combination: {self.vars['convert_for_motion']}")
        vids = convert_subtract_and_filter_video(self.visu, self.vars['convert_for_motion'],
                                                 self.background, self.background2,
                                                 self.vars['lose_accuracy_to_save_memory'],
                                                 self.vars['filter_spec'])
        self.converted_video, self.converted_video2 = vids

    self.origin = np.zeros((self.dims[1], self.dims[2]), dtype=np.uint8)
    self.origin_idx = read_h5(f'ind_{self.one_descriptor_per_arena['arena']}.h5', 'origin_coord')
    self.origin[self.origin_idx[0], self.origin_idx[1]] = 1
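The two-stage slicing above (a global crop, then a per-arena window) can be sketched as follows; `crop_arena` is a hypothetical helper name, and both coordinate tuples are `(top, bottom, left, right)` as in `self.vars['crop_coord']` and `self.vars['arenas_coord']`:

```python
import numpy as np

def crop_arena(video, crop_coord, arena_coord):
    # video: (t, y, x[, channels]); coordinates are (top, bottom, left, right)
    c_top, c_bot, c_left, c_right = crop_coord
    video = video[:, c_top:c_bot, c_left:c_right, ...]
    # Arena coordinates are expressed relative to the cropped frame
    a_top, a_bot, a_left, a_right = arena_coord
    return video[:, a_top:a_bot, a_left:a_right, ...]
```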

lum_slope_segmentation(converted_video)

Perform lum slope segmentation on the given video.

Parameters:

Name Type Description Default
converted_video NDArray

The input video array for segmentation processing.

required

Returns:

Type Description
NDArray

Segmented gradient array of the video. If segmentation fails, returns None for the corresponding frames.

Notes

This function may consume significant memory and adjusts data types (float32 or float64) based on available RAM.

Examples:

>>> result = lum_slope_segmentation(converted_video)
Source code in src/cellects/core/motion_analysis.py
def lum_slope_segmentation(self, converted_video: NDArray) -> NDArray:
    """
    Perform lum slope segmentation on the given video.

    Parameters
    ----------
    converted_video : NDArray
        The input video array for segmentation processing.

    Returns
    -------
    NDArray
        Segmented gradient array of the video. If segmentation fails,
        returns `None` for the corresponding frames.

    Notes
    -----
    This function may consume significant memory and adjusts
    data types (float32 or float64) based on available RAM.

    Examples
    --------
    >>> result = lum_slope_segmentation(converted_video)
    """
    shape_motion_failed : bool = False
    # 2) Contrast increase
    oridx = np.nonzero(self.origin)
    notoridx = np.nonzero(1 - self.origin)
    do_increase_contrast = np.mean(converted_video[0, oridx[0], oridx[1]]) * 10 > np.mean(
            converted_video[0, notoridx[0], notoridx[1]])
    necessary_memory = self.dims[0] * self.dims[1] * self.dims[2] * 64 * 2 * 1.16415e-10
    available_memory = (virtual_memory().available >> 30) - self.vars['min_ram_free']
    if self.vars['lose_accuracy_to_save_memory']:
        derive = converted_video.astype(np.float32)
    else:
        derive = converted_video.astype(np.float64)
    if necessary_memory > available_memory:
        converted_video = None

    if do_increase_contrast:
        derive = np.square(derive)

    # 3) Get the gradient
    necessary_memory = derive.size * 64 * 4 * 1.16415e-10
    available_memory = (virtual_memory().available >> 30) - self.vars['min_ram_free']
    if necessary_memory > available_memory:
        for cy in np.arange(self.dims[1]):
            for cx in np.arange(self.dims[2]):
                if self.vars['lose_accuracy_to_save_memory']:
                    derive[:, cy, cx] = np.gradient(derive[:, cy, cx], self.step).astype(np.float32)
                else:
                    derive[:, cy, cx] = np.gradient(derive[:, cy, cx], self.step)
    else:
        if self.vars['lose_accuracy_to_save_memory']:
            derive = np.gradient(derive, self.step, axis=0).astype(np.float32)
        else:
            derive = np.gradient(derive, self.step, axis=0)

    # 4) Segment
    if self.vars['lighter_background']:
        covering_slopes = np.min(derive[:self.substantial_time, :, :], 0) * self.substantial_image
    else:
        covering_slopes = np.max(derive[:self.substantial_time, :, :], 0) * self.substantial_image
    covering_slopes = covering_slopes[covering_slopes != 0]
    if len(covering_slopes) == 0:
        shape_motion_failed = True

    gradient_segmentation = None
    if not shape_motion_failed:
        gradient_segmentation = np.zeros(self.dims, np.uint8)
        ####
        # ease_slope_segmentation = 0.8
        value_segmentation_thresholds = np.arange(0.8, -0.7, -0.1)
        validated_thresholds = np.zeros(value_segmentation_thresholds.shape, dtype=bool)
        counter = 0
        while_condition = True
        max_motion_per_frame = (self.dims[1] * self.dims[2]) * self.vars['maximal_growth_factor']
        # Try different values of ease_slope_segmentation and keep the one that does not
        # segment more than x percent of the image
        while counter < value_segmentation_thresholds.shape[0]:
            ease_slope_segmentation = value_segmentation_thresholds[counter]
            if self.vars['lighter_background']:
                gradient_threshold = (1 + ease_slope_segmentation) * np.max(covering_slopes)
                sample = np.less(derive[:self.substantial_time], gradient_threshold)
            else:
                gradient_threshold = (1 - ease_slope_segmentation) * np.min(covering_slopes)
                sample = np.greater(derive[:self.substantial_time], gradient_threshold)
            changing_pixel_number = np.sum(np.absolute(np.diff(sample.astype(np.int8), 1, 0)), (1, 2))
            validation = np.max(np.sum(sample, (1, 2))) < max_motion_per_frame and (
                    np.max(changing_pixel_number) < max_motion_per_frame)
            validated_thresholds[counter] = validation
            if np.any(validated_thresholds):
                if not validation:
                    break
            counter += 1
        # If any threshold is accepted, use their average to proceed with the final thresholding
        valid_number = validated_thresholds.sum()
        if valid_number > 0:
            if valid_number > 2:
                index_to_keep = 2
            else:
                index_to_keep = valid_number - 1
            ease_slope_segmentation = value_segmentation_thresholds[
                np.uint8(np.floor(np.mean(np.nonzero(validated_thresholds)[0][index_to_keep])))]

            if self.vars['lighter_background']:
                gradient_threshold = (1 - ease_slope_segmentation) * np.max(covering_slopes)
                gradient_segmentation[:-self.lost_frames, :, :] = np.less(derive, gradient_threshold)[
                    self.lost_frames:, :, :]
            else:
                gradient_threshold = (1 - ease_slope_segmentation) * np.min(covering_slopes)
                gradient_segmentation[:-self.lost_frames, :, :] = np.greater(derive, gradient_threshold)[
                    self.lost_frames:, :, :]
        else:
            if self.vars['lighter_background']:
                gradient_segmentation[:-self.lost_frames, :, :] = (derive < (np.min(derive, (1, 2)) * 1.1)[:, None, None])[self.lost_frames:, :, :]
            else:
                gradient_segmentation[:-self.lost_frames, :, :] = (derive > (np.max(derive, (1, 2)) * 0.1)[:, None, None])[self.lost_frames:, :, :]
    return gradient_segmentation
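The core idea of the slope segmentation above can be illustrated in isolation. The sketch below is a deliberate simplification (a hypothetical helper with a single fixed `ease` value, no memory management and no threshold search): a pixel is marked as covered when its temporal luminosity gradient crosses a threshold scaled from the strongest slope in the video.

```python
import numpy as np

def slope_segmentation_sketch(video, lighter_background, ease=0.5):
    """Minimal sketch of gradient-based segmentation (hypothetical
    helper, not part of MotionAnalysis)."""
    # Temporal derivative of every pixel's luminosity curve (axis 0 = time)
    derive = np.gradient(video.astype(np.float64), axis=0)
    if lighter_background:
        # Covering pixels darken over time: keep the steepest negative slopes
        return (derive < (1 - ease) * derive.min()).astype(np.uint8)
    # Covering pixels brighten over time: keep the steepest positive slopes
    return (derive > (1 - ease) * derive.max()).astype(np.uint8)
```

A pixel that brightens sharply mid-video is detected on exactly the frames where its slope is steep, which is the property the threshold search above exploits.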

lum_value_segmentation(converted_video, do_threshold_segmentation)

Perform segmentation based on luminosity values from a video.

Parameters:

- converted_video (NDArray): The input video data as a NumPy array. Required.
- do_threshold_segmentation (bool): Whether threshold segmentation should be applied. Required.

Returns:

- Tuple[NDArray, NDArray]: A tuple containing two NumPy arrays:
  - The first array is the luminosity segmentation of the video.
  - The second array is the luminosity threshold over time.

Notes

This function operates under the assumption that there is sufficient motion in the video data. If no valid thresholds are found for segmentation, the function returns None for luminosity_segmentation.

Source code in src/cellects/core/motion_analysis.py
def lum_value_segmentation(self, converted_video: NDArray, do_threshold_segmentation: bool) -> Tuple[NDArray, NDArray]:
    """
    Perform segmentation based on luminosity values from a video.

    Parameters
    ----------
    converted_video : NDArray
        The input video data in a NumPy array format.
    do_threshold_segmentation : bool
        Flag to determine whether threshold segmentation should be applied.

    Returns
    -------
    Tuple[NDArray, NDArray]
        A tuple containing two NumPy arrays:
        - The first array is the luminosity segmentation of the video.
        - The second array represents the luminosity threshold over time.

    Notes
    -----
    This function operates under the assumption that there is sufficient motion in the video data.
    If no valid thresholds are found for segmentation, the function returns None for
    `luminosity_segmentation`.
    """
    shape_motion_failed: bool = False
    if self.vars['lighter_background']:
        covering_l_values = np.min(converted_video[:self.substantial_time, :, :],
                                         0) * self.substantial_image
    else:
        covering_l_values = np.max(converted_video[:self.substantial_time, :, :],
                                         0) * self.substantial_image
    # Avoid errors by checking whether the covering values are nonzero
    covering_l_values = covering_l_values[covering_l_values != 0]
    if len(covering_l_values) == 0:
        shape_motion_failed = True

    luminosity_segmentation = None
    l_threshold_over_time = None
    if not shape_motion_failed:
        value_segmentation_thresholds = np.arange(0.8, -0.7, -0.1)
        validated_thresholds = np.zeros(value_segmentation_thresholds.shape, dtype=bool)
        counter = 0
        max_motion_per_frame = (self.dims[1] * self.dims[2]) * self.vars['maximal_growth_factor'] * 2
        if self.vars['lighter_background']:
            basic_bckgrnd_values = np.quantile(converted_video[:(self.lost_frames + 1), ...], 0.9, axis=(1, 2))
        else:
            basic_bckgrnd_values = np.quantile(converted_video[:(self.lost_frames + 1), ...], 0.1, axis=(1, 2))
        # Try different values of value_threshold and keep the one that does not
        # segment more than x percent of the image
        while counter <= 14:
            value_threshold = value_segmentation_thresholds[counter]
            if self.vars['lighter_background']:
                l_threshold = (1 + value_threshold) * np.max(covering_l_values)
            else:
                l_threshold = (1 - value_threshold) * np.min(covering_l_values)
            starting_segmentation, l_threshold_over_time = segment_with_lum_value(converted_video[:(self.lost_frames + 1), ...],
                                                           basic_bckgrnd_values, l_threshold,
                                                           self.vars['lighter_background'])

            changing_pixel_number = np.sum(np.absolute(np.diff(starting_segmentation.astype(np.int8), 1, 0)), (1, 2))
            validation = np.max(np.sum(starting_segmentation, (1, 2))) < max_motion_per_frame and (
                    np.max(changing_pixel_number) < max_motion_per_frame)
            validated_thresholds[counter] = validation
            if np.any(validated_thresholds):
                if not validation:
                    break
            counter += 1
        # If any threshold is accepted, use their average to proceed with the final thresholding
        valid_number = validated_thresholds.sum()
        if valid_number > 0:
            if valid_number > 2:
                index_to_keep = 2
            else:
                index_to_keep = valid_number - 1
            value_threshold = value_segmentation_thresholds[
                np.uint8(np.floor(np.mean(np.nonzero(validated_thresholds)[0][index_to_keep])))]
        else:
            value_threshold = 0

        if self.vars['lighter_background']:
            l_threshold = (1 + value_threshold) * np.max(covering_l_values)
        else:
            l_threshold = (1 - value_threshold) * np.min(covering_l_values)
        if do_threshold_segmentation:
            if self.vars['lighter_background']:
                basic_bckgrnd_values = np.quantile(converted_video, 0.9, axis=(1, 2))
            else:
                basic_bckgrnd_values = np.quantile(converted_video, 0.1, axis=(1, 2))
            luminosity_segmentation, l_threshold_over_time = segment_with_lum_value(converted_video, basic_bckgrnd_values,
                                                             l_threshold, self.vars['lighter_background'])
        else:
            luminosity_segmentation, l_threshold_over_time = segment_with_lum_value(converted_video[:(self.lost_frames + 1), ...],
                                                           basic_bckgrnd_values, l_threshold,
                                                           self.vars['lighter_background'])
    return luminosity_segmentation, l_threshold_over_time
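The per-frame background quantiles computed above let the luminosity threshold track illumination drift. The internal `segment_with_lum_value` is not shown on this page, so the snippet below is only an assumed simplification of the idea: shift a fixed threshold by the drift of a per-frame background quantile before comparing each frame against it.

```python
import numpy as np

def lum_value_sketch(video, l_threshold, lighter_background):
    """Hypothetical stand-in for segment_with_lum_value: a per-frame
    threshold anchored on a background quantile."""
    q = 0.9 if lighter_background else 0.1
    bckgrnd = np.quantile(video, q, axis=(1, 2))   # background level per frame
    # Shift the threshold by the background drift relative to the first frame
    threshold_over_time = l_threshold + (bckgrnd - bckgrnd[0])
    if lighter_background:
        seg = video < threshold_over_time[:, None, None]
    else:
        seg = video > threshold_over_time[:, None, None]
    return seg.astype(np.uint8), threshold_over_time
```

On a stable dark background, the threshold stays constant and only pixels brighter than `l_threshold` are segmented; if the whole scene brightens, the threshold rises with it.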

networks_analysis(show_seg=False)

Perform network detection within a given arena.

This function detects networks in an arena based on several parameters and variables. It involves checking the video type, performing network detection over time, potentially detecting pseudopods, and smoothing segmentation. It then extracts and analyzes graphs from a binary representation of the network dynamics, producing vertex and edge tables that represent the graph structure over time. The results can be visualized or saved.

Parameters:

- show_seg (bool): If True, display the segmentation visually. Default: False.

Attributes:

- vars (dict): Dictionary of variables that control the graph extraction process.
  - 'save_graph': Boolean indicating if graph extraction should be performed.
  - 'save_coord_network': Boolean indicating if the coordinate network should be saved.
- one_descriptor_per_arena (dict): Dictionary containing descriptors for each arena.
- dims (tuple): Dimension information: [0] number of time steps, [1] y-dimension size, [2] x-dimension size.
- origin (np.ndarray): Binary image representing the origin of the network.
- binary (np.ndarray): Binary representation of network dynamics over time. Shape: (time_steps, y_dimension, x_dimension).
- converted_video (np.ndarray): Converted video data. Shape: (y_dimension, x_dimension, time_steps).
- network_dynamics (np.ndarray): Network dynamics representation. Shape: (time_steps, y_dimension, x_dimension).

Notes:

- This method performs graph extraction and saves the vertex and edge tables to CSV files.
- The CSV files are named according to the arena, time steps, and dimensions.
Source code in src/cellects/core/motion_analysis.py
    def networks_analysis(self, show_seg: bool=False):
        """
        Perform network detection within a given arena.

        This function carries out the task of detecting networks in an arena
        based on several parameters and variables. It involves checking video
        type, performing network detection over time, potentially detecting
        pseudopods, and smoothing segmentation. The results can be visualized or saved.
Extract and analyze graphs from a binary representation of network dynamics, producing vertex
        and edge tables that represent the graph structure over time.

        Args:
            None

        Attributes:
            vars (dict): Dictionary of variables that control the graph extraction process.
                - 'save_graph': Boolean indicating if graph extraction should be performed.
                - 'save_coord_network': Boolean indicating if the coordinate network should be saved.

            one_descriptor_per_arena (dict): Dictionary containing descriptors for each arena.

            dims (tuple): Tuple containing dimension information.
                - [0]: Integer representing the number of time steps.
                - [1]: Integer representing the y-dimension size.
                - [2]: Integer representing the x-dimension size.

            origin (np.ndarray): Binary image representing the origin of the network.

            binary (np.ndarray): Binary representation of network dynamics over time.
                Shape: (time_steps, y_dimension, x_dimension).

            converted_video (np.ndarray): Converted video data.
                Shape: (y_dimension, x_dimension, time_steps).

            network_dynamics (np.ndarray): Network dynamics representation.
                Shape: (time_steps, y_dimension, x_dimension).

        Notes:
            - This method performs graph extraction and saves the vertex and edge tables to CSV files.
            - The CSV files are named according to the arena, time steps, and dimensions.

        Args:
            show_seg: bool = False
                A flag that determines whether to display the segmentation visually.
        """
        coord_pseudopods = None
        if not self.vars['several_blob_per_arena'] and self.vars['save_coord_network']:
            self.check_converted_video_type()

            if self.vars['origin_state'] == "constant":
                self.coord_network, coord_pseudopods = detect_network_dynamics(self.converted_video, self.binary,
                                                           self.one_descriptor_per_arena['arena'], 0,
                                                           self.visu, self.origin, self.vars['sliding_window_segmentation'], 5, True,
                                                           self.vars['save_coord_network'], show_seg)
            else:
                self.coord_network, coord_pseudopods = detect_network_dynamics(self.converted_video, self.binary,
                                                           self.one_descriptor_per_arena['arena'], 0,
                                                           self.visu, None, self.vars['sliding_window_segmentation'], 5, True,
                                                           self.vars['save_coord_network'], show_seg)

        if not self.vars['several_blob_per_arena'] and self.vars['save_graph']:
            if self.coord_network is None:
                self.coord_network = np.array(np.nonzero(self.binary))
            if self.vars['origin_state'] == "constant":
                extract_graph_dynamics(self.converted_video, self.coord_network, self.one_descriptor_per_arena['arena'],
                                       0, self.origin, coord_pseudopods)
            else:
                extract_graph_dynamics(self.converted_video, self.coord_network, self.one_descriptor_per_arena['arena'],
                                       0, None, coord_pseudopods)
        del coord_pseudopods
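When no network coordinates were computed, the method falls back to the nonzero coordinates of the binary stack. A tiny self-contained illustration of that coordinate representation:

```python
import numpy as np

binary = np.zeros((2, 3, 3), np.uint8)   # (time, y, x) binary stack
binary[0, 1, 1] = 1
binary[1, 1, 1] = 1
binary[1, 1, 2] = 1
# Fallback used by networks_analysis: each foreground voxel becomes
# one (t, y, x) coordinate column of the network table
coord_network = np.array(np.nonzero(binary))
print(coord_network.shape)   # (3, 3): three axes, three foreground voxels
```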

save_efficiency_tests()

Provide images for assessing the analysis efficiency.

This method generates two test images used for assessing the efficiency of the analysis. It copies and manipulates frames from the video, detects contours on binary images, and draws the arena label on the left of the frames.

Source code in src/cellects/core/motion_analysis.py
def save_efficiency_tests(self):
    """
    Provide images for assessing the analysis efficiency.

    This method generates two test images used for assessing
    the efficiency of the analysis. It copies and manipulates
    frames from the video, detects contours on binary images,
    and draws the arena label on the left of the frames.
    """
    # Provide images allowing to assess the analysis efficiency
    if self.dims[0] > 1:
        after_one_tenth_of_time = np.ceil(self.dims[0] / 10).astype(np.uint64)
    else:
        after_one_tenth_of_time = 0

    last_good_detection = self.dims[0] - 1
    if self.dims[0] > self.lost_frames:
        if self.vars['do_threshold_segmentation']:
            last_good_detection -= self.lost_frames
    else:
        last_good_detection = 0
    if self.visu is None:
        if len(self.converted_video.shape) == 3:
            self.converted_video = np.stack((self.converted_video, self.converted_video, self.converted_video),
                                         axis=3)
        self.efficiency_test_1 = self.converted_video[after_one_tenth_of_time, ...].copy()
        self.efficiency_test_2 = self.converted_video[last_good_detection, ...].copy()
    else:
        self.efficiency_test_1 = self.visu[after_one_tenth_of_time, :, :, :].copy()
        self.efficiency_test_2 = self.visu[last_good_detection, :, :, :].copy()

    position = (25, self.dims[1] // 2)
    text = str(self.one_descriptor_per_arena['arena'])
    contours = np.nonzero(get_contours(self.binary[after_one_tenth_of_time, :, :]))
    self.efficiency_test_1[contours[0], contours[1], :] = self.vars['contour_color']
    self.efficiency_test_1 = cv2.putText(self.efficiency_test_1, text, position, cv2.FONT_HERSHEY_SIMPLEX, 1,
                                    (self.vars["contour_color"], self.vars["contour_color"],
                                     self.vars["contour_color"], 255), 3)

    eroded_binary = cv2.erode(self.binary[last_good_detection, :, :], cross_33)
    contours = np.nonzero(self.binary[last_good_detection, :, :] - eroded_binary)
    self.efficiency_test_2[contours[0], contours[1], :] = self.vars['contour_color']
    self.efficiency_test_2 = cv2.putText(self.efficiency_test_2, text, position, cv2.FONT_HERSHEY_SIMPLEX, 1,
                                     (self.vars["contour_color"], self.vars["contour_color"],
                                      self.vars["contour_color"], 255), 3)

save_results(with_efficiency_tests=True, with_video=True)

Save the results of testing and video processing.

This method handles the saving of efficiency tests, video files, and CSV data related to test results. It checks for existing files before writing new data. Additionally, it cleans up temporary files if configured to do so.

Parameters:

- with_efficiency_tests (bool): Also save two images showing the analysis efficiency. Default: True.
- with_video (bool): Also save a video showing the analysis efficiency. Default: True.
Source code in src/cellects/core/motion_analysis.py
def save_results(self, with_efficiency_tests: bool = True, with_video:bool = True):
    """
    Save the results of testing and video processing.

    This method handles the saving of efficiency tests, video files,
    and CSV data related to test results. It checks for existing files before writing new data.
    Additionally, it cleans up temporary files if configured to do so.

    Parameters
    ----------
    with_efficiency_tests : bool, optional
        Also save two images showing the analysis efficiency.
    with_video : bool, optional
        Also save a video showing the analysis efficiency.
    """
    if with_efficiency_tests:
        self.save_efficiency_tests()
    if with_video:
        self.save_video()
    if self.vars['several_blob_per_arena']:
        try:
            with open(f"one_row_per_frame_arena{self.one_descriptor_per_arena['arena']}.csv", 'w') as file:
                self.one_row_per_frame.to_csv(file, sep=';', index=False, lineterminator='\n')
        except PermissionError:
            logging.error(f"Never let one_row_per_frame_arena{self.one_descriptor_per_arena['arena']}.csv open when Cellects runs")

        create_new_csv: bool = False
        if os.path.isfile("one_row_per_arena.csv"):
            try:
                with open(f"one_row_per_arena.csv", 'r') as file:
                    stats = pd.read_csv(file, header=0, sep=";")
            except PermissionError:
                logging.error("Never let one_row_per_arena.csv open when Cellects runs")

            if len(self.one_descriptor_per_arena) == len(stats.columns) - 1:
                try:
                    with open(f"one_row_per_arena.csv", 'w') as file:
                        stats.iloc[(self.one_descriptor_per_arena['arena'] - 1), 1:] = self.one_descriptor_per_arena.values()
                        stats.to_csv(file, sep=';', index=False, lineterminator='\n')
                except PermissionError:
                    logging.error("Never let one_row_per_arena.csv open when Cellects runs")
            else:
                create_new_csv = True
        else:
            create_new_csv = True
        if create_new_csv:
            with open(f"one_row_per_arena.csv", 'w') as file:
                stats = pd.DataFrame(np.zeros((len(self.vars['analyzed_individuals']), len(self.one_descriptor_per_arena))),
                           columns=list(self.one_descriptor_per_arena.keys()))
                stats.iloc[(self.one_descriptor_per_arena['arena'] - 1), :] = self.one_descriptor_per_arena.values()
                stats.to_csv(file, sep=';', index=False, lineterminator='\n')
    if not self.vars['keep_unaltered_videos'] and os.path.isfile(f"ind_{self.one_descriptor_per_arena['arena']}.h5"):
        remove_h5_key(f"ind_{self.one_descriptor_per_arena['arena']}.h5", 'video')

save_video()

Save processed video with contours and other annotations.

This method processes the binary image to extract contours, overlay them on a video, and save the resulting video file.

Notes:

- This method uses OpenCV for image processing and contour extraction.
- The processed video includes contours colored according to the contour_color specified in the variables.
- Additional annotations such as time in minutes are added to each frame if applicable.

Source code in src/cellects/core/motion_analysis.py
def save_video(self):
    """
    Save processed video with contours and other annotations.

    This method processes the binary image to extract contours, overlay them
    on a video, and save the resulting video file.

    Notes:
        - This method uses OpenCV for image processing and contour extraction.
        - The processed video includes contours colored according to the
          `contour_color` specified in the variables.
        - Additional annotations such as time in minutes are added to each
          frame if applicable.

    """
    if self.vars['save_processed_videos']:
        self.check_converted_video_type()
        if len(self.converted_video.shape) == 3:
            self.converted_video = np.stack((self.converted_video, self.converted_video, self.converted_video),
                                            axis=3)
        for t in np.arange(self.dims[0]):

            eroded_binary = cv2.erode(self.binary[t, :, :], cross_33)
            contours = np.nonzero(self.binary[t, :, :] - eroded_binary)
            self.converted_video[t, contours[0], contours[1], :] = self.vars['contour_color']
            if "iso_digi_transi" in self.one_descriptor_per_arena.keys():
                if self.vars['iso_digi_analysis']  and not self.vars['several_blob_per_arena'] and not pd.isna(self.one_descriptor_per_arena["iso_digi_transi"]):
                    if self.one_descriptor_per_arena['is_growth_isotropic'] == 1:
                        if t < self.one_descriptor_per_arena["iso_digi_transi"]:
                            self.converted_video[t, contours[0], contours[1], :] = 0, 0, 255
        del self.binary
        del self.surfarea
        del self.borders
        del self.origin
        del self.origin_idx
        del self.mean_intensity_per_frame
        del self.erodila_disk
        collect()
        if self.visu is None:
            true_frame_width = self.dims[2]
            h5_keys = []
            if os.path.isfile(f"ind_{self.one_descriptor_per_arena['arena']}.h5"):
                h5_keys = get_h5_keys(f"ind_{self.one_descriptor_per_arena['arena']}.h5")
            if 'video' in h5_keys:
                self.visu = video2numpy(f"ind_{self.one_descriptor_per_arena['arena']}.h5",
                          None, true_frame_width=true_frame_width)
            else:
                self.visu = self.converted_video
            if len(self.visu.shape) == 3:
                self.visu = np.stack((self.visu, self.visu, self.visu), axis=3)
        self.converted_video = np.concatenate((self.visu, self.converted_video), axis=2)

        if np.any(self.one_row_per_frame['time'] > 0):
            position = (5, self.dims[1] - 5)
            if self.vars['time_step_is_arbitrary']:
                time_unit = ""
            else:
                time_unit = " min"
            for t in np.arange(self.dims[0]):
                image = self.converted_video[t, ...]
                text = str(self.one_row_per_frame['time'][t]) + time_unit
                image = cv2.putText(image,  # numpy array on which text is written
                                text,  # text
                                position,  # position at which writing has to start
                                cv2.FONT_HERSHEY_SIMPLEX,  # font family
                                1,  # font size
                                (self.vars["contour_color"], self.vars["contour_color"], self.vars["contour_color"], 255),  #(209, 80, 0, 255),  
                                2)  # font stroke
                self.converted_video[t, ...] = image
        vid_name = f"ind_{self.one_descriptor_per_arena['arena']}{self.vars['videos_extension']}"
        write_video(self.converted_video, vid_name, is_color=True, fps=self.vars['video_fps'])

smooth_pixel_slopes(converted_video)

Apply smoothing to pixel slopes in a video by convolving with a moving average kernel.

Parameters:

- converted_video (NDArray): The input video array to be smoothed. Required.

Returns:

- NDArray: Smoothed video array with pixel slopes averaged using a moving average kernel.

Raises:

- MemoryError: If there is not enough RAM available to perform the smoothing operation.

Notes

This function applies a moving average kernel to each pixel across the frames of the input video. The smoothing operation can be repeated based on user-defined settings. The precision of the output array is controlled by a flag that determines whether to save memory at the cost of accuracy.

Examples:

>>> smoothed = smooth_pixel_slopes(converted_video)
>>> print(smoothed.shape)  # Expected output will vary depending on the input video shape
Source code in src/cellects/core/motion_analysis.py
def smooth_pixel_slopes(self, converted_video: NDArray) -> NDArray:
    """
    Apply smoothing to pixel slopes in a video by convolving with a moving average kernel.

    Parameters
    ----------
    converted_video : NDArray
        The input video array to be smoothed.

    Returns
    -------
    NDArray
        Smoothed video array with pixel slopes averaged using a moving average kernel.

    Raises
    ------
    MemoryError
        If there is not enough RAM available to perform the smoothing operation.

    Notes
    -----
    This function applies a moving average kernel to each pixel across the frames of
    the input video. The smoothing operation can be repeated based on user-defined settings.
    The precision of the output array is controlled by a flag that determines whether to
    save memory at the cost of accuracy.

    Examples
    --------
    >>> smoothed = smooth_pixel_slopes(converted_video)
    >>> print(smoothed.shape)  # Expected output will vary depending on the input video shape
    """
    try:
        if self.vars['lose_accuracy_to_save_memory']:
            smoothed_video = np.zeros(self.dims, dtype=np.float32)
            smooth_kernel = np.ones(self.step, dtype=np.float64) / self.step
            for i in np.arange(converted_video.shape[1]):
                for j in np.arange(converted_video.shape[2]):
                    padded = np.pad(converted_video[:, i, j] / self.mean_intensity_per_frame,
                                 (self.step // 2, self.step - 1 - self.step // 2), mode='edge')
                    moving_average = np.convolve(padded, smooth_kernel, mode='valid')
                    if self.vars['repeat_video_smoothing'] > 1:
                        for it in np.arange(1, self.vars['repeat_video_smoothing']):
                            padded = np.pad(moving_average,
                                         (self.step // 2, self.step - 1 - self.step // 2), mode='edge')
                            moving_average = np.convolve(padded, smooth_kernel, mode='valid')
                    smoothed_video[:, i, j] = moving_average.astype(np.float32)
        else:
            smoothed_video = np.zeros(self.dims, dtype=np.float64)
            smooth_kernel = np.ones(self.step) / self.step
            for i in np.arange(converted_video.shape[1]):
                for j in np.arange(converted_video.shape[2]):
                    padded = np.pad(converted_video[:, i, j] / self.mean_intensity_per_frame,
                                 (self.step // 2, self.step - 1 - self.step // 2), mode='edge')
                    moving_average = np.convolve(padded, smooth_kernel, mode='valid')
                    if self.vars['repeat_video_smoothing'] > 1:
                        for it in np.arange(1, self.vars['repeat_video_smoothing']):
                            padded = np.pad(moving_average,
                                         (self.step // 2, self.step - 1 - self.step // 2), mode='edge')
                            moving_average = np.convolve(padded, smooth_kernel, mode='valid')
                    smoothed_video[:, i, j] = moving_average
        return smoothed_video

    except MemoryError:
        logging.error("Not enough RAM available to smooth pixel curves. Detection may fail.")
        smoothed_video = converted_video
        return smoothed_video
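The smoothing above hinges on edge padding so that a 'valid' convolution returns exactly one sample per input frame. The pad/convolve pattern in isolation:

```python
import numpy as np

step = 4                                   # moving-average window size
signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
kernel = np.ones(step) / step
# Edge-pad so that a 'valid' convolution keeps the original length
padded = np.pad(signal, (step // 2, step - 1 - step // 2), mode='edge')
smoothed = np.convolve(padded, kernel, mode='valid')
print(smoothed.shape)   # (6,): same number of samples as the input
```

Splitting the padding as `(step // 2, step - 1 - step // 2)` handles both even and odd window sizes, which is why the same expression appears in both branches of `smooth_pixel_slopes`.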

study_cytoscillations(show_seg=False)

Study the cytoskeletal oscillations within a video frame by frame.

This method performs an analysis of cytoskeletal oscillations in the video, identifying regions of influx and efflux based on pixel connectivity. It also handles memory allocation for the oscillations video, computes connected components, and optionally displays the segmented regions.

Args:

- show_seg (bool): If True, display the segmentation results.

Source code in src/cellects/core/motion_analysis.py
def study_cytoscillations(self, show_seg: bool=False):
    """

        Study the cytoskeletal oscillations within a video frame by frame.

        This method performs an analysis of cytoskeletal oscillations in the video,
        identifying regions of influx and efflux based on pixel connectivity.
        It also handles memory allocation for the oscillations video, computes
        connected components, and optionally displays the segmented regions.

        Args:
            show_seg (bool): If True, display the segmentation results.
    """
    if self.vars['save_coord_thickening_slimming'] or self.vars['oscilacyto_analysis']:
        oscillations_video = detect_oscillations_dynamics(self.converted_video, self.binary,
                                                          self.one_descriptor_per_arena['arena'], self.start,
                                                          self.vars['expected_oscillation_period'],
                                                          self.time_interval,
                                                          self.vars['minimal_oscillating_cluster_size'],
                                                          self.vars['min_ram_free'],
                                                          self.vars['lose_accuracy_to_save_memory'],
                                                          self.vars['save_coord_thickening_slimming'])
        del oscillations_video

update_ring_width()

Update the pixel_ring_depth and create an erodila disk.

This method ensures that the pixel ring depth is odd and at least 3, then creates an erodila disk of that size.
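The adjustment can be sketched as a small helper (hypothetical name), mirroring the two checks in the source below:

```python
def normalize_ring_depth(depth: int) -> int:
    # Clamp to at least 3, then bump even values to the next odd number,
    # as update_ring_width does before building the ellipse kernel.
    depth = max(depth, 3)
    if depth % 2 == 0:
        depth += 1
    return depth

print(normalize_ring_depth(2))  # 3
print(normalize_ring_depth(6))  # 7
print(normalize_ring_depth(5))  # 5
```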

Source code in src/cellects/core/motion_analysis.py
def update_ring_width(self):
    """
    Update the `pixel_ring_depth` and create an erodila disk.

    This method ensures that the pixel ring depth is odd and at least 3,
    then creates an erodila disk of that size.
    """
    # Ensure that self.pixel_ring_depth is odd and at least 3
    if self.pixel_ring_depth <= 3:
        self.pixel_ring_depth = 3
    if self.pixel_ring_depth % 2 == 0:
        self.pixel_ring_depth = self.pixel_ring_depth + 1
    self.erodila_disk = create_ellipse(self.pixel_ring_depth, self.pixel_ring_depth, min_size=3).astype(np.uint8)
    self.max_distance = self.pixel_ring_depth * self.vars['detection_range_factor']

update_shape(show_seg)

Update the shape of detected objects in the current frame by analyzing segmentation potentials and applying morphological operations.

Parameters:

Name Type Description Default
show_seg bool

Flag indicating whether to display segmentation results.

required
Notes

This function performs several operations to update the shape of detected objects: - Analyzes segmentation potentials from previous frames. - Applies morphological operations to refine the shape. - Updates internal state variables such as binary and covering_intensity.
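The first post-processing step, keeping only pixels detected in at least two of the last three frames, can be sketched with hypothetical arrays:

```python
import numpy as np

# Sliding-window vote: a pixel survives only if it was segmented in at
# least 2 of the 3 most recent frames (array contents are hypothetical).
segmented = np.array([
    [[1, 0], [1, 1]],
    [[1, 0], [0, 1]],
    [[0, 1], [1, 1]],
], dtype=np.uint8)
t = 2
votes = segmented[t - 2: t + 1].sum(axis=0, dtype=np.uint8)
new_potentials = (votes >= 2).astype(np.uint8)
```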

Source code in src/cellects/core/motion_analysis.py
def update_shape(self, show_seg: bool):
    """
    Update the shape of detected objects in the current frame by analyzing
    segmentation potentials and applying morphological operations.

    Parameters
    ----------
    show_seg : bool
        Flag indicating whether to display segmentation results.

    Notes
    -----
    This function performs several operations to update the shape of detected objects:
    - Analyzes segmentation potentials from previous frames.
    - Applies morphological operations to refine the shape.
    - Updates internal state variables such as `binary` and `covering_intensity`.

    """
    # Get from gradients, a 2D matrix of potentially covered pixels
    # I/ dilate the shape made with covered pixels to assess for covering
    # I/ 1) Only keep pixels that have been detected at least two times in the three previous frames
    if self.vars['sliding_window_segmentation']:
        if self.t > 1:
            new_potentials = np.sum(self.segmented[(self.t - 2): (self.t + 1), :, :], 0, dtype=np.uint8)
        else:
            new_potentials = np.sum(self.segmented[: (self.t + 1), :, :], 0, dtype=np.uint8)
        new_potentials[new_potentials == 1] = 0
        new_potentials[new_potentials > 1] = 1
    else:
        new_potentials = self.segmented[self.t, :, :]

    # I/ 2) If the candidate mask contains more new potential pixels than 50% of the image,
    # one of the combined frames is considered noisy and we fall back to a single frame.
    frame_counter = -1
    maximal_size = 0.5 * new_potentials.size
    if (self.vars["do_threshold_segmentation"] or self.vars["frame_by_frame_segmentation"]) and self.t > np.max((self.start + self.step, 6)):
        maximal_size = np.min((np.max(self.binary[:self.t].sum((1, 2))) * (1 + self.vars['maximal_growth_factor']), self.borders.sum()))
    while np.logical_and(np.sum(new_potentials) > maximal_size,
                         frame_counter <= 5):  # np.logical_and(np.sum(new_potentials > 0) > 5 * np.sum(dila_ring), frame_counter <= 5):
        frame_counter += 1
        if frame_counter > self.t:
            break
        else:
            if frame_counter < 5:
                new_potentials = self.segmented[self.t - frame_counter, :, :]
            else:
                # If taking only one image is not enough, keep only the pixels
                # detected in all six previous frames as new_potentials.
                # Since this consensus has not been processed by any slope
                # calculation, it should be less noisy.
                new_potentials = np.sum(self.segmented[(self.t - 5): (self.t + 1), :, :], 0, dtype=np.uint8)
                new_potentials[new_potentials < 6] = 0
                new_potentials[new_potentials == 6] = 1


    if self.vars['morphological_opening']:
        new_potentials = cv2.morphologyEx(new_potentials, cv2.MORPH_OPEN, cross_33)
    if self.vars['morphological_closing']:
        new_potentials = cv2.morphologyEx(new_potentials, cv2.MORPH_CLOSE, cross_33)
    new_potentials *= self.borders

    if self.vars['specimen_activity'] in ['grow', 'move and grow']:
        new_shape = self.binary[self.t - 1, :, :].copy()
        new_shape = np.logical_or(new_shape, new_potentials).astype(np.uint8)
        # Add distant shapes within a radius, score every added pixels according to their distance
        if not self.vars['several_blob_per_arena']:
            if new_shape.sum() == 0:
                new_shape = new_potentials.copy()
            else:
                pads = ProgressivelyAddDistantShapes(new_potentials, new_shape, self.max_distance)
                # If max_distance is nonzero, look for distant shapes
                pads.consider_shapes_sizes(self.vars['min_size_for_connection'],
                                                     self.vars['max_size_for_connection'])
                pads.connect_shapes(only_keep_connected_shapes=True, rank_connecting_pixels=True)

                new_shape = pads.expanded_shape.copy()
                new_shape[new_shape > 1] = 1
                if np.logical_and(self.t > self.step, self.t < self.dims[0]):
                    if np.any(pads.expanded_shape > 5):
                        # Add distant shapes back in time at the covering speed of neighbors
                        self.binary[self.t][np.nonzero(new_shape)] = 1
                        self.binary[(self.step):(self.t + 1), :, :] = \
                            pads.modify_past_analysis(self.binary[(self.step):(self.t + 1), :, :],
                                                      self.segmented[(self.step):(self.t + 1), :, :])
                        new_shape = self.binary[self.t, :, :].copy()
                del pads

            # Fill holes
            new_shape = cv2.morphologyEx(new_shape, cv2.MORPH_CLOSE, cross_33)
        del new_potentials

        if self.vars['specimen_activity'] == 'move and grow' and (self.t > self.step + self.lost_frames):
            # Shape Erosion
            # I/ After a substantial growth, erode the shape made with covered pixels to assess for fading
            # Use the newly covered pixels to calculate their mean covering intensity
            new_idx = np.nonzero(np.logical_xor(new_shape, self.binary[self.t - 1, :, :]))
            start_intensity_monitoring = self.t - self.lost_frames - self.step
            end_intensity_monitoring = self.t - self.lost_frames
            self.covering_intensity[new_idx[0], new_idx[1]] = np.median(self.converted_video[start_intensity_monitoring:end_intensity_monitoring, new_idx[0], new_idx[1]], axis=0)
            previous_binary = self.binary[self.t - 1, :, :]
            greyscale_image = self.converted_video[self.t - self.lost_frames, :, :]
            protect_from_fading = None
            if self.vars['origin_state'] == 'constant':
                protect_from_fading = self.origin
            new_shape, self.covering_intensity = cell_leaving_detection(new_shape, self.covering_intensity, previous_binary, greyscale_image, self.vars['fading'], self.vars['lighter_background'], self.vars['several_blob_per_arena'], self.erodila_disk, protect_from_fading)
            del new_idx
    else:
        new_shape = new_potentials

    self.covering_intensity *= new_shape
    self.binary[self.t, :, :] = new_shape * self.borders
    self.surfarea[self.t] = np.sum(self.binary[self.t, :, :])

    if self.vars['specimen_activity'] in ['grow', 'move and grow']:
        # Calculate the mean distance covered per frame and correct for a ring of not really fading pixels
        if self.mean_distance_per_frame is None:
            if self.vars['correct_errors_around_initial'] and not self.vars['several_blob_per_arena']:
                if np.logical_and((self.t % 20) == 0,
                                  np.logical_and(self.surfarea[self.t] > self.substantial_growth,
                                                 self.surfarea[self.t] < self.substantial_growth * 2)):
                    shape = self.binary[self.t, :, :] * self.sun
                    back = (1 - self.binary[self.t, :, :]) * self.sun
                    for ray in self.rays:
                        # For each sun's ray, see how they cross the shape/back and
                        # store the gravity_field value of these pixels (distance to the original shape).
                        ray_through_shape = (shape == ray) * self.gravity_field
                        ray_through_back = (back == ray) * self.gravity_field
                        if np.any(ray_through_shape):
                            if np.any(ray_through_back):
                                # If at least one back pixel is nearer to the original shape than a shape pixel,
                                # there is a hole to fill.
                                if np.any(ray_through_back > np.min(ray_through_shape[ray_through_shape > 0])):
                                    # Check if the nearest pixels are shape; if so, suppress them until
                                    # the nearest pixel becomes back
                                    while np.max(ray_through_back) <= np.max(ray_through_shape):
                                        ray_through_shape[ray_through_shape == np.max(ray_through_shape)] = 0
                                    # Now, all back pixels that are nearer than the closest shape pixel should get filled
                                    # To do so, replace back pixels further than the nearest shape pixel by 0
                                    ray_through_back[ray_through_back < np.max(ray_through_shape)] = 0
                                    self.holes[np.nonzero(ray_through_back)] = 1
                            else:
                                self.rays = self.rays[self.rays != ray]
                        del ray_through_shape
                        del ray_through_back
                    del shape
                    del back
            if np.any(self.surfarea[:self.t] > self.substantial_growth * 2):

                if self.vars['correct_errors_around_initial'] and not self.vars['several_blob_per_arena']:
                    # Apply the hole correction
                    self.holes = cv2.morphologyEx(self.holes, cv2.MORPH_CLOSE, cross_33, iterations=10)
                    # If some holes are not covered by now
                    if np.any(self.holes * (1 - self.binary[self.t, :, :])):
                        self.binary[:(self.t + 1), :, :], holes_time_end, distance_against_time = \
                            dynamically_expand_to_fill_holes(self.binary[:(self.t + 1), :, :], self.holes)
                        if holes_time_end is not None:
                            self.binary[holes_time_end:(self.t + 1), :, :] += self.binary[holes_time_end, :, :]
                            self.binary[holes_time_end:(self.t + 1), :, :][
                                self.binary[holes_time_end:(self.t + 1), :, :] > 1] = 1
                            self.surfarea[:(self.t + 1)] = np.sum(self.binary[:(self.t + 1), :, :], (1, 2))

                    else:
                        distance_against_time = [1, 2]
                else:
                    distance_against_time = [1, 2]
                distance_against_time = np.diff(distance_against_time)
                if len(distance_against_time) > 0:
                    self.mean_distance_per_frame = np.mean(- distance_against_time)
                else:
                    self.mean_distance_per_frame = 1

        if self.vars['prevent_fast_growth_near_periphery']:
            # growth_near_periphery = np.diff(self.binary[self.t-1:self.t+1, :, :] * self.near_periphery, axis=0)
            growth_near_periphery = np.diff(self.binary[self.t-1:self.t+1, self.near_periphery[0], self.near_periphery[1]], axis=0)
            if (growth_near_periphery == 1).sum() > self.vars['max_periphery_growth']:
                # self.binary[self.t, self.near_periphery[0], self.near_periphery[1]] = self.binary[self.t - 1, self.near_periphery[0], self.near_periphery[1]]
                periphery_to_remove = np.zeros(self.dims[1:], dtype=np.uint8)
                periphery_to_remove[self.near_periphery[0], self.near_periphery[1]] = self.binary[self.t, self.near_periphery[0], self.near_periphery[1]]
                shapes, stats, centers = cc(periphery_to_remove)
                periphery_to_remove = np.nonzero(np.isin(shapes, np.nonzero(stats[:, 4] > self.vars['max_periphery_growth'])[0][1:]))
                self.binary[self.t, periphery_to_remove[0], periphery_to_remove[1]] = self.binary[self.t - 1, periphery_to_remove[0], periphery_to_remove[1]]
                if not self.vars['several_blob_per_arena']:
                    shapes, stats, centers = cc(self.binary[self.t, ...])
                    shapes[shapes != 1] = 0
                    self.binary[self.t, ...] = shapes
                del periphery_to_remove
                del shapes
                del stats
                del centers
            del growth_near_periphery

    # Display
    if show_seg:
        if self.visu is not None:
            im_to_display = self.visu[self.t, ...].copy()
            contours = np.nonzero(cv2.morphologyEx(self.binary[self.t, :, :], cv2.MORPH_GRADIENT, cross_33))
            if self.vars['lighter_background']:
                im_to_display[contours[0], contours[1]] = 0
            else:
                im_to_display[contours[0], contours[1]] = 255
        else:
            im_to_display = self.binary[self.t, :, :] * 255
        imtoshow = cv2.resize(im_to_display, (540, 540))
        cv2.imshow("shape_motion", imtoshow)
        cv2.waitKey(1)
    self.t += 1

NetworkDetection

NetworkDetection

Class for detecting vessels in images using Frangi and Sato filters with various parameter sets. It applies different thresholding methods, calculates quality metrics, and selects the best detection method.
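Selection among candidate methods comes down to an argmax over quality scores. A hedged sketch with made-up result dictionaries (the real entries also carry the binary mask and filtered image):

```python
import numpy as np

# Each candidate carries a quality score; argmax picks the winner,
# as get_best_network_detection_method does over Frangi and Sato results.
all_results = [
    {'method': 'f_[0.5, 1.0]_thresh', 'quality': 0.62},
    {'method': 's_[0.8, 1.5]_thresh', 'quality': 0.71},
    {'method': 'f_[1.0, 2.0]_roll',   'quality': 0.55},
]
quality_metrics = np.array([r['quality'] for r in all_results])
best_result = all_results[int(np.argmax(quality_metrics))]
```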

Source code in src/cellects/image_analysis/network_functions.py
class NetworkDetection:
    """
    NetworkDetection

    Class for detecting vessels in images using Frangi and Sato filters with various parameter sets.
    It applies different thresholding methods, calculates quality metrics, and selects the best detection method.
    """
    def __init__(self, greyscale_image: NDArray[np.uint8], possibly_filled_pixels: NDArray[np.uint8]=None, add_rolling_window: bool=False, origin_to_add: NDArray[np.uint8]=None, edge_max_width: int=5, best_result: dict=None):
        """
        Initialize the object with given parameters.

        Parameters
        ----------
        greyscale_image : NDArray[np.uint8]
            The input greyscale image.
        possibly_filled_pixels : NDArray[np.uint8], optional
            Image containing possibly filled pixels. Defaults to None.
        add_rolling_window : bool, optional
            Flag to add rolling window. Defaults to False.
        origin_to_add : NDArray[np.uint8], optional
            Origin to add. Defaults to None.
        edge_max_width : int, optional
            Maximal width of network edges. Defaults to 5.
        best_result : dict, optional
            Best result dictionary. Defaults to None.
        """
        self.greyscale_image = greyscale_image
        if possibly_filled_pixels is None:
            self.possibly_filled_pixels = np.ones(self.greyscale_image.shape, dtype=np.uint8)
        else:
            self.possibly_filled_pixels = possibly_filled_pixels
        self.edge_max_width = edge_max_width
        k_size = edge_max_width // 2
        k_size = k_size - k_size % 2 + 1
        self.kernel = create_ellipse(k_size, k_size).astype(np.uint8)
        self.best_result = best_result
        self.add_rolling_window = add_rolling_window
        self.origin_to_add = origin_to_add
        self.frangi_beta = 1.
        self.frangi_gamma = 1.
        self.black_ridges = True

    def apply_frangi_variations(self) -> list:
        """
        Applies various Frangi filter variations with different sigma values and thresholding methods.

        This method applies the Frangi vesselness filter with multiple sets of sigma values
        to detect vessels at different scales. It applies both Otsu thresholding and rolling window
        segmentation to the filtered results and calculates binary quality indices.

        Returns
        -------
        results : list of dict
            A list containing dictionaries with the method name, binary result, quality index,
            filtered image, filter type, rolling window flag, and sigma values used.
        """
        results = []

        # Parameter variations for Frangi filter
        frangi_sigmas = {
            's_fine_vessels': [0.75],
            'fine_vessels': [0.5, 1.0],  # Very fine capillaries, thin fibers
            'small_vessels': [1.0, 2.0],  # Small vessels, fine structures
            'multi_scale_medium': [1.0, 2.0, 3.0],  # Standard multi-scale
            'ultra_fine': [0.3, 0.5, 0.8],  # Ultra-fine structures
            'comprehensive': [0.5, 1.0, 2.0, 4.0],  # Multi-scale
            'retinal_vessels': [1.0, 2.0, 4.0, 8.0],  # Optimized for retinal imaging
            'microscopy': [0.5, 1.0, 1.5, 2.5],  # Microscopy applications
            'broad_spectrum': [0.5, 1.5, 3.0, 6.0, 10.0]
        }

        for i, (key, sigmas) in enumerate(frangi_sigmas.items()):
            # Apply Frangi filter
            frangi_result = frangi(self.greyscale_image, sigmas=sigmas, beta=self.frangi_beta, gamma=self.frangi_gamma, black_ridges=self.black_ridges)

            # Apply both thresholding methods
            # Method 1: Otsu thresholding
            thresh_otsu = threshold_otsu(frangi_result)
            binary_otsu = frangi_result > thresh_otsu
            quality_otsu = binary_quality_index(self.possibly_filled_pixels * binary_otsu)

            # Store results
            results.append({
                'method': f'f_{sigmas}_thresh',
                'binary': binary_otsu,
                'quality': quality_otsu,
                'filtered': frangi_result,
                'filter': 'Frangi',
                'rolling_window': False,
                'sigmas': sigmas
            })
            # Method 2: Rolling window thresholding
            if self.add_rolling_window:
                binary_rolling = rolling_window_segmentation(frangi_result, self.possibly_filled_pixels, patch_size=(10, 10))
                quality_rolling = binary_quality_index(binary_rolling)
                results.append({
                    'method': f'f_{sigmas}_roll',
                    'binary': binary_rolling,
                    'quality': quality_rolling,
                    'filtered': frangi_result,
                    'filter': 'Frangi',
                    'rolling_window': True,
                    'sigmas': sigmas
                })

        return results


    def apply_sato_variations(self) -> list:
        """
        Apply various Sato filter variations to an image and store the results.

        This function applies different parameter sets for the Sato vesselness
        filter to an image, applies two thresholding methods (Otsu and rolling window),
        and stores the results. The function supports optional rolling window
        segmentation based on a configuration flag.

        Returns
        -------
        list of dict
            A list containing dictionaries with the results for each filter variation.
            Each dictionary includes method name, binary image, quality index,
            filtered result, filter type, rolling window flag, and sigma values.
        """
        results = []

        # Parameter variations for Sato filter
        sato_sigmas = {
            'super_small_tubes': [0.01, 0.05, 0.1, 0.15],  #
            'small_tubes': [0.1, 0.2, 0.4, 0.8],  #
            's_thick_ridges': [0.25, 0.75],  # Thick ridges/tubes
            'small_multi_scale': [0.1, 0.2, 0.4, 0.8, 1.6],  #
            'fine_ridges': [0.8, 1.5],  # Fine ridge detection
            'medium_ridges': [1.5, 3.0],  # Medium ridge structures
            'multi_scale_fine': [0.8, 1.5, 2.5],  # Multi-scale fine detection
            'multi_scale_standard': [1.0, 2.5, 5.0],  # Standard multi-scale
            'edge_enhanced': [0.5, 1.0, 2.0],  # Edge-enhanced detection
            'noise_robust': [1.5, 2.5, 4.0],  # Robust to noise
            'fingerprint': [1.0, 1.5, 2.0, 3.0],  # Fingerprint ridge detection
            'geological': [2.0, 5.0, 10.0, 15.0]  # Geological structures
        }

        for i, (key, sigmas) in enumerate(sato_sigmas.items()):
            # Apply sato filter
            sato_result = sato(self.greyscale_image, sigmas=sigmas, black_ridges=self.black_ridges, mode='reflect')

            # Apply both thresholding methods
            # Method 1: Otsu thresholding
            thresh_otsu = threshold_otsu(sato_result)
            binary_otsu = sato_result > thresh_otsu
            quality_otsu = binary_quality_index(self.possibly_filled_pixels * binary_otsu)


            # Store results
            results.append({
                'method': f's_{sigmas}_thresh',
                'binary': binary_otsu,
                'quality': quality_otsu,
                'filtered': sato_result,
                'filter': 'Sato',
                'rolling_window': False,
                'sigmas': sigmas
            })

            # Method 2: Rolling window thresholding
            if self.add_rolling_window:
                binary_rolling = rolling_window_segmentation(sato_result, self.possibly_filled_pixels, patch_size=(10, 10))
                quality_rolling = binary_quality_index(binary_rolling)

                results.append({
                    'method': f's_{sigmas}_roll',
                    'binary': binary_rolling,
                    'quality': quality_rolling,
                    'filtered': sato_result,
                    'filter': 'Sato',
                    'rolling_window': True,
                    'sigmas': sigmas
                })

        return results

    def get_best_network_detection_method(self):
        """
        Get the best network detection method based on quality metrics.

        This function applies Frangi and Sato variations, combines their results,
        calculates quality metrics for each result, and selects the best method.

        Attributes
        ----------
        all_results : list of dicts
            Combined results from Frangi and Sato variations.
        quality_metrics : ndarray of float64
            Quality metrics for each detection result.
        best_idx : int
            Index of the best detection method based on quality metrics.
        best_result : dict
            The best detection result from all possible methods.
        incomplete_network : ndarray of bool
            Binary representation of the best detection result.

        Examples
        ----------
        >>> possibly_filled_pixels = np.zeros((9, 9), dtype=np.uint8)
        >>> possibly_filled_pixels[3:6, 3:6] = 1
        >>> possibly_filled_pixels[1:6, 3] = 1
        >>> possibly_filled_pixels[6:-1, 5] = 1
        >>> possibly_filled_pixels[4, 1:-1] = 1
        >>> greyscale_image = possibly_filled_pixels.copy()
        >>> greyscale_image[greyscale_image > 0] = np.random.randint(170, 255, possibly_filled_pixels.sum())
        >>> greyscale_image[greyscale_image == 0] = np.random.randint(0, 120, possibly_filled_pixels.size - possibly_filled_pixels.sum())
        >>> add_rolling_window=False
        >>> origin_to_add = np.zeros((9, 9), dtype=np.uint8)
        >>> origin_to_add[3:6, 3:6] = 1
        >>> NetDet = NetworkDetection(greyscale_image, possibly_filled_pixels, add_rolling_window, origin_to_add)
        >>> NetDet.get_best_network_detection_method()
        >>> print(NetDet.best_result['method'])
        >>> print(NetDet.best_result['binary'])
        >>> print(NetDet.best_result['quality'])
        >>> print(NetDet.best_result['filtered'])
        >>> print(NetDet.best_result['filter'])
        >>> print(NetDet.best_result['rolling_window'])
        >>> print(NetDet.best_result['sigmas'])
        """
        frangi_res = self.apply_frangi_variations()
        sato_res = self.apply_sato_variations()
        self.all_results = frangi_res + sato_res
        self.quality_metrics = np.array([result['quality'] for result in self.all_results])
        self.best_idx = np.argmax(self.quality_metrics)
        self.best_result = self.all_results[self.best_idx]
        self.incomplete_network = self.best_result['binary'] * self.possibly_filled_pixels
        self.incomplete_network = cv2.morphologyEx(self.incomplete_network, cv2.MORPH_CLOSE, kernel=self.kernel)


    def detect_network(self):
        """
        Process and detect network features in the greyscale image.

        This method applies a frangi or sato filter based on the best result and
        performs segmentation using either rolling window or Otsu's thresholding.
        The final network detection result is stored in `self.incomplete_network`.
        """
        if self.best_result['filter'] == 'Frangi':
            filtered_result = frangi(self.greyscale_image, sigmas=self.best_result['sigmas'], beta=self.frangi_beta, gamma=self.frangi_gamma, black_ridges=self.black_ridges)
        else:
            filtered_result = sato(self.greyscale_image, sigmas=self.best_result['sigmas'], black_ridges=self.black_ridges, mode='reflect')

        if self.best_result['rolling_window']:
            binary_image = rolling_window_segmentation(filtered_result, self.possibly_filled_pixels, patch_size=(10, 10))
        else:
            thresh_otsu = threshold_otsu(filtered_result)
            binary_image = filtered_result > thresh_otsu
        self.incomplete_network = binary_image * self.possibly_filled_pixels
        self.incomplete_network = cv2.morphologyEx(self.incomplete_network.astype(np.uint8), cv2.MORPH_CLOSE, kernel=self.kernel)

    def change_greyscale(self, img: NDArray[np.uint8], first_dict: dict):
        """
        Change the image to greyscale using color space combinations.

        This function converts an input image to greyscale by generating
        and applying a combination of color spaces specified in the dictionary.
        The resulting greyscale image is stored as an attribute of the instance.

        Parameters
        ----------
        img : ndarray of uint8
            The input image to be converted to greyscale.
        first_dict : dict
            Dictionary mapping color space names to the channel weights used
            to build the greyscale combination.
    """
        self.greyscale_image, g2, all_c_spaces, first_pc_vector  = generate_color_space_combination(img, list(first_dict.keys()), first_dict)

    def detect_pseudopods(self, lighter_background: bool, pseudopod_min_size: int=50, only_one_connected_component: bool=True):
        """
        Detect pseudopods in a binary image.

        Identify and process regions that resemble pseudopods based on width, size,
        and connectivity criteria. This function is used to detect and label areas
        that are indicative of pseudopod-like structures within a binary image.

        Parameters
        ----------
        lighter_background : bool
            Boolean flag to indicate if the background should be considered lighter.
        pseudopod_min_size : int, optional
            Minimum size for pseudopods to be considered valid. Default is 50.
        only_one_connected_component : bool, optional
            Flag to ensure only one connected component is kept. Default is True.

        Returns
        -------
        None

        Notes
        -----
        This function modifies internal attributes of the object, specifically setting `self.pseudopods` to an array indicating pseudopod regions.

        Examples
        --------
        >>> net_det.detect_pseudopods(lighter_background=True, pseudopod_min_size=50)
        >>> print(net_det.pseudopods)
        array([[0, 1, ..., 0],
               [0, 0, ..., 0],
               ...,
               [0, 1, ..., 0]], dtype=uint8)

        """
        if self.possibly_filled_pixels.all():
            dist_trans = self.possibly_filled_pixels
        else:
            closed_im = close_holes(self.possibly_filled_pixels)
            dist_trans = distance_transform_edt(closed_im)
            dist_trans = dist_trans.max() - dist_trans
        # Add dilatation of bracket of distances from medial_axis to the multiplication
        if lighter_background:
            grey = self.greyscale_image.max() - self.greyscale_image
        else:
            grey = self.greyscale_image
        if self.origin_to_add is not None:
            dist_trans_ori = distance_transform_edt(1 - self.origin_to_add)
            scored_im = dist_trans * dist_trans_ori * grey
        else:
            scored_im = (dist_trans**2) * grey
        scored_im = bracket_to_uint8_image_contrast(scored_im)
        thresh = threshold_otsu(scored_im)
        thresh = find_threshold_given_mask(scored_im, self.possibly_filled_pixels, min_threshold=thresh)
        high_int_in_periphery = (scored_im > thresh).astype(np.uint8) * self.possibly_filled_pixels

        _, pseudopod_widths = morphology.medial_axis(high_int_in_periphery, return_distance=True, rng=0)
        bin_im = pseudopod_widths >= self.edge_max_width
        dil_bin_im = cv2.dilate(bin_im.astype(np.uint8), kernel=create_ellipse(7, 7).astype(np.uint8), iterations=1)
        bin_im = high_int_in_periphery * dil_bin_im
        nb, shapes, stats, centro = cv2.connectedComponentsWithStats(bin_im)
        true_pseudopods = np.nonzero(stats[:, 4] > pseudopod_min_size)[0][1:]
        true_pseudopods = np.isin(shapes, true_pseudopods)

        # Make sure that the tubes connecting two pseudopods belong to pseudopods if removing pseudopods cuts the network
        complete_network = np.logical_or(true_pseudopods, self.incomplete_network).astype(np.uint8)
        complete_network = cv2.morphologyEx(complete_network.astype(np.uint8), cv2.MORPH_CLOSE, kernel=self.kernel)
        if only_one_connected_component:
            complete_network = keep_one_connected_component(complete_network)
            without_pseudopods = complete_network.copy()
            true_pseudopods = cv2.dilate(true_pseudopods.astype(np.uint8), kernel=self.kernel, iterations=2)
            without_pseudopods[true_pseudopods > 0] = 0
            only_connected_network = keep_one_connected_component(without_pseudopods)
            self.pseudopods = (1 - only_connected_network) * complete_network
            # Merge the connected network with pseudopods to get the complete network
            self.complete_network = np.logical_or(only_connected_network, self.pseudopods).astype(np.uint8)
        else:
            self.pseudopods = true_pseudopods.astype(np.uint8)
            self.complete_network = complete_network
        self.incomplete_network *= (1 - self.pseudopods)
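The peripheral scoring used by `detect_pseudopods` inverts a Euclidean distance transform so that boundary pixels score highest. A minimal sketch with a hypothetical mask:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# A filled 5x5 block: the distance to the background is largest at the
# centre, so inverting it makes peripheral pixels score highest.
filled = np.zeros((9, 9), dtype=np.uint8)
filled[2:7, 2:7] = 1
dist = distance_transform_edt(filled)
score = dist.max() - dist  # high near the periphery, zero at the centre
```

In the method itself this score is further weighted by greyscale intensity (and by the distance to `origin_to_add` when it is provided) before thresholding.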

__init__(greyscale_image, possibly_filled_pixels=None, add_rolling_window=False, origin_to_add=None, edge_max_width=5, best_result=None)

Initialize the object with given parameters.

Parameters:

Name Type Description Default
greyscale_image NDArray[uint8]

The input greyscale image.

required
possibly_filled_pixels NDArray[uint8]

Image containing possibly filled pixels. Defaults to None.

None
add_rolling_window bool

Flag to add rolling window. Defaults to False.

False
origin_to_add NDArray[uint8]

Origin to add. Defaults to None.

None
edge_max_width int

Maximal width of network edges. Defaults to 5.

5
best_result dict

Best result dictionary. Defaults to None.

None
Source code in src/cellects/image_analysis/network_functions.py
def __init__(self, greyscale_image: NDArray[np.uint8], possibly_filled_pixels: NDArray[np.uint8]=None, add_rolling_window: bool=False, origin_to_add: NDArray[np.uint8]=None, edge_max_width: int=5, best_result: dict=None):
    """
    Initialize the object with given parameters.

    Parameters
    ----------
    greyscale_image : NDArray[np.uint8]
        The input greyscale image.
    possibly_filled_pixels : NDArray[np.uint8], optional
        Image containing possibly filled pixels. Defaults to None.
    add_rolling_window : bool, optional
        Flag to add rolling window. Defaults to False.
    origin_to_add : NDArray[np.uint8], optional
        Origin to add. Defaults to None.
    edge_max_width : int, optional
        Maximal width of network edges. Defaults to 5.
    best_result : dict, optional
        Best result dictionary. Defaults to None.
    """
    self.greyscale_image = greyscale_image
    if possibly_filled_pixels is None:
        self.possibly_filled_pixels = np.ones(self.greyscale_image.shape, dtype=np.uint8)
    else:
        self.possibly_filled_pixels = possibly_filled_pixels
    self.edge_max_width = edge_max_width
    k_size = edge_max_width // 2
    k_size = k_size - k_size % 2 + 1
    self.kernel = create_ellipse(k_size, k_size).astype(np.uint8)
    self.best_result = best_result
    self.add_rolling_window = add_rolling_window
    self.origin_to_add = origin_to_add
    self.frangi_beta = 1.
    self.frangi_gamma = 1.
    self.black_ridges = True
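The kernel construction in `__init__` halves `edge_max_width` and then forces the result to the nearest odd value, since an elliptical structuring element needs a central anchor pixel. A minimal standalone sketch of that rule (the helper name `closing_kernel_size` is ours, not Cellects'):

```python
def closing_kernel_size(edge_max_width: int) -> int:
    """Mirror the kernel-size rule in __init__: halve the maximum edge
    width, then force the result to be odd so the elliptical structuring
    element has a central anchor pixel."""
    k_size = edge_max_width // 2
    return k_size - k_size % 2 + 1

# edge_max_width -> kernel size: even intermediates round up to odd
sizes = {w: closing_kernel_size(w) for w in (3, 5, 7, 8)}
```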

apply_frangi_variations()

Applies various Frangi filter variations with different sigma values and thresholding methods.

This method applies the Frangi vesselness filter with multiple sets of sigma values to detect vessels at different scales. It applies both Otsu thresholding and rolling window segmentation to the filtered results and calculates binary quality indices.

Returns:

Name Type Description
results list of dict

A list containing dictionaries with the method name, binary result, quality index, filtered image, filter type, rolling window flag, and sigma values used.

Source code in src/cellects/image_analysis/network_functions.py
def apply_frangi_variations(self) -> list:
    """
    Applies various Frangi filter variations with different sigma values and thresholding methods.

    This method applies the Frangi vesselness filter with multiple sets of sigma values
    to detect vessels at different scales. It applies both Otsu thresholding and rolling window
    segmentation to the filtered results and calculates binary quality indices.

    Returns
    -------
    results : list of dict
        A list containing dictionaries with the method name, binary result, quality index,
        filtered image, filter type, rolling window flag, and sigma values used.
    """
    results = []

    # Parameter variations for Frangi filter
    frangi_sigmas = {
        's_fine_vessels': [0.75],
        'fine_vessels': [0.5, 1.0],  # Very fine capillaries, thin fibers
        'small_vessels': [1.0, 2.0],  # Small vessels, fine structures
        'multi_scale_medium': [1.0, 2.0, 3.0],  # Standard multi-scale
        'ultra_fine': [0.3, 0.5, 0.8],  # Ultra-fine structures
        'comprehensive': [0.5, 1.0, 2.0, 4.0],  # Multi-scale
        'retinal_vessels': [1.0, 2.0, 4.0, 8.0],  # Optimized for retinal imaging
        'microscopy': [0.5, 1.0, 1.5, 2.5],  # Microscopy applications
        'broad_spectrum': [0.5, 1.5, 3.0, 6.0, 10.0]
    }

    for i, (key, sigmas) in enumerate(frangi_sigmas.items()):
        # Apply Frangi filter
        frangi_result = frangi(self.greyscale_image, sigmas=sigmas, beta=self.frangi_beta, gamma=self.frangi_gamma, black_ridges=self.black_ridges)

        # Apply both thresholding methods
        # Method 1: Otsu thresholding
        thresh_otsu = threshold_otsu(frangi_result)
        binary_otsu = frangi_result > thresh_otsu
        quality_otsu = binary_quality_index(self.possibly_filled_pixels * binary_otsu)

        # Store results
        results.append({
            'method': f'f_{sigmas}_thresh',
            'binary': binary_otsu,
            'quality': quality_otsu,
            'filtered': frangi_result,
            'filter': 'Frangi',
            'rolling_window': False,
            'sigmas': sigmas
        })
        # Method 2: Rolling window thresholding
        if self.add_rolling_window:
            binary_rolling = rolling_window_segmentation(frangi_result, self.possibly_filled_pixels, patch_size=(10, 10))
            quality_rolling = binary_quality_index(binary_rolling)
            results.append({
                'method': f'f_{sigmas}_roll',
                'binary': binary_rolling,
                'quality': quality_rolling,
                'filtered': frangi_result,
                'filter': 'Frangi',
                'rolling_window': True,
                'sigmas': sigmas
            })

    return results

apply_sato_variations()

Apply various Sato filter variations to an image and store the results.

This function applies different parameter sets for the Sato vesselness filter to an image, applies two thresholding methods (Otsu and rolling window), and stores the results. The function supports optional rolling window segmentation based on a configuration flag.

Returns:

Type Description
list of dict

A list containing dictionaries with the results for each filter variation. Each dictionary includes method name, binary image, quality index, filtered result, filter type, rolling window flag, and sigma values.

Source code in src/cellects/image_analysis/network_functions.py
def apply_sato_variations(self) -> list:
    """
    Apply various Sato filter variations to an image and store the results.

    This function applies different parameter sets for the Sato vesselness
    filter to an image, applies two thresholding methods (Otsu and rolling window),
    and stores the results. The function supports optional rolling window
    segmentation based on a configuration flag.

    Returns
    -------
    list of dict
        A list containing dictionaries with the results for each filter variation.
        Each dictionary includes method name, binary image, quality index,
        filtered result, filter type, rolling window flag, and sigma values.
    """
    results = []

    # Parameter variations for Sato filter
    sato_sigmas = {
        'super_small_tubes': [0.01, 0.05, 0.1, 0.15],  #
        'small_tubes': [0.1, 0.2, 0.4, 0.8],  #
        's_thick_ridges': [0.25, 0.75],  # Thick ridges/tubes
        'small_multi_scale': [0.1, 0.2, 0.4, 0.8, 1.6],  #
        'fine_ridges': [0.8, 1.5],  # Fine ridge detection
        'medium_ridges': [1.5, 3.0],  # Medium ridge structures
        'multi_scale_fine': [0.8, 1.5, 2.5],  # Multi-scale fine detection
        'multi_scale_standard': [1.0, 2.5, 5.0],  # Standard multi-scale
        'edge_enhanced': [0.5, 1.0, 2.0],  # Edge-enhanced detection
        'noise_robust': [1.5, 2.5, 4.0],  # Robust to noise
        'fingerprint': [1.0, 1.5, 2.0, 3.0],  # Fingerprint ridge detection
        'geological': [2.0, 5.0, 10.0, 15.0]  # Geological structures
    }

    for i, (key, sigmas) in enumerate(sato_sigmas.items()):
        # Apply sato filter
        sato_result = sato(self.greyscale_image, sigmas=sigmas, black_ridges=self.black_ridges, mode='reflect')

        # Apply both thresholding methods
        # Method 1: Otsu thresholding
        thresh_otsu = threshold_otsu(sato_result)
        binary_otsu = sato_result > thresh_otsu
        quality_otsu = binary_quality_index(self.possibly_filled_pixels * binary_otsu)


        # Store results
        results.append({
            'method': f's_{sigmas}_thresh',
            'binary': binary_otsu,
            'quality': quality_otsu,
            'filtered': sato_result,
            'filter': 'Sato',
            'rolling_window': False,
            'sigmas': sigmas
        })

        # Method 2: Rolling window thresholding
        if self.add_rolling_window:
            binary_rolling = rolling_window_segmentation(sato_result, self.possibly_filled_pixels, patch_size=(10, 10))
            quality_rolling = binary_quality_index(binary_rolling)

            results.append({
                'method': f's_{sigmas}_roll',
                'binary': binary_rolling,
                'quality': quality_rolling,
                'filtered': sato_result,
                'filter': 'Sato',
                'rolling_window': True,
                'sigmas': sigmas
            })

    return results

change_greyscale(img, first_dict)

Change the image to greyscale using color space combinations.

This function converts an input image to greyscale by generating and applying a combination of color spaces specified in the dictionary. The resulting greyscale image is stored as an attribute of the instance.

Parameters:

Name Type Description Default
img ndarray of uint8

The input image to be converted to greyscale.

required
first_dict dict

Dictionary specifying the color space combination to apply; its keys name the color spaces passed to generate_color_space_combination.

required
Source code in src/cellects/image_analysis/network_functions.py
def change_greyscale(self, img: NDArray[np.uint8], first_dict: dict):
    """
    Change the image to greyscale using color space combinations.

    This function converts an input image to greyscale by generating
    and applying a combination of color spaces specified in the dictionary.
    The resulting greyscale image is stored as an attribute of the instance.

    Parameters
    ----------
    img : ndarray of uint8
        The input image to be converted to greyscale.
    first_dict : dict
        Dictionary specifying the color space combination to apply; its keys
        name the color spaces passed to `generate_color_space_combination`.
    """
    self.greyscale_image, g2, all_c_spaces, first_pc_vector = generate_color_space_combination(img, list(first_dict.keys()), first_dict)
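As a rough stand-in for `generate_color_space_combination` (whose implementation is not shown here), greyscale conversion by a channel combination can be sketched as a weighted sum of BGR channels; the real helper supports full colour-space conversions, so treat this as illustrative only:

```python
import numpy as np

def combine_bgr_channels(img: np.ndarray, weights: dict) -> np.ndarray:
    """Illustrative stand-in for generate_color_space_combination,
    restricted to a linear combination of the B, G, R channels.
    `weights` maps channel names to coefficients, e.g. {'r': 1, 'g': 1}."""
    channel_index = {'b': 0, 'g': 1, 'r': 2}
    grey = np.zeros(img.shape[:2], dtype=np.float64)
    for name, weight in weights.items():
        grey += weight * img[:, :, channel_index[name]].astype(np.float64)
    # Rescale to the full uint8 range, as the analysis expects uint8 input
    grey -= grey.min()
    if grey.max() > 0:
        grey = np.round(255 * grey / grey.max())
    return grey.astype(np.uint8)

rng = np.random.default_rng(0)
bgr_image = rng.integers(0, 256, (10, 10, 3), dtype=np.uint8)
greyscale = combine_bgr_channels(bgr_image, {'r': 1, 'g': 1})
```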

detect_network()

Process and detect network features in the greyscale image.

This method applies a Frangi or Sato filter, chosen according to the best result, and performs segmentation using either rolling-window or Otsu's thresholding. The final network detection result is stored in self.incomplete_network.

Source code in src/cellects/image_analysis/network_functions.py
def detect_network(self):
    """
    Process and detect network features in the greyscale image.

    This method applies a Frangi or Sato filter, chosen according to the best
    result, and performs segmentation using either rolling-window or Otsu's thresholding.
    The final network detection result is stored in `self.incomplete_network`.
    """
    if self.best_result['filter'] == 'Frangi':
        filtered_result = frangi(self.greyscale_image, sigmas=self.best_result['sigmas'], beta=self.frangi_beta, gamma=self.frangi_gamma, black_ridges=self.black_ridges)
    else:
        filtered_result = sato(self.greyscale_image, sigmas=self.best_result['sigmas'], black_ridges=self.black_ridges, mode='reflect')

    if self.best_result['rolling_window']:
        binary_image = rolling_window_segmentation(filtered_result, self.possibly_filled_pixels, patch_size=(10, 10))
    else:
        thresh_otsu = threshold_otsu(filtered_result)
        binary_image = filtered_result > thresh_otsu
    self.incomplete_network = binary_image * self.possibly_filled_pixels
    self.incomplete_network = cv2.morphologyEx(self.incomplete_network.astype(np.uint8), cv2.MORPH_CLOSE, kernel=self.kernel)
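When `rolling_window` is selected, the filtered image is thresholded patch by patch rather than globally. The exact behaviour of `rolling_window_segmentation` is not reproduced here; the following numpy-only sketch illustrates the general idea of patch-wise thresholding (here, each patch is cut at its own mean):

```python
import numpy as np

def patchwise_threshold(img: np.ndarray, patch_size=(10, 10)) -> np.ndarray:
    """Threshold each patch at its own mean, so local contrast decides
    the cut instead of one global threshold."""
    binary = np.zeros(img.shape, dtype=np.uint8)
    ph, pw = patch_size
    for r in range(0, img.shape[0], ph):
        for c in range(0, img.shape[1], pw):
            patch = img[r:r + ph, c:c + pw]
            binary[r:r + ph, c:c + pw] = patch > patch.mean()
    return binary

# A horizontal gradient: a global threshold keeps only the right half,
# while the patch-wise cut keeps the brighter part of every patch
gradient = np.tile(np.linspace(0, 1, 40), (40, 1))
binary = patchwise_threshold(gradient, patch_size=(10, 10))
```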

detect_pseudopods(lighter_background, pseudopod_min_size=50, only_one_connected_component=True)

Detect pseudopods in a binary image.

Identify and process regions that resemble pseudopods based on width, size, and connectivity criteria. This function is used to detect and label areas that are indicative of pseudopod-like structures within a binary image.

Parameters:

Name Type Description Default
lighter_background bool

Boolean flag to indicate if the background should be considered lighter.

required
pseudopod_min_size int

Minimum size for pseudopods to be considered valid. Default is 50.

50
only_one_connected_component bool

Flag to ensure only one connected component is kept. Default is True.

True

Returns:

Type Description
None
Notes

This function modifies internal attributes of the object, specifically setting self.pseudopods to an array indicating pseudopod regions.

Examples:

>>> net_det.detect_pseudopods(True, pseudopod_min_size=50)
>>> print(net_det.pseudopods)
array([[0, 1, ..., 0],
       [0, 0, ..., 0],
       ...,
       [0, 1, ..., 0]], dtype=uint8)
Source code in src/cellects/image_analysis/network_functions.py
def detect_pseudopods(self, lighter_background: bool, pseudopod_min_size: int=50, only_one_connected_component: bool=True):
    """
    Detect pseudopods in a binary image.

    Identify and process regions that resemble pseudopods based on width, size,
    and connectivity criteria. This function is used to detect and label areas
    that are indicative of pseudopod-like structures within a binary image.

    Parameters
    ----------
    lighter_background : bool
        Boolean flag to indicate if the background should be considered lighter.
    pseudopod_min_size : int, optional
        Minimum size for pseudopods to be considered valid. Default is 50.
    only_one_connected_component : bool, optional
        Flag to ensure only one connected component is kept. Default is True.

    Returns
    -------
    None

    Notes
    -----
    This function modifies internal attributes of the object, specifically setting `self.pseudopods` to an array indicating pseudopod regions.

    Examples
    --------
    >>> net_det.detect_pseudopods(True, pseudopod_min_size=50)
    >>> print(net_det.pseudopods)
    array([[0, 1, ..., 0],
           [0, 0, ..., 0],
           ...,
           [0, 1, ..., 0]], dtype=uint8)

    """
    if self.possibly_filled_pixels.all():
        dist_trans = self.possibly_filled_pixels
    else:
        closed_im = close_holes(self.possibly_filled_pixels)
        dist_trans = distance_transform_edt(closed_im)
        dist_trans = dist_trans.max() - dist_trans
    # Add dilatation of bracket of distances from medial_axis to the multiplication
    if lighter_background:
        grey = self.greyscale_image.max() - self.greyscale_image
    else:
        grey = self.greyscale_image
    if self.origin_to_add is not None:
        dist_trans_ori = distance_transform_edt(1 - self.origin_to_add)
        scored_im = dist_trans * dist_trans_ori * grey
    else:
        scored_im = (dist_trans**2) * grey
    scored_im = bracket_to_uint8_image_contrast(scored_im)
    thresh = threshold_otsu(scored_im)
    thresh = find_threshold_given_mask(scored_im, self.possibly_filled_pixels, min_threshold=thresh)
    high_int_in_periphery = (scored_im > thresh).astype(np.uint8) * self.possibly_filled_pixels

    _, pseudopod_widths = morphology.medial_axis(high_int_in_periphery, return_distance=True, rng=0)
    bin_im = pseudopod_widths >= self.edge_max_width
    dil_bin_im = cv2.dilate(bin_im.astype(np.uint8), kernel=create_ellipse(7, 7).astype(np.uint8), iterations=1)
    bin_im = high_int_in_periphery * dil_bin_im
    nb, shapes, stats, centro = cv2.connectedComponentsWithStats(bin_im)
    true_pseudopods = np.nonzero(stats[:, 4] > pseudopod_min_size)[0][1:]
    true_pseudopods = np.isin(shapes, true_pseudopods)

    # Make sure that the tubes connecting two pseudopods belong to pseudopods if removing pseudopods cuts the network
    complete_network = np.logical_or(true_pseudopods, self.incomplete_network).astype(np.uint8)
    complete_network = cv2.morphologyEx(complete_network.astype(np.uint8), cv2.MORPH_CLOSE, kernel=self.kernel)
    if only_one_connected_component:
        complete_network = keep_one_connected_component(complete_network)
        without_pseudopods = complete_network.copy()
        true_pseudopods = cv2.dilate(true_pseudopods.astype(np.uint8), kernel=self.kernel, iterations=2)
        without_pseudopods[true_pseudopods > 0] = 0
        only_connected_network = keep_one_connected_component(without_pseudopods)
        self.pseudopods = (1 - only_connected_network) * complete_network
        # Merge the connected network with pseudopods to get the complete network
        self.complete_network = np.logical_or(only_connected_network, self.pseudopods).astype(np.uint8)
    else:
        self.pseudopods = true_pseudopods.astype(np.uint8)
        self.complete_network = complete_network
    self.incomplete_network *= (1 - self.pseudopods)
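The size filter above keeps components whose area (column 4 of `stats`, i.e. `cv2.CC_STAT_AREA`) exceeds `pseudopod_min_size`, then rebuilds a mask with `np.isin`. The same filtering step can be reproduced with plain numpy, using a hand-made label image in place of the `cv2.connectedComponentsWithStats` output:

```python
import numpy as np

# Hand-made label image standing in for the `shapes` output of
# cv2.connectedComponentsWithStats: background = 0, two components
shapes = np.zeros((6, 6), dtype=np.int32)
shapes[0:4, 0:4] = 1   # component 1, area 16
shapes[5, 0:2] = 2     # component 2, area 2

# Per-label areas, as reported in stats[:, 4]; index 0 is the background
areas = np.bincount(shapes.ravel())

min_size = 5
# Same idiom as the source: keep labels whose area exceeds min_size and
# slice off index 0 (this assumes the background itself passes the size
# test, which holds whenever the background dominates the image)
keep = np.nonzero(areas > min_size)[0][1:]
mask = np.isin(shapes, keep)
```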

get_best_network_detection_method()

Get the best network detection method based on quality metrics.

This function applies Frangi and Sato variations, combines their results, calculates quality metrics for each result, and selects the best method.

Attributes:

Name Type Description
all_results list of dicts

Combined results from Frangi and Sato variations.

quality_metrics ndarray of float64

Quality metrics for each detection result.

best_idx int

Index of the best detection method based on quality metrics.

best_result dict

The best detection result from all possible methods.

incomplete_network ndarray of bool

Binary representation of the best detection result.

Examples:

>>> possibly_filled_pixels = np.zeros((9, 9), dtype=np.uint8)
>>> possibly_filled_pixels[3:6, 3:6] = 1
>>> possibly_filled_pixels[1:6, 3] = 1
>>> possibly_filled_pixels[6:-1, 5] = 1
>>> possibly_filled_pixels[4, 1:-1] = 1
>>> greyscale_image = possibly_filled_pixels.copy()
>>> greyscale_image[greyscale_image > 0] = np.random.randint(170, 255, possibly_filled_pixels.sum())
>>> greyscale_image[greyscale_image == 0] = np.random.randint(0, 120, possibly_filled_pixels.size - possibly_filled_pixels.sum())
>>> add_rolling_window = False
>>> origin_to_add = np.zeros((9, 9), dtype=np.uint8)
>>> origin_to_add[3:6, 3:6] = 1
>>> NetDet = NetworkDetection(greyscale_image, possibly_filled_pixels, add_rolling_window, origin_to_add)
>>> NetDet.get_best_network_detection_method()
>>> print(NetDet.best_result['method'])
>>> print(NetDet.best_result['binary'])
>>> print(NetDet.best_result['quality'])
>>> print(NetDet.best_result['filtered'])
>>> print(NetDet.best_result['filter'])
>>> print(NetDet.best_result['rolling_window'])
>>> print(NetDet.best_result['sigmas'])
Source code in src/cellects/image_analysis/network_functions.py
def get_best_network_detection_method(self):
    """
    Get the best network detection method based on quality metrics.

    This function applies Frangi and Sato variations, combines their results,
    calculates quality metrics for each result, and selects the best method.

    Attributes
    ----------
    all_results : list of dicts
        Combined results from Frangi and Sato variations.
    quality_metrics : ndarray of float64
        Quality metrics for each detection result.
    best_idx : int
        Index of the best detection method based on quality metrics.
    best_result : dict
        The best detection result from all possible methods.
    incomplete_network : ndarray of bool
        Binary representation of the best detection result.

    Examples
    --------
    >>> possibly_filled_pixels = np.zeros((9, 9), dtype=np.uint8)
    >>> possibly_filled_pixels[3:6, 3:6] = 1
    >>> possibly_filled_pixels[1:6, 3] = 1
    >>> possibly_filled_pixels[6:-1, 5] = 1
    >>> possibly_filled_pixels[4, 1:-1] = 1
    >>> greyscale_image = possibly_filled_pixels.copy()
    >>> greyscale_image[greyscale_image > 0] = np.random.randint(170, 255, possibly_filled_pixels.sum())
    >>> greyscale_image[greyscale_image == 0] = np.random.randint(0, 120, possibly_filled_pixels.size - possibly_filled_pixels.sum())
    >>> add_rolling_window = False
    >>> origin_to_add = np.zeros((9, 9), dtype=np.uint8)
    >>> origin_to_add[3:6, 3:6] = 1
    >>> NetDet = NetworkDetection(greyscale_image, possibly_filled_pixels, add_rolling_window, origin_to_add)
    >>> NetDet.get_best_network_detection_method()
    >>> print(NetDet.best_result['method'])
    >>> print(NetDet.best_result['binary'])
    >>> print(NetDet.best_result['quality'])
    >>> print(NetDet.best_result['filtered'])
    >>> print(NetDet.best_result['filter'])
    >>> print(NetDet.best_result['rolling_window'])
    >>> print(NetDet.best_result['sigmas'])
    """
    frangi_res = self.apply_frangi_variations()
    sato_res = self.apply_sato_variations()
    self.all_results = frangi_res + sato_res
    self.quality_metrics = np.array([result['quality'] for result in self.all_results])
    self.best_idx = np.argmax(self.quality_metrics)
    self.best_result = self.all_results[self.best_idx]
    self.incomplete_network = self.best_result['binary'] * self.possibly_filled_pixels
    self.incomplete_network = cv2.morphologyEx(self.incomplete_network, cv2.MORPH_CLOSE, kernel=self.kernel)
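The selection itself is a plain argmax over the per-method quality scores. A minimal sketch with mock result dictionaries (the real ones come from `apply_frangi_variations` and `apply_sato_variations`; only the fields used for selection are included):

```python
import numpy as np

# Mock results in the same shape as the variation methods' dictionaries
all_results = [
    {'method': 'f_[0.75]_thresh', 'quality': 0.42},
    {'method': 's_[0.8, 1.5]_thresh', 'quality': 0.91},
    {'method': 'f_[1.0, 2.0]_roll', 'quality': 0.67},
]

# Same selection rule as get_best_network_detection_method
quality_metrics = np.array([result['quality'] for result in all_results])
best_idx = int(np.argmax(quality_metrics))
best_result = all_results[best_idx]
```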

PickleRick

A class to handle safe file reading and writing operations using pickle.

This class ensures that files are not being accessed concurrently by creating a lock file (PickleRickX.pkl) to signal that the file is open. It includes methods to check for the lock file, write data safely, and read data safely.

Source code in src/cellects/utils/load_display_save.py
class PickleRick:
    """
    A class to handle safe file reading and writing operations using pickle.

    This class ensures that files are not being accessed concurrently by
    creating a lock file (PickleRickX.pkl) to signal that the file is open.
    It includes methods to check for the lock file, write data safely,
    and read data safely.
    """
    def __init__(self, pickle_rick_number=""):
        """
        Initialize a new instance of the class.

        This constructor sets up initial attributes for tracking Rick's state, including
        a boolean flag for waiting for Pickle Rick, a counter, the provided pickle Rick number,
        and the time when the first check was performed.

        Parameters
        ----------
        pickle_rick_number : str, optional
            The number associated with Pickle Rick. Defaults to an empty string.
        """
        self.wait_for_pickle_rick: bool = False
        self.counter = 0
        self.pickle_rick_number = pickle_rick_number
        self.first_check_time = default_timer()

    def _check_that_file_is_not_open(self):
        """
        Check if a specific pickle file exists and handle it accordingly.

        This function checks whether a file named `PickleRick{self.pickle_rick_number}.pkl`
        exists. If the file has not been modified for more than 2 seconds, it is removed.
        The function then updates an attribute to indicate whether the file exists.

        Returns
        -------
        None
            This function does not return any value.
            It updates the `self.wait_for_pickle_rick` attribute.

        Notes
        -----
        This function removes the pickle file if it has not been modified for more than 2 seconds.
        The `self.wait_for_pickle_rick` attribute is updated based on the existence of the file.
        """
        if os.path.isfile(f"PickleRick{self.pickle_rick_number}.pkl"):
            if default_timer() - self.first_check_time > 2:
                os.remove(f"PickleRick{self.pickle_rick_number}.pkl")
            # logging.error((f"Cannot read/write, Trying again... tip: unlock by deleting the file named PickleRick{self.pickle_rick_number}.pkl"))
        self.wait_for_pickle_rick = os.path.isfile(f"PickleRick{self.pickle_rick_number}.pkl")

    def _write_pickle_rick(self):
        """
        Write pickle data to a file for Pickle Rick.

        Raises
        ------
        Exception
            General exception raised if there is any issue with writing the file.
            The error details are logged.

        Notes
        -----
        This function creates a file named `PickleRick{self.pickle_rick_number}.pkl`
        with a dictionary indicating readiness for Pickle Rick.

        Examples
        --------
        >>> obj = PickleRick(pickle_rick_number=1)
        >>> obj._write_pickle_rick()  # creates and writes the lock file PickleRick1.pkl
        """
        try:
            with open(f"PickleRick{self.pickle_rick_number}.pkl", 'wb') as file_to_write:
                pickle.dump({'wait_for_pickle_rick': True}, file_to_write)
        except Exception as exc:
            logging.error(f"Don't know how but Pickle Rick failed... Error is: {exc}")

    def _delete_pickle_rick(self):
        """

        Delete a specific Pickle Rick file.

        Deletes the pickle file associated with the current instance's
        `pickle_rick_number`.

        Raises
        ------
        FileNotFoundError
            If the file with name `PickleRick{self.pickle_rick_number}.pkl` does not exist.
        """
        if os.path.isfile(f"PickleRick{self.pickle_rick_number}.pkl"):
            os.remove(f"PickleRick{self.pickle_rick_number}.pkl")

    def write_file(self, file_content, file_name):
        """
        Write content to a file with error handling and retry logic.

        This function attempts to write the provided content into a file.
        If it fails, it retries up to 100 times with some additional checks
        and delays. Note that the content is serialized using pickle.

        Parameters
        ----------
        file_content : Any
            The data to be written into the file. This will be pickled.
        file_name : str
            The name of the file where data should be written.

        Returns
        -------
        None

        Raises
        ------
        Exception
            If the file cannot be written after 100 attempts, an error is logged.

        Notes
        -----
        This function uses pickle to serialize the data, which can introduce security risks
        if untrusted content is being written. It performs some internal state checks,
        such as verifying that the target file isn't open and whether it should delete
        some internal state, represented by `_delete_pickle_rick`.

        The function implements a retry mechanism with a backoff strategy that can include
        random delays, though the example code does not specify these details explicitly.

        Examples
        --------
        >>> PickleRick().write_file({'key': 'value'}, 'test.pkl')
        """
        self.counter += 1
        if self.counter < 100:
            if self.counter > 95:
                self._delete_pickle_rick()
            # time.sleep(np.random.choice(np.arange(1, os.cpu_count(), 0.5)))
            self._check_that_file_is_not_open()
            if self.wait_for_pickle_rick:
                time.sleep(2)
                self.write_file(file_content, file_name)
            else:
                self._write_pickle_rick()
                try:
                    with open(file_name, 'wb') as file_to_write:
                        pickle.dump(file_content, file_to_write, protocol=0)
                    self._delete_pickle_rick()
                    logging.info(f"Success to write file")
                except Exception as exc:
                    logging.error(f"The Pickle error on the file {file_name} is: {exc}")
                    self._delete_pickle_rick()
                    self.write_file(file_content, file_name)
        else:
            logging.error(f"Failed to write {file_name}")

    def read_file(self, file_name):
        """
        Reads the contents of a file using pickle and returns it.

        Parameters
        ----------
        file_name : str
            The name of the file to be read.

        Returns
        -------
        Union[Any, None]
            The content of the file if successfully read; otherwise, `None`.

        Raises
        ------
        Exception
            If there is an error reading the file.

        Notes
        -----
        This function attempts to read a file multiple times if it fails.
        If the number of attempts exceeds 1000, it logs an error and returns `None`.

        Examples
        --------
        >>> PickleRick().read_file("example.pkl")
        """
        self.counter += 1
        if self.counter < 1000:
            if self.counter > 950:
                self._delete_pickle_rick()
            self._check_that_file_is_not_open()
            if self.wait_for_pickle_rick:
                time.sleep(2)
                return self.read_file(file_name)
            else:
                self._write_pickle_rick()
                try:
                    with open(file_name, 'rb') as fileopen:
                        file_content = pickle.load(fileopen)
                except Exception as exc:
                    logging.error(f"The Pickle error on the file {file_name} is: {exc}")
                    file_content = None
                self._delete_pickle_rick()
                if file_content is None:
                    return self.read_file(file_name)
                logging.info("Successfully read file")
                return file_content
        else:
            logging.error(f"Failed to read {file_name}")
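The lock protocol is simple: the presence of a `PickleRick{n}.pkl` sentinel file signals that the resource is held, and a sentinel that looks stale is force-removed. A self-contained sketch of that check, following the docstring's description (staleness judged by file modification time; the class itself measures elapsed time since its own first check) and run in a temporary directory so no real lock files are touched:

```python
import os
import tempfile
import time

def lock_is_held(lock_path: str, stale_after: float = 2.0) -> bool:
    """Sketch of _check_that_file_is_not_open: a lock file older than
    `stale_after` seconds is treated as stale and removed."""
    if os.path.isfile(lock_path):
        if time.time() - os.path.getmtime(lock_path) > stale_after:
            os.remove(lock_path)
    return os.path.isfile(lock_path)

with tempfile.TemporaryDirectory() as tmp:
    lock = os.path.join(tmp, "PickleRick1.pkl")
    free_before = not lock_is_held(lock)      # no lock file yet
    open(lock, "wb").close()                  # acquire the lock
    held = lock_is_held(lock)                 # fresh lock is respected
    os.utime(lock, (time.time() - 10, time.time() - 10))  # backdate mtime
    free_after = not lock_is_held(lock)       # stale lock is removed
```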

__init__(pickle_rick_number='')

Initialize a new instance of the class.

This constructor sets up initial attributes for tracking Rick's state, including a boolean flag for waiting for Pickle Rick, a counter, the provided pickle Rick number, and the time when the first check was performed.

Parameters:

Name Type Description Default
pickle_rick_number str

The number associated with Pickle Rick. Defaults to an empty string.

''
Source code in src/cellects/utils/load_display_save.py
def __init__(self, pickle_rick_number=""):
    """
    Initialize a new instance of the class.

    This constructor sets up initial attributes for tracking Rick's state, including
    a boolean flag for waiting for Pickle Rick, a counter, the provided pickle Rick number,
    and the time when the first check was performed.

    Parameters
    ----------
    pickle_rick_number : str, optional
        The number associated with Pickle Rick. Defaults to an empty string.
    """
    self.wait_for_pickle_rick: bool = False
    self.counter = 0
    self.pickle_rick_number = pickle_rick_number
    self.first_check_time = default_timer()

read_file(file_name)

Reads the contents of a file using pickle and returns it.

Parameters:

Name Type Description Default
file_name str

The name of the file to be read.

required

Returns:

Type Description
Union[Any, None]

The content of the file if successfully read; otherwise, None.

Raises:

Type Description
Exception

If there is an error reading the file.

Notes

This function attempts to read a file multiple times if it fails. If the number of attempts exceeds 1000, it logs an error and returns None.

Examples:

>>> PickleRick().read_file("example.pkl")
Source code in src/cellects/utils/load_display_save.py
def read_file(self, file_name):
    """
    Reads the contents of a file using pickle and returns it.

    Parameters
    ----------
    file_name : str
        The name of the file to be read.

    Returns
    -------
    Union[Any, None]
        The content of the file if successfully read; otherwise, `None`.

    Raises
    ------
    Exception
        If there is an error reading the file.

    Notes
    -----
    This function attempts to read a file multiple times if it fails.
    If the number of attempts exceeds 1000, it logs an error and returns `None`.

    Examples
    --------
    >>> PickleRick().read_file("example.pkl")
    """
    self.counter += 1
    if self.counter < 1000:
        if self.counter > 950:
            self._delete_pickle_rick()
        self._check_that_file_is_not_open()
        if self.wait_for_pickle_rick:
            time.sleep(2)
            self.read_file(file_name)
        else:
            self._write_pickle_rick()
            try:
                with open(file_name, 'rb') as fileopen:
                    file_content = pickle.load(fileopen)
            except Exception as exc:
                logging.error(f"The Pickle error on the file {file_name} is: {exc}")
                file_content = None
            self._delete_pickle_rick()
            if file_content is None:
                self.read_file(file_name)
            else:
                logging.info(f"Success to read file")
            return file_content
    else:
        logging.error(f"Failed to read {file_name}")

write_file(file_content, file_name)

Write content to a file with error handling and retry logic.

This function attempts to write the provided content into a file. If it fails, it retries up to 100 times with some additional checks and delays. Note that the content is serialized using pickle.

Parameters:

Name Type Description Default
file_content Any

The data to be written into the file. This will be pickled.

required
file_name str

The name of the file where data should be written.

required

Returns:

Type Description
None

Raises:

Type Description
Exception

If the file cannot be written after 100 attempts, an error is logged.

Notes

This function uses pickle to serialize the data, which can introduce security risks if untrusted content is being written. It performs some internal state checks, such as verifying that the target file isn't open and whether it should delete some internal state, represented by _delete_pickle_rick.

Retries wait two seconds when another process holds the lock; a randomized per-attempt delay exists in the source but is commented out.

Examples:

>>> result = PickleRick().write_file({'key': 'value'}, 'test.pkl')
Success to write file
Source code in src/cellects/utils/load_display_save.py
def write_file(self, file_content, file_name):
    """
    Write content to a file with error handling and retry logic.

    This function attempts to write the provided content into a file.
    If it fails, it retries up to 100 times with some additional checks
    and delays. Note that the content is serialized using pickle.

    Parameters
    ----------
    file_content : Any
        The data to be written into the file. This will be pickled.
    file_name : str
        The name of the file where data should be written.

    Returns
    -------
    None

    Raises
    ------
    Exception
        If the file cannot be written after 100 attempts, an error is logged.

    Notes
    -----
    This function uses pickle to serialize the data, which can introduce security risks
    if untrusted content is being written. It performs some internal state checks,
    such as verifying that the target file isn't open and whether it should delete
    some internal state, represented by `_delete_pickle_rick`.

    Retries wait two seconds when another process holds the lock; a randomized
    per-attempt delay exists in the source but is commented out.

    Examples
    --------
    >>> result = PickleRick().write_file({'key': 'value'}, 'test.pkl')
    Success to write file
    """
    self.counter += 1
    if self.counter < 100:
        if self.counter > 95:
            self._delete_pickle_rick()
        # time.sleep(np.random.choice(np.arange(1, os.cpu_count(), 0.5)))
        self._check_that_file_is_not_open()
        if self.wait_for_pickle_rick:
            time.sleep(2)
            self.write_file(file_content, file_name)
        else:
            self._write_pickle_rick()
            try:
                with open(file_name, 'wb') as file_to_write:
                    pickle.dump(file_content, file_to_write, protocol=0)
                self._delete_pickle_rick()
                logging.info(f"Success to write file")
            except Exception as exc:
                logging.error(f"The Pickle error on the file {file_name} is: {exc}")
                self._delete_pickle_rick()
                self.write_file(file_content, file_name)
    else:
        logging.error(f"Failed to write {file_name}")
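The PickleRick class above guards pickle I/O with a sentinel "lock" file and bounded retries. The same idea can be sketched as a pair of standalone functions (the function names, lock-file suffix, and retry counts here are hypothetical, not part of Cellects):

```python
import os
import pickle
import tempfile
import time

LOCK_SUFFIX = ".lock"  # hypothetical sentinel name; Cellects uses its own marker file

def locked_pickle_write(obj, path, attempts=5, delay=0.0):
    """Write obj to path, guarded by a sentinel lock file with bounded retries."""
    lock = path + LOCK_SUFFIX
    for _ in range(attempts):
        if os.path.exists(lock):        # another writer holds the sentinel
            time.sleep(delay)
            continue
        open(lock, "w").close()         # acquire (not atomic; a sketch only)
        try:
            with open(path, "wb") as fh:
                pickle.dump(obj, fh)
            return True
        finally:
            os.remove(lock)             # release the sentinel
    return False

def locked_pickle_read(path, attempts=5, delay=0.0):
    """Read a pickle from path, waiting out the sentinel and retrying on errors."""
    lock = path + LOCK_SUFFIX
    for _ in range(attempts):
        if os.path.exists(lock):
            time.sleep(delay)
            continue
        try:
            with open(path, "rb") as fh:
                return pickle.load(fh)
        except Exception:
            time.sleep(delay)
    return None
```

Unlike a real file lock (e.g. `fcntl` or `portalocker`), the sentinel check and creation are two separate steps, so this only mitigates, not prevents, concurrent access.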

ad_pad(arr)

Pad the input array with a single layer of zeros around its edges.

Parameters:

Name Type Description Default
arr ndarray

The input array to pad. Must be at least 2-dimensional.

required

Returns:

Name Type Description
padded_arr ndarray

The output array with a single 0-padded layer around its edges.

Notes

This function uses NumPy's pad with mode='constant' to add a single layer of zeros around the edges of the input array.

Examples:

>>> arr = np.array([[1, 2], [3, 4]])
>>> ad_pad(arr)
array([[0, 0, 0, 0],
       [0, 1, 2, 0],
       [0, 3, 4, 0],
       [0, 0, 0, 0]])
Source code in src/cellects/image_analysis/network_functions.py
def ad_pad(arr: NDArray) -> NDArray:
    """
    Pad the input array with a single layer of zeros around its edges.

    Parameters
    ----------
    arr : ndarray
        The input array to pad. Must be at least 2-dimensional.

    Returns
    -------
    padded_arr : ndarray
        The output array with a single 0-padded layer around its edges.

    Notes
    -----
    This function uses NumPy's `pad` with mode='constant' to add a single layer
    of zeros around the edges of the input array.

    Examples
    --------
    >>> arr = np.array([[1, 2], [3, 4]])
    >>> ad_pad(arr)
    array([[0, 0, 0, 0],
           [0, 1, 2, 0],
           [0, 3, 4, 0],
           [0, 0, 0, 0]])
    """
    return np.pad(arr, [(1, ), (1, )], mode='constant')

add_padding(array_list)

Add padding to each 2D array in a list.

Parameters:

Name Type Description Default
array_list list of ndarrays

List of 2D NumPy arrays to be processed.

required

Returns:

Name Type Description
out list of ndarrays

List of 2D NumPy arrays, each with a single layer of zeros added around its edges.

Examples:

>>> array_list = [np.array([[1, 2], [3, 4]])]
>>> padded_list = add_padding(array_list)
>>> print(padded_list[0])
[[0 0 0 0]
 [0 1 2 0]
 [0 3 4 0]
 [0 0 0 0]]
Source code in src/cellects/image_analysis/network_functions.py
def add_padding(array_list: list) -> list:
    """
    Add padding to each 2D array in a list.

    Parameters
    ----------
    array_list : list of ndarrays
        List of 2D NumPy arrays to be processed.

    Returns
    -------
    out : list of ndarrays
        List of 2D NumPy arrays, each with a single layer of zeros added around its edges.

    Examples
    --------
    >>> array_list = [np.array([[1, 2], [3, 4]])]
    >>> padded_list = add_padding(array_list)
    >>> print(padded_list[0])
    [[0 0 0 0]
     [0 1 2 0]
     [0 3 4 0]
     [0 0 0 0]]
    """
    new_array_list = []
    for arr in array_list:
        new_array_list.append(ad_pad(arr))
    return new_array_list

binary_quality_index(binary_img)

Calculate the binary quality index for a binary image.

The binary quality index is the squared perimeter of the largest connected component in the binary image, normalized by the total number of foreground pixels.

Parameters:

Name Type Description Default
binary_img ndarray of uint8

Input binary image array.

required

Returns:

Name Type Description
out float

The binary quality index value.

Source code in src/cellects/image_analysis/image_segmentation.py
def binary_quality_index(binary_img: NDArray[np.uint8]) -> float:
    """
    Calculate the binary quality index for a binary image.

    The binary quality index is the squared perimeter of the largest
    connected component in the binary image, normalized by the total
    number of foreground pixels.

    Parameters
    ----------
    binary_img : ndarray of uint8
        Input binary image array.

    Returns
    -------
    out : float
        The binary quality index value.
    """
    if np.any(binary_img):
        # SD = ShapeDescriptors(binary_img, ["euler_number"])
        # index = - SD.descriptors['euler_number']
        size, largest_cc = get_largest_connected_component(binary_img)
        index = np.square(perimeter(largest_cc)) / binary_img.sum()
        # index = (largest_cc.sum() * perimeter(largest_cc)) / binary_img.sum()
    else:
        index = 0.
    return index
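The index itself is a simple ratio. A rough standalone sketch using SciPy's connected-component labelling and a plain edge-count perimeter (Cellects uses skimage's weighted `perimeter` estimate, so exact values will differ):

```python
import numpy as np
from scipy import ndimage

def quality_index(binary_img):
    """Squared perimeter of the largest component over total foreground area.

    Approximation: the perimeter here counts exposed foreground edges in the
    four cardinal directions, not skimage's weighted perimeter estimate.
    """
    if not binary_img.any():
        return 0.0
    labels, n = ndimage.label(binary_img)               # 4-connected components
    sizes = ndimage.sum(binary_img, labels, range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)          # keep the biggest one
    padded = np.pad(largest, 1)
    # Count foreground pixels whose neighbour in each direction is background
    edges = sum(np.count_nonzero(padded & ~np.roll(padded, s, axis=a))
                for a in (0, 1) for s in (1, -1))
    return edges ** 2 / binary_img.sum()
```

For a solid 4x4 square, each side exposes 4 edges, giving 16² / 16 = 16.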

bracket_to_uint8_image_contrast(image)

Stretch an image's contrast to the full uint8 range.

This function normalizes the input image by rescaling its values so that the minimum maps to 0 and the maximum to 255, then converts the result to uint8.

Parameters:

Name Type Description Default
image ndarray

Input image as a numeric NumPy array (integer or floating-point).

required

Returns:

Type Description
ndarray of uint8

Output image converted to uint8 type after normalization.

Examples:

>>> image = np.random.randint(0, 255, (10, 10), dtype=np.uint8)
>>> res = bracket_to_uint8_image_contrast(image)
>>> print(res)
>>> image = np.zeros((10, 10), dtype=np.uint8)
>>> res = bracket_to_uint8_image_contrast(image)
>>> print(res)
Source code in src/cellects/utils/formulas.py
@njit()
def bracket_to_uint8_image_contrast(image: NDArray):
    """
    Stretch an image's contrast to the full uint8 range.

    This function normalizes the input image by rescaling its values so that
    the minimum maps to 0 and the maximum to 255, then converts the result
    to uint8.

    Parameters
    ----------
    image : ndarray
        Input image as a numeric NumPy array (integer or floating-point).

    Returns
    -------
    ndarray of uint8
        Output image converted to uint8 type after normalization.

    Examples
    --------
    >>> image = np.random.randint(0, 255, (10, 10), dtype=np.uint8)
    >>> res = bracket_to_uint8_image_contrast(image)
    >>> print(res)

    >>> image = np.zeros((10, 10), dtype=np.uint8)
    >>> res = bracket_to_uint8_image_contrast(image)
    >>> print(res)
    """
    image -= image.min()
    if image.max() == 0:
        return np.zeros_like(image, dtype=np.uint8)
    else:
        return to_uint8(255 * (image / np.max(image)))
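For reference, the same min-max stretch can be written in plain NumPy without Numba; this sketch also copies the input rather than modifying it in place, unlike the njit version above:

```python
import numpy as np

def stretch_to_uint8(image):
    """Min-max normalize image to span [0, 255] and cast to uint8.

    Plain-NumPy equivalent of the njit-compiled original; the input
    array is left untouched.
    """
    img = image.astype(np.float64) - image.min()
    peak = img.max()
    if peak == 0:                       # constant image: return all zeros
        return np.zeros_like(image, dtype=np.uint8)
    return (255 * img / peak).astype(np.uint8)
```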

close_holes(binary_img)

Close holes in a binary image using connected components analysis.

This function fills enclosed background regions (holes) within the foreground objects of a binary image. Connected-component analysis labels the inverted image, and every background component except the first-labeled one (normally the outer, border-connected background) is filled.

Parameters:

Name Type Description Default
binary_img ndarray of uint8

Binary input image where holes need to be closed.

required

Returns:

Name Type Description
out ndarray of uint8

Binary image with closed holes.

Examples:

>>> binary_img = np.zeros((10, 10), dtype=np.uint8)
>>> binary_img[2:8, 2:8] = 1
>>> binary_img[4:6, 4:6] = 0  # Creating a hole
>>> result = close_holes(binary_img)
>>> print(result)
[[0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0]
 [0 0 1 1 1 1 1 1 0 0]
 [0 0 1 1 1 1 1 1 0 0]
 [0 0 1 1 1 1 1 1 0 0]
 [0 0 1 1 1 1 1 1 0 0]
 [0 0 1 1 1 1 1 1 0 0]
 [0 0 1 1 1 1 1 1 0 0]
 [0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0]]
Source code in src/cellects/image_analysis/morphological_operations.py
def close_holes(binary_img: NDArray[np.uint8]) -> NDArray[np.uint8]:
    """
    Close holes in a binary image using connected components analysis.

    This function fills enclosed background regions (holes) within the
    foreground objects of a binary image. Connected-component analysis labels
    the inverted image, and every background component except the first-labeled
    one (normally the outer, border-connected background) is filled.

    Parameters
    ----------
    binary_img : ndarray of uint8
        Binary input image where holes need to be closed.

    Returns
    -------
    out : ndarray of uint8
        Binary image with closed holes.

    Examples
    --------
    >>> binary_img = np.zeros((10, 10), dtype=np.uint8)
    >>> binary_img[2:8, 2:8] = 1
    >>> binary_img[4:6, 4:6] = 0  # Creating a hole
    >>> result = close_holes(binary_img)
    >>> print(result)
    [[0 0 0 0 0 0 0 0 0 0]
     [0 0 0 0 0 0 0 0 0 0]
     [0 0 1 1 1 1 1 1 0 0]
     [0 0 1 1 1 1 1 1 0 0]
     [0 0 1 1 1 1 1 1 0 0]
     [0 0 1 1 1 1 1 1 0 0]
     [0 0 1 1 1 1 1 1 0 0]
     [0 0 1 1 1 1 1 1 0 0]
     [0 0 0 0 0 0 0 0 0 0]
     [0 0 0 0 0 0 0 0 0 0]]
    """
    if binary_img.dtype != np.uint8:
        binary_img = binary_img.astype(np.uint8)
    nb, new_order = cv2.connectedComponents(1 - binary_img)
    if nb > 2:
        binary_img[new_order > 1] = 1
    return binary_img
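When the true background touches the image border, the behaviour matches SciPy's `binary_fill_holes`, which offers a quick cross-check (a sketch, not the Cellects implementation):

```python
import numpy as np
from scipy import ndimage

def close_holes_scipy(binary_img):
    """Fill every enclosed background region of a binary image.

    scipy.ndimage.binary_fill_holes fills all background components that do
    not touch the border, matching the connected-components approach above
    whenever the true background is border-connected.
    """
    return ndimage.binary_fill_holes(binary_img).astype(np.uint8)
```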

create_empty_videos(image_list, cr, lose_accuracy_to_save_memory, already_greyscale, csc_dict)

Create empty video arrays based on input parameters.

Parameters:

Name Type Description Default
image_list list

List of images.

required
cr list

Crop region defined by [x_start, y_start, x_end, y_end].

required
lose_accuracy_to_save_memory bool

Boolean flag to determine if memory should be saved by using uint8 data type.

required
already_greyscale bool

Boolean flag indicating if the images are already in greyscale format.

required
csc_dict dict

Dictionary containing color space conversion settings, including 'logical' key.

required

Returns:

Type Description
tuple

A tuple containing three elements:

- visu: NumPy array with shape (len(image_list), cr[1] - cr[0], cr[3] - cr[2], 3) and dtype uint8, for RGB images.
- converted_video: NumPy array with shape (len(image_list), cr[1] - cr[0], cr[3] - cr[2]) and dtype uint8 or float according to lose_accuracy_to_save_memory.
- converted_video2: NumPy array with the same shape as converted_video and dtype uint8 or float according to lose_accuracy_to_save_memory.

Notes

Performance considerations: - If lose_accuracy_to_save_memory is True, the function uses np.uint8 for memory efficiency. - If already_greyscale is False, additional arrays are created to store RGB data.

Source code in src/cellects/utils/load_display_save.py
def create_empty_videos(image_list: list, cr: list, lose_accuracy_to_save_memory: bool,
                        already_greyscale: bool, csc_dict: dict):
    """
    Create empty video arrays based on input parameters.

    Parameters
    ----------
    image_list : list
        List of images.
    cr : list
        Crop region defined by [x_start, y_start, x_end, y_end].
    lose_accuracy_to_save_memory : bool
        Boolean flag to determine if memory should be saved by using uint8 data type.
    already_greyscale : bool
        Boolean flag indicating if the images are already in greyscale format.
    csc_dict : dict
        Dictionary containing color space conversion settings, including 'logical' key.

    Returns
    -------
    tuple
        A tuple containing three elements:
            - `visu`: NumPy array with shape (len(image_list), cr[1] - cr[0], cr[3] - cr[2], 3) and dtype uint8 for RGB images.
            - `converted_video`: NumPy array with shape (len(image_list), cr[1] - cr[0], cr[3] - cr[2]) and dtype uint8 or float according to `lose_accuracy_to_save_memory`.
            - `converted_video2`: NumPy array with shape same as `converted_video` and dtype uint8 or float according to `lose_accuracy_to_save_memory`.

    Notes
    -----
    Performance considerations:
        - If `lose_accuracy_to_save_memory` is True, the function uses np.uint8 for memory efficiency.
        - If `already_greyscale` is False, additional arrays are created to store RGB data.
    """
    visu, converted_video, converted_video2 = None, None, None
    dims = len(image_list), cr[1] - cr[0], cr[3] - cr[2]
    if lose_accuracy_to_save_memory:
        converted_video = np.zeros(dims, dtype=np.uint8)
    else:
        converted_video = np.zeros(dims, dtype=float)
    if not already_greyscale:
        visu = np.zeros((dims[0], dims[1], dims[2], 3), dtype=np.uint8)
        if csc_dict['logical'] != 'None':
            if lose_accuracy_to_save_memory:
                converted_video2 = np.zeros(dims, dtype=np.uint8)
            else:
                converted_video2 = np.zeros(dims, dtype=float)
    return visu, converted_video, converted_video2
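The shape bookkeeping can be verified with a quick sketch using hypothetical crop values; note the end-exclusive crop (no +1) and the 8x memory saving that `lose_accuracy_to_save_memory` buys by using uint8 instead of float64:

```python
import numpy as np

# Hypothetical crop region [y_start, y_end, x_start, x_end] and frame count
cr = [10, 110, 20, 180]
n_frames = 50

# Mirrors the dims computation in create_empty_videos: end-exclusive, no +1
dims = (n_frames, cr[1] - cr[0], cr[3] - cr[2])
uint8_video = np.zeros(dims, dtype=np.uint8)   # lose_accuracy_to_save_memory=True
float_video = np.zeros(dims, dtype=float)      # lose_accuracy_to_save_memory=False

# float64 buffers are eight times larger than uint8 ones
ratio = float_video.nbytes / uint8_video.nbytes
```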

detect_network_dynamics(converted_video, binary, arena_label=1, starting_time=0, visu=None, origin=None, smooth_segmentation_over_time=True, edge_max_width=5, detect_pseudopods=True, save_coord_network=True, show_seg=False)

Detects and tracks dynamic features (e.g., pseudopods) in a biological network over time from video data.

Analyzes spatiotemporal dynamics of a network structure using binary masks and grayscale video data. Processes each frame to detect network components, optionally identifies pseudopods, applies temporal smoothing, and generates visualization overlays. Saves coordinate data for detected networks if enabled.

Parameters:

Name Type Description Default
converted_video NDArray

Input video data array with shape (time, y, x) representing grayscale intensities.

required
binary NDArray[uint8]

Binary mask array with shape (time, y, x) indicating filled regions in each frame.

required
arena_label int

Unique identifier for the current processing arena/session to name saved output files.

1
starting_time int

Zero-based index of the first frame to begin network detection and analysis from.

0
visu NDArray

Visualization video array with shape (time, y, x, 3) holding RGB channels for overlay rendering.

None
origin NDArray[uint8]

Binary mask defining a central region of interest to exclude from network detection.

None
smooth_segmentation_over_time (bool, optional(default=True))

Flag indicating whether to apply temporal smoothing using adjacent frame data.

True
edge_max_width int

Maximal width of network edges. Defaults to 5.

5
detect_pseudopods (bool, optional(default=True))

Determines if pseudopod regions should be detected and merged with the network.

True
save_coord_network (bool, optional(default=True))

Controls saving of detected network/pseudopod coordinates as NumPy arrays.

True
show_seg (bool, optional(default=False))

Enables real-time visualization display during processing.

False

Returns:

Type Description
tuple of ndarray

A pair (network_coord, pseudopod_coord) of np.nonzero-style coordinate arrays locating the detected network and, when detect_pseudopods is True, the pseudopod regions; pseudopod_coord is None when detect_pseudopods is False.

Notes
  • Processing is memory-intensive: several full-video buffers are allocated alongside the input arrays.
  • Temporal smoothing effectiveness depends on network dynamics consistency between frames.
  • Pseudopod detection requires sufficient contrast with the background in grayscale images.
Source code in src/cellects/image_analysis/network_functions.py
def detect_network_dynamics(converted_video: NDArray, binary: NDArray[np.uint8], arena_label: int=1,
                            starting_time: int=0, visu: NDArray=None, origin: NDArray[np.uint8]=None,
                            smooth_segmentation_over_time: bool = True, edge_max_width:int = 5,
                            detect_pseudopods: bool = True, save_coord_network: bool = True, show_seg: bool = False):
    """
    Detects and tracks dynamic features (e.g., pseudopods) in a biological network over time from video data.

    Analyzes spatiotemporal dynamics of a network structure using binary masks and grayscale video data. Processes each frame to detect network components, optionally identifies pseudopods, applies temporal smoothing, and generates visualization overlays. Saves coordinate data for detected networks if enabled.

    Parameters
    ----------
    converted_video : NDArray
        Input video data array with shape (time, y, x) representing grayscale intensities.
    binary : NDArray[np.uint8]
        Binary mask array with shape (time, y, x) indicating filled regions in each frame.
    arena_label : int
        Unique identifier for the current processing arena/session to name saved output files.
    starting_time : int
        Zero-based index of the first frame to begin network detection and analysis from.
    visu : NDArray
        Visualization video array with shape (time, y, x, 3) holding RGB channels for overlay rendering.
    origin : NDArray[np.uint8]
        Binary mask defining a central region of interest to exclude from network detection.
    smooth_segmentation_over_time : bool, optional (default=True)
        Flag indicating whether to apply temporal smoothing using adjacent frame data.
    edge_max_width : int, optional
        Maximal width of network edges. Defaults to 5.
    detect_pseudopods : bool, optional (default=True)
        Determines if pseudopod regions should be detected and merged with the network.
    save_coord_network : bool, optional (default=True)
        Controls saving of detected network/pseudopod coordinates as NumPy arrays.
    show_seg : bool, optional (default=False)
        Enables real-time visualization display during processing.

    Returns
    -------
    tuple of ndarray
        A pair `(network_coord, pseudopod_coord)` of `np.nonzero`-style
        coordinate arrays locating the detected network and, when
        `detect_pseudopods` is True, the pseudopod regions;
        `pseudopod_coord` is None when `detect_pseudopods` is False.

    Notes
    -----
    - Processing is memory-intensive: several full-video buffers are allocated alongside the input arrays.
    - Temporal smoothing effectiveness depends on network dynamics consistency between frames.
    - Pseudopod detection requires sufficient contrast with the background in grayscale images.
    """
    logging.info(f"Arena n°{arena_label}. Starting network detection.")
    # converted_video = self.converted_video; binary=self.binary; arena_label=1; starting_time=0; visu=self.visu; origin=None; smooth_segmentation_over_time=True; detect_pseudopods=True;save_coord_network=True; show_seg=False
    dims = binary.shape
    pseudopod_min_size = 50
    if detect_pseudopods:
        pseudopod_vid = np.zeros_like(binary, dtype=bool)
    potential_network = np.zeros_like(binary, dtype=bool)
    network_dynamics = np.zeros_like(binary, dtype=np.uint8)
    do_convert = True
    if visu is None:
        do_convert = False
        visu = np.stack((converted_video, converted_video, converted_video), axis=3)
        greyscale = converted_video[-1, ...]
    else:
        greyscale = visu[-1, ...].mean(axis=-1)
    NetDet = NetworkDetection(greyscale, possibly_filled_pixels=binary[-1, ...],
                              origin_to_add=origin)
    NetDet.get_best_network_detection_method()
    if do_convert:
        NetDet.greyscale_image = converted_video[-1, ...]
    lighter_background = NetDet.greyscale_image[binary[-1, ...] > 0].mean() < NetDet.greyscale_image[
        binary[-1, ...] == 0].mean()

    for t in np.arange(starting_time, dims[0]):  # 20):#
        if do_convert:
            greyscale = visu[t, ...].mean(axis=-1)
        else:
            greyscale = converted_video[t, ...]
        NetDet_fast = NetworkDetection(greyscale, possibly_filled_pixels=binary[t, ...], origin_to_add=origin,
                                       edge_max_width=edge_max_width, best_result=NetDet.best_result)
        NetDet_fast.detect_network()
        NetDet_fast.greyscale_image = converted_video[t, ...]
        if detect_pseudopods:
            NetDet_fast.detect_pseudopods(lighter_background, pseudopod_min_size=pseudopod_min_size)
            pseudopod_vid[t, ...] = NetDet_fast.pseudopods
        potential_network[t, ...] = NetDet_fast.complete_network
    del NetDet_fast
    del NetDet
    if dims[0] == 1:
        network_dynamics = potential_network
    else:
        for t in np.arange(starting_time, dims[0]):  # 20):#
            if smooth_segmentation_over_time:
                if 2 <= t <= (dims[0] - 2):
                    computed_network = potential_network[(t - 2):(t + 3), :, :].sum(axis=0)
                    computed_network[computed_network == 1] = 0
                    computed_network[computed_network > 1] = 1
                else:
                    if t < 2:
                        computed_network = potential_network[:2, :, :].sum(axis=0)
                    else:
                        computed_network = potential_network[-2:, :, :].sum(axis=0)
                    computed_network[computed_network > 0] = 1
            else:
                computed_network = potential_network[t, :, :].copy()

            if origin is not None:
                computed_network = computed_network * (1 - origin)
                origin_contours = get_contours(origin)
                complete_network = np.logical_or(origin_contours, computed_network).astype(np.uint8)
            else:
                complete_network = computed_network
            complete_network = keep_one_connected_component(complete_network)

            if detect_pseudopods:
                # Make sure that removing pseudopods do not cut the network:
                without_pseudopods = complete_network * (1 - pseudopod_vid[t])
                only_connected_network = keep_one_connected_component(without_pseudopods)
                # # Option A: To add these cutting regions to the pseudopods do:
                pseudopods = (1 - only_connected_network) * complete_network
                pseudopod_vid[t] = pseudopods
            network_dynamics[t] = complete_network

            imtoshow = visu[t, ...]
            eroded_binary = cv2.erode(network_dynamics[t, ...], cross_33)
            net_coord = np.nonzero(network_dynamics[t, ...] - eroded_binary)
            imtoshow[net_coord[0], net_coord[1], :] = (34, 34, 158)
            if show_seg:
                cv2.imshow("", cv2.resize(imtoshow, (1000, 1000)))
                cv2.waitKey(1)
            else:
                visu[t, ...] = imtoshow
            if show_seg:
                cv2.destroyAllWindows()

    network_coord = smallest_memory_array(np.nonzero(network_dynamics), "uint")
    pseudopod_coord = None
    if detect_pseudopods:
        network_dynamics[pseudopod_vid > 0] = 2
        pseudopod_coord = smallest_memory_array(np.nonzero(pseudopod_vid), "uint")
        if save_coord_network:
            write_h5(f"coord_pseudopods{arena_label}_t{dims[0]}_y{dims[1]}_x{dims[2]}.h5", pseudopod_coord)
    if save_coord_network:
        write_h5(f"coord_network{arena_label}_t{dims[0]}_y{dims[1]}_x{dims[2]}.h5", network_coord)
    return network_coord, pseudopod_coord
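The temporal smoothing step keeps a pixel only when it is detected in at least two of the five frames centred on t. The interior-frame branch can be sketched in isolation (border frames use a simple two-frame union in the original):

```python
import numpy as np

def smooth_window(potential_network, t):
    """Keep pixels detected in at least 2 of the 5 frames centred on t.

    Mirrors the interior-frame branch of detect_network_dynamics
    (2 <= t <= T - 2); border frames are handled separately in the original.
    """
    window = potential_network[t - 2:t + 3].sum(axis=0)
    return (window >= 2).astype(np.uint8)

# Toy 7-frame stack: one stable pixel and one single-frame flicker
stack = np.zeros((7, 4, 4), dtype=np.uint8)
stack[:, 1, 1] = 1     # present in every frame: survives smoothing
stack[3, 2, 2] = 1     # present once only: removed as noise
smoothed = smooth_window(stack, 3)
```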

display_boxes(binary_image, box_diameter, show=True)

Display grid lines on a binary image at specified box diameter intervals.

This function displays the given binary image with vertical and horizontal grid lines drawn at regular intervals defined by box_diameter. The function returns the total number of grid lines drawn.

Parameters:

Name Type Description Default
binary_image ndarray

Binary image on which to draw the grid lines.

required
box_diameter int

Diameter of each box in pixels.

required
show bool

Whether to display the figure with plt.show(). Defaults to True.

True

Returns:

Name Type Description
line_nb int

Number of grid lines drawn, both vertical and horizontal.

Examples:

>>> import numpy as np
>>> binary_image = np.random.randint(0, 2, (100, 100), dtype=np.uint8)
>>> display_boxes(binary_image, box_diameter=25)
Source code in src/cellects/utils/load_display_save.py
def display_boxes(binary_image: NDArray, box_diameter: int, show: bool = True):
    """
    Display grid lines on a binary image at specified box diameter intervals.

    This function displays the given binary image with vertical and horizontal
    grid lines drawn at regular intervals defined by `box_diameter`. The function
    returns the total number of grid lines drawn.

    Parameters
    ----------
    binary_image : ndarray
        Binary image on which to draw the grid lines.
    box_diameter : int
        Diameter of each box in pixels.
    show : bool, optional
        Whether to display the figure with `plt.show()`. Defaults to True.

    Returns
    -------
    line_nb : int
        Number of grid lines drawn, both vertical and horizontal.

    Examples
    --------
    >>> import numpy as np
    >>> binary_image = np.random.randint(0, 2, (100, 100), dtype=np.uint8)
    >>> display_boxes(binary_image, box_diameter=25)
    """
    plt.imshow(binary_image, cmap='gray', extent=(0, binary_image.shape[1], 0, binary_image.shape[0]))
    height, width = binary_image.shape
    line_nb = 0
    for x in range(0, width + 1, box_diameter):
        line_nb += 1
        plt.axvline(x=x, color='white', linewidth=1)
    for y in range(0, height + 1, box_diameter):
        line_nb += 1
        plt.axhline(y=y, color='white', linewidth=1)

    if show:
        plt.show()

    return line_nb

display_network_methods(network_detection, save_path=None)

Display segmentation results from a network detection object.

Extended Description

Plots the binary segmentation results for various methods stored in network_detection.all_results. Highlights the best result based on quality metrics and allows for saving the figure to a file.

Parameters:

Name Type Description Default
network_detection object

An object containing segmentation results and quality metrics.

required
save_path str

Path to save the figure. If None, the plot is displayed.

None
Source code in src/cellects/utils/load_display_save.py
def display_network_methods(network_detection: object, save_path: str=None):
    """
    Display segmentation results from a network detection object.

    Extended Description
    --------------------

    Plots the binary segmentation results for various methods stored in ``network_detection.all_results``.
    Highlights the best result based on quality metrics and allows for saving the figure to a file.

    Parameters
    ----------
    network_detection : object
        An object containing segmentation results and quality metrics.
    save_path : str, optional
        Path to save the figure. If ``None``, the plot is displayed.

    """
    row_nb = 6
    fig, axes = plt.subplots(int(np.ceil(len(network_detection.all_results) / row_nb)), row_nb, figsize=(100, 100))
    fig.suptitle(f'Segmentation Comparison: Frangi + Sato Variations', fontsize=16)

    # Plot all results
    for idx, result in enumerate(network_detection.all_results):
        row = idx // row_nb
        col = idx % row_nb

        ax = axes[row, col]

        # Display binary segmentation result
        ax.imshow(result['binary'], cmap='gray')

        # Create title with filter info and quality score
        title = f"{result['method']}: {str(np.round(network_detection.quality_metrics[idx], 0))}"

        # Highlight the best result
        if idx == network_detection.best_idx:
            ax.set_title(title, fontsize=8, color='red', fontweight='bold')
            ax.add_patch(plt.Rectangle((0, 0), result['binary'].shape[1] - 1,
                                       result['binary'].shape[0] - 1,
                                       fill=False, edgecolor='red', linewidth=3))
        else:
            ax.set_title(title, fontsize=8)

        ax.axis('off')
    plt.tight_layout()

    if save_path is not None:
        plt.savefig(save_path, bbox_inches='tight', pad_inches=0., transparent=True, dpi=500)
        plt.close()
    else:
        plt.show()

eudist(v1, v2)

Calculate the Euclidean distance between two points in n-dimensional space.

Parameters:

Name Type Description Default
v1 iterable of float

The coordinates of the first point.

required
v2 iterable of float

The coordinates of the second point.

required

Returns:

Type Description
float

The Euclidean distance between v1 and v2.

Notes

The Euclidean distance is calculated using the standard formula: √((x2 − x1)^2 + (y2 − y1)^2 + ...). If v1 and v2 differ in length, zip silently truncates to the shorter input; no error is raised.

Examples:

>>> v1 = [1.0, 2.0]
>>> v2 = [4.0, 6.0]
>>> eudist(v1, v2)
5.0
>>> v1 = [1.0, 2.0, 3.0]
>>> v2 = [4.0, 6.0, 8.0]
>>> eudist(v1, v2)
7.0710678118654755
Source code in src/cellects/utils/formulas.py
def eudist(v1, v2) -> float:
    """
    Calculate the Euclidean distance between two points in n-dimensional space.

    Parameters
    ----------
    v1 : iterable of float
        The coordinates of the first point.
    v2 : iterable of float
        The coordinates of the second point.

    Returns
    -------
    float
        The Euclidean distance between `v1` and `v2`.

    Raises
    ------
    ValueError
        If `v1` and `v2` do not have the same length.

    Notes
    -----
    The Euclidean distance is calculated using the standard formula:
    √((x2 − x1)^2 + (y2 − y1)^2 + ...).

    Examples
    --------
    >>> v1 = [1.0, 2.0]
    >>> v2 = [4.0, 6.0]
    >>> eudist(v1, v2)
    5.0

    >>> v1 = [1.0, 2.0, 3.0]
    >>> v2 = [4.0, 6.0, 8.0]
    >>> eudist(v1, v2)
    7.0710678118654755
    """
    if len(v1) != len(v2):
        # zip would silently truncate; raise instead, as documented
        raise ValueError("v1 and v2 must have the same length")
    dist = [(a - b)**2 for a, b in zip(v1, v2)]
    dist = np.sqrt(np.sum(dist))
    return dist
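
For whole arrays of paired points, the same distance can be computed in a single vectorized call. A minimal sketch (`eudist_batch` is a hypothetical helper, not part of Cellects):

```python
import numpy as np

def eudist_batch(points_a, points_b):
    """Euclidean distance between paired rows of two (n, d) arrays."""
    points_a = np.asarray(points_a, dtype=float)
    points_b = np.asarray(points_b, dtype=float)
    if points_a.shape != points_b.shape:
        raise ValueError("points_a and points_b must have the same shape")
    # norm over the last axis gives one distance per row
    return np.linalg.norm(points_a - points_b, axis=-1)

distances = eudist_batch([[1.0, 2.0], [0.0, 0.0]], [[4.0, 6.0], [3.0, 4.0]])
```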

extract_graph_dynamics(converted_video, coord_network, arena_label, starting_time=0, origin=None, coord_pseudopods=None)

Extracts dynamic graph data from video frames based on network dynamics.

This function processes time-series binary network structures to extract evolving vertices and edges over time. It computes spatial relationships between networks and an origin point through image processing steps including contour detection, padding for alignment, skeleton extraction, and morphological analysis. Vertex and edge attributes like position, connectivity, width, intensity, and betweenness are compiled into tables saved as CSV files.

Parameters:

Name Type Description Default
converted_video NDArray

3D video data array (t, y, x) containing pixel intensities used for calculating edge intensity attributes during table generation.

required
coord_network NDArray[uint8]

Sparse coordinate array whose first row holds time indices and whose remaining rows hold the (y, x) positions of network pixels.

required
arena_label int

Unique identifier to prefix output filenames corresponding to specific experimental arenas.

required
starting_time (int, optional(default=0))

Time index within coord_network to begin processing from (exclusive of origin initialization).

0
origin (NDArray[uint8], optional(default=None))

Binary mask identifying the region of interest's central origin for spatial reference during network comparison.

None
coord_pseudopods (NDArray, optional(default=None))

Sparse coordinate array, in the same (t, y, x) layout as coord_network, identifying growing areas (pseudopods); used to flag growth-connected vertices.

None

Returns:

Type Description
None
Saves two CSV files in working directory:
1. `vertex_table{arena_label}_t{T}_y{Y}_x{X}.csv` - Vertex table with time, coordinates, and connectivity information
2. `edge_table{arena_label}_t{T}_y{Y}_x{X}.csv` - Edge table containing attributes like length, width, intensity, and betweenness
Notes

Output CSVs use NumPy arrays converted to pandas DataFrames with columns: - Vertex table includes timestamps (t), coordinates (y,x), and connectivity flags. - Edge table contains betweenness centrality calculated during skeleton processing. Origin contours are spatially aligned through padding operations to maintain coordinate consistency across time points.

Source code in src/cellects/image_analysis/network_functions.py
def extract_graph_dynamics(converted_video: NDArray, coord_network: NDArray, arena_label: int,
                            starting_time: int=0, origin: NDArray[np.uint8]=None, coord_pseudopods: NDArray=None):
    """
    Extracts dynamic graph data from video frames based on network dynamics.

    This function processes time-series binary network structures to extract evolving vertices and edges over time. It computes spatial relationships between networks and an origin point through image processing steps including contour detection, padding for alignment, skeleton extraction, and morphological analysis. Vertex and edge attributes like position, connectivity, width, intensity, and betweenness are compiled into tables saved as CSV files.

    Parameters
    ----------
    converted_video : NDArray
        3D video data array (t, y, x) containing pixel intensities used for calculating edge intensity attributes during table generation.
    coord_network : NDArray[np.uint8]
        Sparse coordinate array whose first row holds time indices and whose remaining rows hold the (y, x) positions of network pixels.
    arena_label : int
        Unique identifier to prefix output filenames corresponding to specific experimental arenas.
    starting_time : int, optional (default=0)
        Time index within `coord_network` to begin processing from (exclusive of origin initialization).
    origin : NDArray[np.uint8], optional (default=None)
        Binary mask identifying the region of interest's central origin for spatial reference during network comparison.
    coord_pseudopods : NDArray, optional (default=None)
        Sparse coordinate array, in the same (t, y, x) layout as `coord_network`, identifying growing areas (pseudopods); used to flag growth-connected vertices.

    Returns
    -------
    None
    Saves two CSV files in working directory:
    1. `vertex_table{arena_label}_t{T}_y{Y}_x{X}.csv` - Vertex table with time, coordinates, and connectivity information
    2. `edge_table{arena_label}_t{T}_y{Y}_x{X}.csv` - Edge table containing attributes like length, width, intensity, and betweenness

    Notes
    -----
    Output CSVs use NumPy arrays converted to pandas DataFrames with columns:
    - Vertex table includes timestamps (t), coordinates (y,x), and connectivity flags.
    - Edge table contains betweenness centrality calculated during skeleton processing.
    Origin contours are spatially aligned through padding operations to maintain coordinate consistency across time points.
    """
    logging.info(f"Arena n°{arena_label}. Starting graph extraction.")
    # converted_video = self.converted_video; coord_network=self.coord_network; arena_label=1; starting_time=0; origin=self.origin
    dims = converted_video.shape[:3]
    if origin is not None:
        _, _, _, origin_centroid = cv2.connectedComponentsWithStats(origin)
        origin_centroid = np.round((origin_centroid[1, 1], origin_centroid[1, 0])).astype(np.int64)
        pad_origin_centroid = origin_centroid + 1
        origin_contours = get_contours(origin)
        pad_origin = add_padding([origin])[0]
    else:
        pad_origin_centroid = None
        pad_origin = None
        origin_contours = None
    vertex_table = None
    for t in np.arange(starting_time, dims[0]): # t=320   Y, X = 729, 554
        computed_network = np.zeros((dims[1], dims[2]), dtype=np.uint8)
        net_t = coord_network[1:, coord_network[0, :] == t]
        computed_network[net_t[0], net_t[1]] = 1
        if origin is not None:
            computed_network = computed_network * (1 - origin)
            computed_network = np.logical_or(origin_contours, computed_network).astype(np.uint8)
        else:
            computed_network = computed_network.astype(np.uint8)
        if computed_network.any():
            computed_network = keep_one_connected_component(computed_network)
            pad_network = add_padding([computed_network])[0]
            pad_skeleton, pad_distances, pad_origin_contours = get_skeleton_and_widths(pad_network, pad_origin,
                                                                                           pad_origin_centroid)
            edge_id = EdgeIdentification(pad_skeleton, pad_distances, t)
            edge_id.run_edge_identification()
            if pad_origin_contours is not None:
                origin_contours = remove_padding([pad_origin_contours])[0]
            growing_areas = None
            if coord_pseudopods is not None:
                growing_areas = coord_pseudopods[1:, coord_pseudopods[0, :] == t]
            edge_id.make_vertex_table(origin_contours, growing_areas)
            edge_id.make_edge_table(converted_video[t, ...])
            edge_id.vertex_table = np.hstack((np.repeat(t, edge_id.vertex_table.shape[0])[:, None], edge_id.vertex_table))
            edge_id.edge_table = np.hstack((np.repeat(t, edge_id.edge_table.shape[0])[:, None], edge_id.edge_table))
            if vertex_table is None:
                vertex_table = edge_id.vertex_table.copy()
                edge_table = edge_id.edge_table.copy()
            else:
                vertex_table = np.vstack((vertex_table, edge_id.vertex_table))
                edge_table = np.vstack((edge_table, edge_id.edge_table))

    vertex_table = pd.DataFrame(vertex_table, columns=["t", "y", "x", "vertex_id", "is_tip", "origin",
                                                       "vertex_connected"])
    edge_table = pd.DataFrame(edge_table,
                              columns=["t", "edge_id", "vertex1", "vertex2", "length", "average_width", "intensity",
                                       "betweenness_centrality"])
    vertex_table.to_csv(
        f"vertex_table{arena_label}_t{dims[0]}_y{dims[1]}_x{dims[2]}.csv")
    edge_table.to_csv(
        f"edge_table{arena_label}_t{dims[0]}_y{dims[1]}_x{dims[2]}.csv")

extract_time(pathway='', image_list=None, raw_images=False)

Extract timestamps from a list of images.

This function extracts the DateTimeOriginal or datetime values from the EXIF data of a list of image files, and computes the total time in seconds.

Parameters:

Name Type Description Default
pathway str

Path to the directory containing the images. Default is an empty string.

''
image_list list of str

List of image file names.

None
raw_images bool

If True, use the exifread library. Otherwise, use the exif library. Default is False.

False

Returns:

Name Type Description
time ndarray of int64

Array containing the total time in seconds for each image.

Examples:

>>> pathway = Path(__name__).resolve().parents[0] / "data" / "single_experiment"
>>> image_list = ['image1.tif', 'image2.tif']
>>> time = extract_time(pathway, image_list)
>>> print(time)
array([0, 0])
Source code in src/cellects/utils/load_display_save.py
def extract_time(pathway="", image_list: list=None, raw_images:bool=False):
    """
    Extract timestamps from a list of images.

    This function extracts the DateTimeOriginal or datetime values from
    the EXIF data of a list of image files, and computes the total time in seconds.

    Parameters
    ----------
    pathway : str, optional
        Path to the directory containing the images. Default is an empty string.
    image_list : list of str
        List of image file names.
    raw_images : bool, optional
        If True, use the exifread library. Otherwise, use the exif library.
        Default is False.

    Returns
    -------
    time : ndarray of int64
        Array containing the total time in seconds for each image.

    Examples
    --------
    >>> pathway = Path(__name__).resolve().parents[0] / "data" / "single_experiment"
    >>> image_list = ['image1.tif', 'image2.tif']
    >>> time = extract_time(pathway, image_list)
    >>> print(time)
    array([0, 0])

    """
    if isinstance(pathway, str):
        pathway = Path(pathway)
    os.chdir(pathway)
    if image_list is None:
        image_list = list_image_dir(pathway)
    nb = len(image_list)
    timings = np.zeros((nb, 6), dtype=np.int64)
    if raw_images:
        for i in np.arange(nb):
            with open(image_list[i], 'rb') as image_file:
                my_image = exifread.process_file(image_file, details=False, stop_tag='DateTimeOriginal')
                datetime = my_image["EXIF DateTimeOriginal"]
            datetime = datetime.values[:10] + ':' + datetime.values[11:]
            timings[i, :] = datetime.split(':')
    else:
        for i in np.arange(nb):
            with open(image_list[i], 'rb') as image_file:
                my_image = Image(image_file)
                if my_image.has_exif:
                    datetime = my_image.datetime
                    datetime = datetime[:10] + ':' + datetime[11:]
                    timings[i, :] = datetime.split(':')

    if np.all(timings[:, 0] == timings[0, 0]):
        if np.all(timings[:, 1] == timings[0, 1]):
            if np.all(timings[:, 2] == timings[0, 2]):
                time = timings[:, 3] * 3600 + timings[:, 4] * 60 + timings[:, 5]
            else:
                time = timings[:, 2] * 86400 + timings[:, 3] * 3600 + timings[:, 4] * 60 + timings[:, 5]
        else:
            days_per_month = (31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31)
            for j in np.arange(nb):
                month_number = timings[j, 1]
                # days elapsed before this month starts (month numbers are 1-based)
                timings[j, 1] = np.sum(days_per_month[:month_number - 1])
            time = (timings[:, 1] + timings[:, 2]) * 86400 + timings[:, 3] * 3600 + timings[:, 4] * 60 + timings[:, 5]
        #time = int(time)
    else:
        time = np.repeat(0, nb)#arange(1, nb * 60, 60)#"Do not experiment the 31th of december!!!"
    if time.sum() == 0:
        time = np.repeat(0, nb)#arange(1, nb * 60, 60)
    return time
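
The timestamp-to-seconds conversion performed above (when all images share the same year and month but not the same day) can be sketched in isolation; `timings_to_seconds` is a hypothetical helper mirroring the arithmetic in `extract_time`:

```python
import numpy as np

def timings_to_seconds(timings):
    """Convert (n, 6) [year, month, day, hour, min, sec] rows to seconds,
    assuming all rows share the same year and month (as in extract_time)."""
    timings = np.asarray(timings, dtype=np.int64)
    return timings[:, 2] * 86400 + timings[:, 3] * 3600 + timings[:, 4] * 60 + timings[:, 5]

secs = timings_to_seconds([[2023, 5, 1, 0, 0, 30],
                           [2023, 5, 1, 0, 1, 30],
                           [2023, 5, 2, 0, 0, 0]])
```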

find_common_coord(array1, array2)

Find common coordinates between two arrays.

This function compares the given 2D array1 and array2 to determine if there are any common coordinates.

Parameters:

Name Type Description Default
array1 ndarray of int

A 2D numpy ndarray.

required
array2 ndarray of int

Another 2D numpy ndarray.

required

Returns:

Name Type Description
out ndarray of bool

A boolean numpy ndarray where True indicates common coordinates.

Examples:

>>> array1 = np.array([[1, 2], [3, 4]])
>>> array2 = np.array([[5, 6], [1, 2]])
>>> result = find_common_coord(array1, array2)
>>> print(result)
array([ True, False])
Source code in src/cellects/utils/formulas.py
def find_common_coord(array1: NDArray[int], array2: NDArray[int]) -> NDArray[bool]:
    """Find common coordinates between two arrays.

    This function compares the given 2D `array1` and `array2`
    to determine if there are any common coordinates.

    Parameters
    ----------
    array1 : ndarray of int
        A 2D numpy ndarray.
    array2 : ndarray of int
        Another 2D numpy ndarray.

    Returns
    -------
    out : ndarray of bool
        A boolean numpy ndarray where True indicates common
        coordinates.

    Examples
    --------
    >>> array1 = np.array([[1, 2], [3, 4]])
    >>> array2 = np.array([[5, 6], [1, 2]])
    >>> result = find_common_coord(array1, array2)
    >>> print(result)
    array([ True, False])"""
    return (array1[:, None, :] == array2[None, :, :]).all(-1).any(-1)
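
The one-liner above relies on NumPy broadcasting: comparing an (n1, 1, d) view against a (1, n2, d) view yields an (n1, n2, d) boolean array, which `all(-1)` collapses to full-row matches and `any(-1)` to per-row membership. A self-contained sketch with a set-based cross-check:

```python
import numpy as np

array1 = np.array([[1, 2], [3, 4]])
array2 = np.array([[5, 6], [1, 2]])

# (n1, 1, d) vs (1, n2, d) broadcasts to (n1, n2, d); all(-1) matches full
# rows, any(-1) asks "does row i of array1 appear anywhere in array2?"
mask = (array1[:, None, :] == array2[None, :, :]).all(-1).any(-1)

# set-based cross-check of the same question
rows2 = {tuple(r) for r in array2}
mask_check = np.array([tuple(r) in rows2 for r in array1])
```

Note that the broadcasted array uses O(n1 * n2 * d) memory, so the set-based form may be preferable for very large inputs.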

find_duplicates_coord(array1)

Find duplicate rows in a 2D array and return a boolean mask marking them.

Given a 2D NumPy array, this function identifies rows that are duplicated (i.e., appear more than once) and returns a boolean array indicating their positions.

Parameters:

Name Type Description Default
array1 ndarray of int

Input 2D array of shape (n_rows, n_columns) from which to find duplicate rows.

required

Returns:

Name Type Description
duplicates ndarray of bool

Boolean array of shape (n_rows,), where True indicates that the corresponding row in array1 is a duplicate.

Examples:

>>> import numpy as np
>>> array1 = np.array([[1, 2], [3, 4], [1, 2], [5, 6]])
>>> find_duplicates_coord(array1)
array([ True, False,  True, False])
Source code in src/cellects/utils/formulas.py
def find_duplicates_coord(array1: NDArray[int]) -> NDArray[bool]:
    """
    Find duplicate rows in a 2D array and return a boolean mask marking them.

    Given a 2D NumPy array, this function identifies rows that are duplicated (i.e., appear more than once) and returns a boolean array indicating their positions.

    Parameters
    ----------
    array1 : ndarray of int
        Input 2D array of shape (n_rows, n_columns) from which to find duplicate rows.

    Returns
    -------
    duplicates : ndarray of bool
        Boolean array of shape (n_rows,), where `True` indicates that the corresponding row in `array1` is a duplicate.

    Examples
    --------
    >>> import numpy as np
    >>> array1 = np.array([[1, 2], [3, 4], [1, 2], [5, 6]])
    >>> find_duplicates_coord(array1)
    array([ True, False,  True, False])"""
    unique_rows, inverse_indices = np.unique(array1, axis=0, return_inverse=True)
    counts = np.bincount(inverse_indices)
    # A row is duplicate if its count > 1
    return counts[inverse_indices] > 1
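
The `np.unique`/`bincount` idiom can be checked against a plain `Counter` over row tuples; a minimal sketch (the reshape guards against NumPy versions where `return_inverse` yields a non-1-D array for `axis=0`):

```python
import numpy as np
from collections import Counter

array1 = np.array([[1, 2], [3, 4], [1, 2], [5, 6]])

# np.unique with return_inverse maps each row to the index of its unique
# representative; bincount then counts occurrences of each unique row
_, inverse = np.unique(array1, axis=0, return_inverse=True)
inverse = np.asarray(inverse).reshape(-1)
duplicates = np.bincount(inverse)[inverse] > 1

# cross-check with a plain Counter over row tuples
counts = Counter(map(tuple, array1))
dup_check = np.array([counts[tuple(r)] > 1 for r in array1])
```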

find_threshold_given_mask(greyscale, mask, min_threshold=0)

Find the optimal threshold value for a greyscale image given a mask.

This function performs a binary search for the lowest threshold at which strictly more masked (region A) pixels than unmasked (region B) pixels exceed it. The search is bounded below by a minimum threshold value.

Parameters:

Name Type Description Default
greyscale ndarray of uint8

The greyscale image array.

required
mask ndarray of uint8

The binary mask array where positive values define region A and zero values define region B.

required
min_threshold uint8

The minimum threshold value for the search. Defaults to 0.

0

Returns:

Name Type Description
out uint8

The optimal threshold value found.

Examples:

>>> greyscale = np.array([[255, 128, 54], [0, 64, 20]], dtype=np.uint8)
>>> mask = np.array([[1, 1, 0], [0, 0, 0]], dtype=np.uint8)
>>> find_threshold_given_mask(greyscale, mask)
54
Source code in src/cellects/image_analysis/image_segmentation.py
def find_threshold_given_mask(greyscale: NDArray[np.uint8], mask: NDArray[np.uint8], min_threshold: np.uint8=0) -> np.uint8:
    """
    Find the optimal threshold value for a greyscale image given a mask.

    This function performs a binary search for the lowest threshold at
    which strictly more masked (region A) pixels than unmasked (region B)
    pixels exceed it. The search is bounded below by `min_threshold`.

    Parameters
    ----------
    greyscale : ndarray of uint8
        The greyscale image array.
    mask : ndarray of uint8
        The binary mask array where positive values define region A and zero values define region B.
    min_threshold : uint8, optional
        The minimum threshold value for the search. Defaults to 0.

    Returns
    -------
    out : uint8
        The optimal threshold value found.

    Examples
    --------
    >>> greyscale = np.array([[255, 128, 54], [0, 64, 20]], dtype=np.uint8)
    >>> mask = np.array([[1, 1, 0], [0, 0, 0]], dtype=np.uint8)
    >>> find_threshold_given_mask(greyscale, mask)
    54
    """
    region_a = greyscale[mask > 0]
    if len(region_a) == 0:
        return np.uint8(255)
    region_b = greyscale[mask == 0]
    if len(region_b) == 0:
        return min_threshold
    else:
        low = min_threshold
        high = 255
        best_thresh = low

        while low <= high:
            mid = (low + high) // 2
            count_a, count_b = _get_counts_jit(mid, region_a, region_b)

            if count_a > count_b:
                # Try to find a lower threshold that still satisfies the condition
                best_thresh = mid
                high = mid - 1
            else:
                if count_a == 0 and count_b == 0:
                    best_thresh = greyscale.mean()
                    break
                # Need higher threshold
                low = mid + 1
    return best_thresh
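
`_get_counts_jit` is internal to Cellects; assuming it counts pixels strictly above the candidate threshold, the search loop can be sketched standalone (`lowest_threshold_separating` is a hypothetical simplified stand-in, without the empty-region guards):

```python
import numpy as np

def lowest_threshold_separating(region_a, region_b, min_threshold=0):
    """Binary-search the lowest threshold t in [min_threshold, 255] at which
    strictly more region-A pixels than region-B pixels exceed t."""
    low, high = min_threshold, 255
    best = low
    while low <= high:
        mid = (low + high) // 2
        count_a = int(np.count_nonzero(region_a > mid))
        count_b = int(np.count_nonzero(region_b > mid))
        if count_a > count_b:
            best = mid        # condition holds: remember it, try lower
            high = mid - 1
        else:
            low = mid + 1     # condition fails: need a higher threshold
    return best
```

On the docstring example's regions (region A = [255, 128], region B = [54, 0, 64, 20]) this search settles on 54, matching the documented result.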

generate_color_space_combination(bgr_image, c_spaces, first_dict, second_dict=Dict(), background=None, background2=None, convert_to_uint8=False, all_c_spaces={})

Generate color space combinations for an input image.

This function generates a grayscale image by combining multiple color spaces from an input BGR image and provided dictionaries. Optionally, it can also generate a second grayscale image using another dictionary.

Parameters:

Name Type Description Default
bgr_image ndarray of uint8

The input image in BGR color space.

required
c_spaces list

List of color spaces to consider for combination.

required
first_dict Dict

Dictionary containing color space and transformation details for the first grayscale image.

required
second_dict Dict

Dictionary containing color space and transformation details for the second grayscale image.

Dict()
background ndarray

Background image to be used. Default is None.

None
background2 ndarray

Second background image to be used for the second grayscale image. Default is None.

None
convert_to_uint8 bool

Flag indicating whether to convert the output images to uint8. Default is False.

False
all_c_spaces dict

Pre-computed color spaces for bgr_image; computed internally when empty. Default is an empty dict.

{}

Returns:

Name Type Description
out tuple

A tuple of four items: the first grayscale image, the second grayscale image (or None), the dictionary of computed color spaces, and the first principal-component vector (or None when PCA is not used).

Examples:

>>> bgr_image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
>>> c_spaces = ['bgr', 'hsv']
>>> first_dict = Dict()
>>> first_dict['bgr'] = [0, 1, 1]
>>> second_dict = Dict()
>>> second_dict['hsv'] = [0, 0, 1]
>>> greyscale_image1, greyscale_image2, all_c_spaces, pc1 = generate_color_space_combination(bgr_image, c_spaces, first_dict, second_dict)
>>> print(greyscale_image1.shape)
(100, 100)
Source code in src/cellects/image_analysis/image_segmentation.py
def generate_color_space_combination(bgr_image: NDArray[np.uint8], c_spaces: list, first_dict: Dict, second_dict: Dict=Dict(), background: NDArray=None, background2: NDArray=None, convert_to_uint8: bool=False, all_c_spaces: dict={}) -> NDArray[np.uint8]:
    """
    Generate color space combinations for an input image.

    This function generates a grayscale image by combining multiple color spaces
    from an input BGR image and provided dictionaries. Optionally, it can also generate
    a second grayscale image using another dictionary.

    Parameters
    ----------
    bgr_image : ndarray of uint8
        The input image in BGR color space.
    c_spaces : list
        List of color spaces to consider for combination.
    first_dict : Dict
        Dictionary containing color space and transformation details for the first grayscale image.
    second_dict : Dict, optional
        Dictionary containing color space and transformation details for the second grayscale image.
    background : ndarray, optional
        Background image to be used. Default is None.
    background2 : ndarray, optional
        Second background image to be used for the second grayscale image. Default is None.
    convert_to_uint8 : bool, optional
        Flag indicating whether to convert the output images to uint8. Default is False.
    all_c_spaces : dict, optional
        Pre-computed color spaces for `bgr_image`; computed internally when empty. Default is an empty dict.

    Returns
    -------
    out : tuple
        A tuple of four items: the first grayscale image, the second grayscale image (or None), the dictionary of computed color spaces, and the first principal-component vector (or None when PCA is not used).

    Examples
    --------
    >>> bgr_image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
    >>> c_spaces = ['bgr', 'hsv']
    >>> first_dict = Dict()
    >>> first_dict['bgr'] = [0, 1, 1]
    >>> second_dict = Dict()
    >>> second_dict['hsv'] = [0, 0, 1]
    >>> greyscale_image1, greyscale_image2, all_c_spaces, pc1 = generate_color_space_combination(bgr_image, c_spaces, first_dict, second_dict)
    >>> print(greyscale_image1.shape)
    (100, 100)
    """
    greyscale_image2 = None
    first_pc_vector = None
    if "PCA" in c_spaces:
        greyscale_image, var_ratio, first_pc_vector = extract_first_pc(bgr_image)
        first_pc_vector = first_pc_vector.tolist()
    else:
        if len(all_c_spaces) == 0:
            all_c_spaces = get_color_spaces(bgr_image, c_spaces)
        try:
            greyscale_image = combine_color_spaces(first_dict, all_c_spaces, background)
        except Exception:
            first_dict = translate_dict(first_dict)
            greyscale_image = combine_color_spaces(first_dict, all_c_spaces, background)
        if len(second_dict) > 0:
            greyscale_image2 = combine_color_spaces(second_dict, all_c_spaces, background2)

    if convert_to_uint8:
        greyscale_image = bracket_to_uint8_image_contrast(greyscale_image)
        if greyscale_image2 is not None and len(second_dict) > 0:
            greyscale_image2 = bracket_to_uint8_image_contrast(greyscale_image2)
    return greyscale_image, greyscale_image2, all_c_spaces, first_pc_vector

get_all_line_coordinates(start_point, end_points)

Get all line coordinates between start point and end points.

This function computes the coordinates of lines connecting a start point to multiple end points, returning one integer coordinate array per line.

Parameters:

Name Type Description Default
start_point NDArray[int]

Starting coordinate point for the lines.

required
end_points NDArray[int]

Array of end coordinate points, one row per line end.

required

Returns:

Name Type Description
out List[NDArray[int]]

A list of numpy arrays containing the coordinates of each line as integer values.

Examples:

>>> start_point = np.array([0, 0])
>>> end_points = np.array([[1, 2], [3, 4]])
>>> get_all_line_coordinates(start_point, end_points)
[array([[0, 0],
   [0, 1],
   [1, 2]], dtype=uint64), array([[0, 0],
   [1, 1],
   [1, 2],
   [2, 3],
   [3, 4]], dtype=uint64)]
Source code in src/cellects/image_analysis/morphological_operations.py
def get_all_line_coordinates(start_point: NDArray[int], end_points: NDArray[int]) -> list:
    """
    Get all line coordinates between start point and end points.

    This function computes the coordinates of lines connecting a
    start point to multiple end points, returning one integer
    coordinate array per line.

    Parameters
    ----------
    start_point : NDArray[int]
        Starting coordinate point for the lines.
    end_points : NDArray[int]
        Array of end coordinate points, one row per line end.

    Returns
    -------
    out : List[NDArray[int]]
        A list of numpy arrays containing the coordinates of each line
        as integer values.

    Examples
    --------
    >>> start_point = np.array([0, 0])
    >>> end_points = np.array([[1, 2], [3, 4]])
    >>> get_all_line_coordinates(start_point, end_points)
    [array([[0, 0],
       [0, 1],
       [1, 2]], dtype=uint64), array([[0, 0],
       [1, 1],
       [1, 2],
       [2, 3],
       [3, 4]], dtype=uint64)]
    """
    lines = []
    for end_point in end_points:
        line_coords = get_line_points(start_point, end_point)
        lines.append(np.array(line_coords, dtype=np.uint64))
    return lines
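
`get_line_points` is internal to Cellects; a standalone Bresenham rasterizer gives the flavor of what each call produces (`line_points` is a hypothetical stand-in returning (row, col) tuples; its intermediate pixels may differ from Cellects' output, and it omits the `np.uint64` wrapping):

```python
def line_points(p0, p1):
    """Integer points on the segment p0 -> p1 via Bresenham's algorithm,
    in (row, col) order."""
    r0, c0 = int(p0[0]), int(p0[1])
    r1, c1 = int(p1[0]), int(p1[1])
    dc = abs(c1 - c0)
    sc = 1 if c0 < c1 else -1
    dr = -abs(r1 - r0)
    sr = 1 if r0 < r1 else -1
    err = dc + dr
    points = []
    while True:
        points.append((r0, c0))
        if r0 == r1 and c0 == c1:
            break
        e2 = 2 * err
        if e2 >= dr:        # step along the columns
            err += dr
            c0 += sc
        if e2 <= dc:        # step along the rows
            err += dc
            r0 += sr
    return points

pts = line_points((0, 0), (3, 4))
```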

get_branches_and_tips_coord(pad_vertices, pad_tips)

Extracts the coordinates of branches and tips from vertices and tips binary images.

This function calculates branch coordinates by subtracting tips from vertices. Then it finds and outputs the non-zero indices of branches and tips separately.

Parameters:

Name Type Description Default
pad_vertices ndarray

Padded binary image marking all vertex pixels (branch points and tips).

required
pad_tips ndarray

Padded binary image marking tip pixels only.

required

required

Returns:

Name Type Description
branch_v_coord ndarray

Coordinates of branches derived from subtracting tips from vertices.

tips_coord ndarray

Coordinates of the tips.

Examples:

>>> pad_vertices = np.zeros((5, 5), dtype=np.uint8)
>>> pad_vertices[1, 1] = pad_vertices[3, 3] = 1
>>> pad_tips = np.zeros((5, 5), dtype=np.uint8)
>>> pad_tips[3, 3] = 1
>>> branch_v, tip_c = get_branches_and_tips_coord(pad_vertices, pad_tips)
>>> branch_v
array([[1, 1]])
>>> tip_c
array([[3, 3]])
Source code in src/cellects/image_analysis/network_functions.py
def get_branches_and_tips_coord(pad_vertices: NDArray[np.uint8], pad_tips: NDArray[np.uint8]) -> Tuple[NDArray, NDArray]:
    """
    Extracts the coordinates of branches and tips from vertices and tips binary images.

    This function calculates branch coordinates by subtracting
    tips from vertices, then returns the non-zero indices of branches and tips separately.

    Parameters
    ----------
    pad_vertices : ndarray
        Padded binary image marking all vertex pixels (branch points and tips).
    pad_tips : ndarray
        Padded binary image marking tip pixels only.

    Returns
    -------
    branch_v_coord : ndarray
        Coordinates of branches derived from subtracting tips from vertices.
    tips_coord : ndarray
        Coordinates of the tips.

    Examples
    --------
    >>> pad_vertices = np.zeros((5, 5), dtype=np.uint8)
    >>> pad_vertices[1, 1] = pad_vertices[3, 3] = 1
    >>> pad_tips = np.zeros((5, 5), dtype=np.uint8)
    >>> pad_tips[3, 3] = 1
    >>> branch_v, tip_c = get_branches_and_tips_coord(pad_vertices, pad_tips)
    >>> branch_v
    array([[1, 1]])
    >>> tip_c
    array([[3, 3]])
    """
    pad_branches = pad_vertices - pad_tips
    branch_v_coord = np.transpose(np.array(np.nonzero(pad_branches)))
    tips_coord = np.transpose(np.array(np.nonzero(pad_tips)))
    return branch_v_coord, tips_coord

get_contour_width_from_im_shape(im_shape)

Calculate the contour width based on image shape.

Parameters:

Name Type Description Default
im_shape tuple of int, two items

The dimensions of the image.

required

Returns:

Type Description
int

The calculated contour width.

Source code in src/cellects/utils/formulas.py
def get_contour_width_from_im_shape(im_shape: Tuple) -> int:
    """
    Calculate the contour width based on image shape.

    Parameters
    ----------
    im_shape : tuple of int, two items
        The dimensions of the image.

    Returns
    -------
    int
        The calculated contour width.
    """
    return np.max((np.round(np.log10(np.max(im_shape)) - 2).astype(int), 2))
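
The formula widens the contour by one pixel per order of magnitude of the largest dimension above 100, with a floor of 2 pixels. A worked check (`contour_width` is a plain-Python restatement of the expression above):

```python
import numpy as np

def contour_width(im_shape):
    # round(log10(largest dimension)) - 2, floored at 2
    return max(int(np.round(np.log10(max(im_shape)) - 2)), 2)

widths = [contour_width(s) for s in [(100, 100), (1000, 2000), (100000, 50000)]]
```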

get_contours(binary_image)

Find and return the contours of a binary image.

This function erodes the input binary image using a 3x3 cross-shaped structuring element and then subtracts the eroded image from the original to obtain the contours.

Parameters:

Name Type Description Default
binary_image ndarray of uint8

Input binary image from which to extract contours.

required

Returns:

Name Type Description
out ndarray of uint8

Image containing only the contours extracted from binary_image.

Examples:

>>> binary_image = np.zeros((10, 10), dtype=np.uint8)
>>> binary_image[2:8, 2:8] = 1
>>> result = get_contours(binary_image)
>>> print(result)
[[0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0]
 [0 0 1 1 1 1 1 1 0 0]
 [0 0 1 0 0 0 0 1 0 0]
 [0 0 1 0 0 0 0 1 0 0]
 [0 0 1 0 0 0 0 1 0 0]
 [0 0 1 0 0 0 0 1 0 0]
 [0 0 1 1 1 1 1 1 0 0]
 [0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0]]
Source code in src/cellects/image_analysis/morphological_operations.py
def get_contours(binary_image: NDArray[np.uint8]) -> NDArray[np.uint8]:
    """
    Find and return the contours of a binary image.

    This function erodes the input binary image using a 3x3 cross-shaped
    structuring element and then subtracts the eroded image from the original to obtain the contours.

    Parameters
    ----------
    binary_image : ndarray of uint8
        Input binary image from which to extract contours.

    Returns
    -------
    out : ndarray of uint8
        Image containing only the contours extracted from `binary_image`.

    Examples
    --------
    >>> binary_image = np.zeros((10, 10), dtype=np.uint8)
    >>> binary_image[2:8, 2:8] = 1
    >>> result = get_contours(binary_image)
    >>> print(result)
    [[0 0 0 0 0 0 0 0 0 0]
     [0 0 0 0 0 0 0 0 0 0]
     [0 0 1 1 1 1 1 1 0 0]
     [0 0 1 0 0 0 0 1 0 0]
     [0 0 1 0 0 0 0 1 0 0]
     [0 0 1 0 0 0 0 1 0 0]
     [0 0 1 0 0 0 0 1 0 0]
     [0 0 1 1 1 1 1 1 0 0]
     [0 0 0 0 0 0 0 0 0 0]
     [0 0 0 0 0 0 0 0 0 0]]
    """
    if binary_image.dtype != np.uint8:
        binary_image = binary_image.astype(np.uint8)
    if np.all(binary_image):
        contours = 1 - image_borders(binary_image.shape)
    elif np.any(binary_image):
        eroded_binary = cv2.erode(binary_image, cross_33, borderType=cv2.BORDER_CONSTANT, borderValue=0)
        contours = binary_image - eroded_binary
    else:
        contours = binary_image
    return contours
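
The erode-and-subtract step can be reproduced without OpenCV using array shifts for the 3×3 cross; `contours_numpy` is a hypothetical sketch, not the Cellects implementation (it omits the all-ones special case handled above), but it reproduces the ring from the example:

```python
import numpy as np

def contours_numpy(binary_image):
    """Inner contour: pixels that are set but lose at least one of their
    4 direct neighbours, i.e. erosion by a 3x3 cross, then subtraction."""
    img = binary_image.astype(np.uint8)
    padded = np.pad(img, 1, constant_values=0)  # matches borderValue=0
    # a pixel survives erosion if it and its 4 direct neighbours are all set
    eroded = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
              & padded[1:-1, :-2] & padded[1:-1, 2:])
    return img - eroded

binary_image = np.zeros((10, 10), dtype=np.uint8)
binary_image[2:8, 2:8] = 1
contours = contours_numpy(binary_image)
```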

get_h5_keys(file_name)

Retrieve all keys from a given HDF5 file.

Parameters:

Name Type Description Default
file_name str

The path to the HDF5 file from which keys are to be retrieved.

required

Returns:

Type Description
list of str

A list containing all the keys present in the specified HDF5 file.

Raises:

Type Description
FileNotFoundError

If the specified HDF5 file does not exist.

Source code in src/cellects/utils/load_display_save.py
def get_h5_keys(file_name):
    """
    Retrieve all keys from a given HDF5 file.

    Parameters
    ----------
    file_name : str
        The path to the HDF5 file from which keys are to be retrieved.

    Returns
    -------
    list of str
        A list containing all the keys present in the specified HDF5 file.

    Raises
    ------
    FileNotFoundError
        If the specified HDF5 file does not exist.
    """
    try:
        with h5py.File(file_name, 'r') as h5f:
            all_keys = list(h5f.keys())
            return all_keys
    except FileNotFoundError:
        raise FileNotFoundError(f"The file '{file_name}' does not exist.")
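A quick round trip showing what `get_h5_keys` returns; the file path and dataset names below are purely illustrative:

```python
import os
import tempfile

import h5py
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "demo.h5")
with h5py.File(path, "w") as h5f:
    h5f.create_dataset("area", data=np.arange(3))
    h5f.create_dataset("perimeter", data=np.arange(3))

# Same listing that get_h5_keys performs internally
with h5py.File(path, "r") as h5f:
    keys = list(h5f.keys())
```

Here `keys` is `['area', 'perimeter']`: h5py iterates dataset names in alphabetical order by default.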

get_inertia_axes(mo)

Calculate the inertia axes of a moment object.

This function computes the barycenters, central moments, and the lengths of the major and minor axes, as well as their orientation.

Parameters:

Name Type Description Default
mo dict

Dictionary containing moments, which must include the keys: 'm00', 'm10', 'm01', 'm20', 'm02', and 'm11'.

required

Returns:

Type Description
tuple

A tuple containing:

  • cx : float, the x-coordinate of the barycenter.
  • cy : float, the y-coordinate of the barycenter.
  • major_axis_len : float, the length of the major axis.
  • minor_axis_len : float, the length of the minor axis.
  • axes_orientation : float, the orientation of the axes in radians.

Notes

This function uses Numba's @njit decorator for performance. The moments in the input dictionary should be computed from the same image region.

Examples:

>>> mo = {'m00': 1.0, 'm10': 2.0, 'm01': 3.0, 'm20': 8.0, 'm02': 10.0, 'm11': 6.0}
>>> get_inertia_axes(mo)
(2.0, 3.0, 6.928203230275509, 3.4641016151377544, 0.0)
Source code in src/cellects/utils/formulas.py
@njit()
def get_inertia_axes(mo: dict) -> Tuple[float, float, float, float, float]:
    """
    Calculate the inertia axes of a moment object.

    This function computes the barycenters, central moments,
    and the lengths of the major and minor axes, as well as
    their orientation.

    Parameters
    ----------
    mo : dict
        Dictionary containing moments, which must include the keys:
        'm00', 'm10', 'm01', 'm20', 'm02', and 'm11'.

    Returns
    -------
    tuple
        A tuple containing:
            - cx : float
                The x-coordinate of the barycenter.
            - cy : float
                The y-coordinate of the barycenter.
            - major_axis_len : float
                The length of the major axis.
            - minor_axis_len : float
                The length of the minor axis.
            - axes_orientation : float
                The orientation of the axes in radians.

    Notes
    -----
    This function uses Numba's @njit decorator for performance.
    The moments in the input dictionary should be computed from
    the same image region.

    Examples
    --------
    >>> mo = {'m00': 1.0, 'm10': 2.0, 'm01': 3.0, 'm20': 8.0, 'm02': 10.0, 'm11': 6.0}
    >>> get_inertia_axes(mo)
    (2.0, 3.0, 6.928203230275509, 3.4641016151377544, 0.0)

    """
    #L. Rocha, L. Velho and P.C.P. Calvalho (2002)
    #http://sibgrapi.sid.inpe.br/col/sid.inpe.br/banon/2002/10.23.11.34/doc/35.pdf
    # http://raphael.candelier.fr/?blog=Image%20Moments

    # Calculate barycenters
    cx = mo["m10"] / mo["m00"]
    cy = mo["m01"] / mo["m00"]
    # Calculate central moments
    c20 = (mo["m20"] / mo["m00"]) - np.square(cx)
    c02 = (mo["m02"] / mo["m00"]) - np.square(cy)
    c11 = (mo["m11"] / mo["m00"]) - (cx * cy)
    # Calculate major and minor axis lengths
    major_axis_len = np.sqrt(6 * (c20 + c02 + np.sqrt(np.square(2 * c11) + np.square(c20 - c02))))
    minor_axis_len = np.sqrt(6 * (c20 + c02 - np.sqrt(np.square(2 * c11) + np.square(c20 - c02))))
    if (c20 - c02) != 0:
        axes_orientation = (0.5 * np.arctan((2 * c11) / (c20 - c02))) + ((c20 < c02) * (np.pi /2))
    else:
        axes_orientation = 0.
    return cx, cy, major_axis_len, minor_axis_len, axes_orientation
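As a sanity check of the formulas above, the raw moments of an axis-aligned rectangle can be computed directly with numpy; the major and minor axis lengths then come out close to the rectangle's side lengths. All names below are local to this sketch:

```python
import numpy as np

# A 10 x 16 rectangle (10 rows, 16 columns)
img = np.zeros((20, 20))
img[5:15, 2:18] = 1
ys, xs = np.nonzero(img)

# Raw moments, as cv2.moments would report them
mo = {"m00": float(len(xs)), "m10": float(xs.sum()), "m01": float(ys.sum()),
      "m20": float((xs ** 2).sum()), "m02": float((ys ** 2).sum()),
      "m11": float((xs * ys).sum())}

# Same algebra as get_inertia_axes
cx, cy = mo["m10"] / mo["m00"], mo["m01"] / mo["m00"]
c20 = mo["m20"] / mo["m00"] - cx ** 2
c02 = mo["m02"] / mo["m00"] - cy ** 2
c11 = mo["m11"] / mo["m00"] - cx * cy
root = np.sqrt((2 * c11) ** 2 + (c20 - c02) ** 2)
major = np.sqrt(6 * (c20 + c02 + root))
minor = np.sqrt(6 * (c20 + c02 - root))
```

For this shape c11 is 0, major is sqrt(255), about 15.97, and minor is sqrt(99), about 9.95, i.e. close to the 16 and 10 pixel side lengths.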

get_inner_vertices(pad_skeleton, potential_tips, cnv4, cnv8)

Get inner vertices from skeleton image.

This function identifies and returns the inner vertices of a skeletonized image. It processes potential tips to determine which pixels should be considered as vertices based on their neighbor count and connectivity.

Parameters:

Name Type Description Default
pad_skeleton ndarray of uint8

The padded skeleton image.

required
potential_tips ndarray of uint8

Binary mask of potential tip points in the skeleton, with the same shape as pad_skeleton.

required
cnv4 object

Object for handling 4-connections.

required
cnv8 object

Object for handling 8-connections.

required

Returns:

Name Type Description
out tuple of ndarray of uint8, ndarray of uint8

A tuple containing the final vertices matrix and the updated potential tips.

Examples:

>>> pad_vertices, potential_tips = get_inner_vertices(pad_skeleton, potential_tips)
>>> print(pad_vertices)
Source code in src/cellects/image_analysis/network_functions.py
def get_inner_vertices(pad_skeleton: NDArray[np.uint8], potential_tips: NDArray[np.uint8], cnv4: object, cnv8: object) -> Tuple[NDArray[np.uint8], NDArray[np.uint8]]: # potential_tips=pad_tips
    """
    Get inner vertices from skeleton image.

    This function identifies and returns the inner vertices of a skeletonized image.
    It processes potential tips to determine which pixels should be considered as
    vertices based on their neighbor count and connectivity.

    Parameters
    ----------
    pad_skeleton : ndarray of uint8
        The padded skeleton image.
    potential_tips : ndarray of uint8
        Binary mask of potential tip points in the skeleton, with the same
        shape as `pad_skeleton`.
    cnv4 : object
        Object for handling 4-connections.
    cnv8 : object
        Object for handling 8-connections.

    Returns
    -------
    out : tuple of ndarray of uint8, ndarray of uint8
        A tuple containing the final vertices matrix and the updated potential tips.

    Examples
    --------
    >>> pad_vertices, potential_tips = get_inner_vertices(pad_skeleton, potential_tips)
    >>> print(pad_vertices)
    """

    # Initiate the vertices final matrix as a copy of the potential_tips
    pad_vertices = potential_tips.copy()
    for neighbor_nb in [8, 7, 6, 5, 4]:
        # All pixels having neighbor_nb neighbor are potential vertices
        potential_vertices = np.zeros(potential_tips.shape, dtype=np.uint8)

        potential_vertices[cnv8.equal_neighbor_nb == neighbor_nb] = 1
        # remove the false intersections that are a neighbor of a previously detected intersection
        # Dilate vertices to make sure that no neighbors of the current potential vertices are already vertices.
        dilated_previous_intersections = cv2.dilate(pad_vertices, cross_33, iterations=1)
        potential_vertices *= (1 - dilated_previous_intersections)
        pad_vertices[np.nonzero(potential_vertices)] = 1

    # Having 3 neighbors is ambiguous
    with_3_neighbors = cnv8.equal_neighbor_nb == 3
    if np.any(with_3_neighbors):
        # We compare 8-connections with 4-connections
        # We loop over all 3 connected
        coord_3 = np.nonzero(with_3_neighbors)
        for y3, x3 in zip(coord_3[0], coord_3[1]): # y3, x3 = 3,7
            # If, in the neighborhood of the 3, there is at least a 2 (in 8) that is 0 (in 4), and not a termination: the 3 is a node
            has_2_8neigh = cnv8.equal_neighbor_nb[(y3 - 1):(y3 + 2), (x3 - 1):(x3 + 2)] > 0  # 1
            has_2_8neigh_without_focal = has_2_8neigh.copy()
            has_2_8neigh_without_focal[1, 1] = 0
            node_but_not_term = pad_vertices[(y3 - 1):(y3 + 2), (x3 - 1):(x3 + 2)] * (1 - potential_tips[(y3 - 1):(y3 + 2), (x3 - 1):(x3 + 2)])
            all_are_node_but_not_term = np.array_equal(has_2_8neigh_without_focal, node_but_not_term)
            if np.any(has_2_8neigh * (1 - all_are_node_but_not_term)):
                # At least 3 of the 8neigh are not connected:
                has_2_8neigh_without_focal = np.pad(has_2_8neigh_without_focal, [(1,), (1,)], mode='constant')
                cnv_8con = CompareNeighborsWithValue(has_2_8neigh_without_focal, 4)
                cnv_8con.is_equal(1, and_itself=True)
                disconnected_nb = has_2_8neigh_without_focal.sum() - (cnv_8con.equal_neighbor_nb > 0).sum()
                if disconnected_nb > 2:
                    pad_vertices[y3, x3] = 1
    # Now there may be too many vertices:
    # - Those that are 4-connected:
    nb, sh, st, ce = cv2.connectedComponentsWithStats(pad_vertices, connectivity=4)
    problematic_vertices = np.nonzero(st[:, 4] > 1)[0][1:]
    for prob_v in problematic_vertices:
        vertices_group = sh == prob_v
        # If there is a tip in the group, do
        if np.any(potential_tips[vertices_group]):
            # Change the most connected one from tip to vertex
            curr_neighbor_nb = cnv8.equal_neighbor_nb * vertices_group
            wrong_tip = np.nonzero(curr_neighbor_nb == curr_neighbor_nb.max())
            potential_tips[wrong_tip] = 0
        else:
            #  otherwise do:
            # Find the most 8-connected one, if its 4-connected neighbors have no more 8-connexions than 4-connexions + 1, they can be removed
            # Otherwise,
            # Find the most 4-connected one, and remove its 4 connected neighbors having only 1 or other 8-connexion

            c = zoom_on_nonzero(vertices_group)
            # 1. Find the most 8-connected one:
            sub_v_grp = vertices_group[c[0]:c[1], c[2]:c[3]]
            c8 = cnv8.equal_neighbor_nb[c[0]:c[1], c[2]:c[3]]
            vertices_group_8 = c8 * sub_v_grp
            max_8_con = vertices_group_8.max()
            most_8_con = np.nonzero(vertices_group_8 == max_8_con)
            # c4[(most_8_con[0][0] - 1):(most_8_con[0][0] + 2), (most_8_con[1][0] - 1):(most_8_con[1][0] + 2)]
            if len(most_8_con[0]) == 1:
                skel_copy = pad_skeleton[c[0]:c[1], c[2]:c[3]].copy()
                skel_copy[most_8_con] = 0
                sub_cnv8 = CompareNeighborsWithValue(skel_copy, 8)
                sub_cnv8.is_equal(1, and_itself=False)
                sub_cnv4 = CompareNeighborsWithValue(skel_copy, 4)
                sub_cnv4.is_equal(1, and_itself=False)
                v_to_remove = sub_v_grp * (sub_cnv8.equal_neighbor_nb <= sub_cnv4.equal_neighbor_nb + 1)
            else:
                c4 = cnv4.equal_neighbor_nb[c[0]:c[1], c[2]:c[3]]
                # 1. # Find the most 4-connected one:
                vertices_group_4 = c4 * sub_v_grp
                max_con = vertices_group_4.max()
                most_con = np.nonzero(vertices_group_4 == max_con)
                if len(most_con[0]) < sub_v_grp.sum():
                    # 2. Check its 4-connected neighbors and remove those having only 1 other 8-connexion
                    skel_copy = pad_skeleton[c[0]:c[1], c[2]:c[3]].copy()
                    skel_copy[most_con] = 0
                    skel_copy[most_con[0] - 1, most_con[1]] = 0
                    skel_copy[most_con[0] + 1, most_con[1]] = 0
                    skel_copy[most_con[0], most_con[1] - 1] = 0
                    skel_copy[most_con[0], most_con[1] + 1] = 0
                    sub_cnv8 = CompareNeighborsWithValue(skel_copy, 8)
                    sub_cnv8.is_equal(1, and_itself=False)
                    # There are:
                    v_to_remove = ((vertices_group_4 > 0) * sub_cnv8.equal_neighbor_nb) == 1
                else:
                    v_to_remove = np.zeros(sub_v_grp.shape, dtype=bool)
            pad_vertices[c[0]:c[1], c[2]:c[3]][v_to_remove] = 0

    # Other vertices to remove:
    # - Those that are forming a cross with 0 at the center while the skeleton contains 1
    cnv4_false = CompareNeighborsWithValue(pad_vertices, 4)
    cnv4_false.is_equal(1, and_itself=False)
    cross_vertices = cnv4_false.equal_neighbor_nb == 4
    wrong_cross_vertices = cross_vertices * pad_skeleton
    if wrong_cross_vertices.any():
        pad_vertices[np.nonzero(wrong_cross_vertices)] = 1
        cross_fix = cv2.dilate(wrong_cross_vertices, kernel=cross_33, iterations=1)
        # Remove the 4-connected vertices that have no more than 4 8-connected neighbors
        # i.e. the three on the side of the surrounded 0 and only one on edge on the other side
        cross_fix = ((cnv8.equal_neighbor_nb * cross_fix) == 4) * (1 - wrong_cross_vertices)
        pad_vertices *= (1 - cross_fix)
    return pad_vertices, potential_tips
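The core idea of the detection above, classifying skeleton pixels by how many skeleton neighbors they have, can be sketched with plain numpy. This uses 4-connectivity for simplicity; the real code also compares 8-connectivity via CompareNeighborsWithValue:

```python
import numpy as np

skel = np.zeros((7, 7), dtype=np.uint8)
skel[3, 1:6] = 1  # horizontal branch
skel[1:6, 3] = 1  # vertical branch -> a plus-shaped skeleton

# Count the 4-connected skeleton neighbors of every pixel
p = np.pad(skel, 1)
nb4 = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
nb4 = nb4 * skel  # only skeleton pixels matter

tips = (nb4 == 1)      # branch endpoints
vertices = (nb4 >= 3)  # junction candidates
```

Here the center of the plus is the only junction (4 neighbors) and the four branch ends are tips; ordinary path pixels have exactly 2 neighbors.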

get_kurtosis(mo, binary_image, cx, cy, sx, sy)

Calculate the kurtosis of a binary image.

The function calculates the fourth standardized moment (kurtosis) of the given binary image around the barycenter (cx, cy), normalized by the standard deviations (sx, sy).

Parameters:

Name Type Description Default
mo dict

Dictionary containing moments of binary image.

required
binary_image ndarray

A 2D numpy ndarray representing a binary image.

required
cx int or float

The x-coordinate of the barycenter.

required
cy int or float

The y-coordinate of the barycenter.

required
sx int or float

The standard deviation along the x-axis.

required
sy int or float

The standard deviation along the y-axis.

required

Returns:

Type Description
tuple of float

The kurtosis values along the x and y axes, calculated from the moments.

Examples:

>>> binary_image = np.zeros((10, 10), dtype=np.uint8)
>>> binary_image[2:8, 2:8] = 1
>>> mo = cv2.moments(binary_image, binaryImage=True)
>>> kx, ky = get_kurtosis(mo, binary_image, cx=4.5, cy=4.5, sx=1.71, sy=1.71)
Source code in src/cellects/utils/formulas.py
def get_kurtosis(mo: dict, binary_image: NDArray, cx: float, cy: float, sx: float, sy: float) -> Tuple[float, float]:
    """
    Calculate the kurtosis of a binary image.

    The function calculates the fourth standardized moment (kurtosis) of the
    given binary image around the barycenter (cx, cy), normalized by the
    standard deviations (sx, sy).

    Parameters
    ----------
    mo : dict
        Dictionary containing moments of binary image.
    binary_image : np.ndarray
        A 2D numpy ndarray representing a binary image.
    cx : int or float
        The x-coordinate of the barycenter.
    cy : int or float
        The y-coordinate of the barycenter.
    sx : int or float
        The standard deviation along the x-axis.
    sy : int or float
        The standard deviation along the y-axis.

    Returns
    -------
    Tuple[float, float]
        The kurtosis values along the x and y axes, calculated from the moments.

    Examples
    --------
    >>> binary_image = np.zeros((10, 10), dtype=np.uint8)
    >>> binary_image[2:8, 2:8] = 1
    >>> mo = cv2.moments(binary_image, binaryImage=True)
    >>> kx, ky = get_kurtosis(mo, binary_image, cx=4.5, cy=4.5, sx=1.71, sy=1.71)
    """
    x4, y4 = get_power_dists(binary_image, cx, cy, 4)
    X4, Y4 = np.meshgrid(x4, y4)
    m4x, m4y = get_var(mo, binary_image, X4, Y4)
    return get_skewness_kurtosis(m4x, m4y, sx, sy, 4)
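Along each axis, the value produced here is the standardized fourth moment m4/sigma^4. As a numpy-only sanity check, independent of the helpers above, the kurtosis of a discrete uniform distribution over n points should equal 3 - 6(n^2 + 1) / (5(n^2 - 1)):

```python
import numpy as np

n = 21
xs = np.arange(n, dtype=float)
w = np.full(n, 1.0 / n)  # uniform weights, like a flat row of pixels

cx = (xs * w).sum()                       # mean
var = (((xs - cx) ** 2) * w).sum()        # second central moment
m4 = (((xs - cx) ** 4) * w).sum()         # fourth central moment
kurt = m4 / var ** 2                      # standardized fourth moment

expected = 3 - 6 * (n ** 2 + 1) / (5 * (n ** 2 - 1))  # discrete-uniform kurtosis
```

Both evaluate to about 1.7945, below the Gaussian value of 3, as expected for a flat distribution.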

get_min_or_max_euclidean_pair(coords, min_or_max='max')

Find the pair of points in a given set with the minimum or maximum Euclidean distance.

Parameters:

Name Type Description Default
coords Union[ndarray, Tuple]

An Nx2 numpy array or a tuple of two arrays, each containing the x and y coordinates of points.

required
min_or_max str

Whether to find the 'min' or 'max' distance pair. Default is 'max'.

'max'

Returns:

Type Description
Tuple[ndarray, ndarray]

A tuple containing the coordinates of the two points that form the minimum or maximum distance pair.

Raises:

Type Description
ValueError

If min_or_max is not 'min' or 'max'.

Notes
  • The function first computes all pairwise distances in condensed form using pdist.
  • Then, it finds the index of the minimum or maximum distance.
  • Finally, it maps this index to the actual point indices using a binary search method.

Examples:

>>> coords = np.array([[0, 1], [2, 3], [4, 5]])
>>> point1, point2 = get_min_or_max_euclidean_pair(coords, min_or_max="max")
>>> print(point1)
[0 1]
>>> print(point2)
[4 5]
>>> coords = (np.array([0, 2, 4, 8, 1, 5]), np.array([0, 2, 4, 8, 0, 5]))
>>> point1, point2 = get_min_or_max_euclidean_pair(coords, min_or_max="min")
>>> print(point1)
[0 0]
>>> print(point2)
[1 0]
Source code in src/cellects/image_analysis/morphological_operations.py
def get_min_or_max_euclidean_pair(coords, min_or_max: str="max") -> Tuple[np.ndarray, np.ndarray]:
    """
    Find the pair of points in a given set with the minimum or maximum Euclidean distance.

    Parameters
    ----------
    coords : Union[np.ndarray, Tuple]
        An Nx2 numpy array or a tuple of two arrays, each containing the x and y coordinates of points.
    min_or_max : str, optional
        Whether to find the 'min' or 'max' distance pair. Default is 'max'.

    Returns
    -------
    Tuple[np.ndarray, np.ndarray]
        A tuple containing the coordinates of the two points that form the minimum or maximum distance pair.

    Raises
    ------
    ValueError
        If `min_or_max` is not 'min' or 'max'.

    Notes
    -----
    - The function first computes all pairwise distances in condensed form using `pdist`.
    - Then, it finds the index of the minimum or maximum distance.
    - Finally, it maps this index to the actual point indices using a binary search method.

    Examples
    --------
    >>> coords = np.array([[0, 1], [2, 3], [4, 5]])
    >>> point1, point2 = get_min_or_max_euclidean_pair(coords, min_or_max="max")
    >>> print(point1)
    [0 1]
    >>> print(point2)
    [4 5]
    >>> coords = (np.array([0, 2, 4, 8, 1, 5]), np.array([0, 2, 4, 8, 0, 5]))
    >>> point1, point2 = get_min_or_max_euclidean_pair(coords, min_or_max="min")
    >>> print(point1)
    [0 0]
    >>> print(point2)
    [1 0]

    """
    if isinstance(coords, tuple):
        coords = np.column_stack(coords)
    N = coords.shape[0]
    if N <= 1:
        return (coords[0], coords[0]) if N == 1 else None

    # Step 1: Compute all pairwise distances in condensed form
    distances = pdist(coords)

    # Step 2: Find the index of the maximum distance
    if min_or_max == "max":
        idx = np.argmax(distances)
    elif min_or_max == "min":
        idx = np.argmin(distances)
    else:
        raise ValueError("min_or_max must be 'min' or 'max'")

    # Step 3: Map this index to (i, j) using a binary search method

    def get_pair_index(k):
        low, high = 0, N
        while low < high:
            mid = (low + high) // 2
            total = mid * (2 * N - mid - 1) // 2
            if total <= k:
                low = mid + 1
            else:
                high = mid

        i = low - 1
        prev_sum = i * (2 * N - i - 1) // 2
        j_index_in_row = k - prev_sum
        return i, i + j_index_in_row + 1  # Ensure j > i

    i, j = get_pair_index(idx)
    return coords[i], coords[j]
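The inner `get_pair_index` inverts the condensed index used by `pdist`. That mapping can be cross-checked against `np.triu_indices`, which enumerates pairs in the same row-major order `pdist` uses (the standalone function name below is illustrative):

```python
import numpy as np

def pair_from_condensed(k, N):
    # Binary search for the row i, then offset to the column j (same logic
    # as get_pair_index inside get_min_or_max_euclidean_pair)
    low, high = 0, N
    while low < high:
        mid = (low + high) // 2
        total = mid * (2 * N - mid - 1) // 2
        if total <= k:
            low = mid + 1
        else:
            high = mid
    i = low - 1
    prev_sum = i * (2 * N - i - 1) // 2
    return i, i + (k - prev_sum) + 1

N = 6
ii, jj = np.triu_indices(N, k=1)  # pdist's condensed ordering
matches = [pair_from_condensed(k, N) == (ii[k], jj[k]) for k in range(len(ii))]
```

All 15 condensed indices for N = 6 map back to the correct (i, j) pair.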

get_mpl_colormap(cmap_name)

Returns a linear color range array for the given matplotlib colormap.

Parameters:

Name Type Description Default
cmap_name str

The name of the colormap to get.

required

Returns:

Type Description
ndarray

A 256x1x3 array of bytes representing the linear color range.

Examples:

>>> result = get_mpl_colormap('viridis')
>>> print(result.shape)
(256, 1, 3)
Source code in src/cellects/utils/load_display_save.py
def get_mpl_colormap(cmap_name: str):
    """
    Returns a linear color range array for the given matplotlib colormap.

    Parameters
    ----------
    cmap_name : str
        The name of the colormap to get.

    Returns
    -------
    numpy.ndarray
        A 256x1x3 array of bytes representing the linear color range.

    Examples
    --------
    >>> result = get_mpl_colormap('viridis')
    >>> print(result.shape)
    (256, 1, 3)

    """
    cmap = plt.get_cmap(cmap_name)

    # Initialize the matplotlib color map
    sm = plt.cm.ScalarMappable(cmap=cmap)

    # Obtain linear color range
    color_range = sm.to_rgba(np.linspace(0, 1, 256), bytes=True)[:, 2::-1]

    return color_range.reshape(256, 1, 3)
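The returned array is shaped like an OpenCV lookup table (256 BGR entries), so it can serve as a custom colormap for `cv2.applyColorMap`. A minimal reconstruction of what the function builds, using only matplotlib:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

sm = plt.cm.ScalarMappable(cmap=plt.get_cmap("viridis"))
# RGBA bytes -> keep RGB and reverse to BGR (OpenCV channel order)
lut = sm.to_rgba(np.linspace(0, 1, 256), bytes=True)[:, 2::-1].reshape(256, 1, 3)
```

The `bytes=True` flag makes `to_rgba` return uint8 values in 0..255, which is exactly what OpenCV expects.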

get_neighbor_comparisons(pad_skeleton)

Get neighbor comparisons for a padded skeleton.

This function creates two CompareNeighborsWithValue objects with different neighborhood sizes (4 and 8) and checks if the neighbors are equal to 1. It returns both comparison objects.

Parameters:

Name Type Description Default
pad_skeleton ndarray of uint8

The input padded skeleton array.

required

Returns:

Name Type Description
out tuple of CompareNeighborsWithValue, CompareNeighborsWithValue

Two comparison objects for 4 and 8 neighbors.

Examples:

>>> cnv4, cnv8 = get_neighbor_comparisons(pad_skeleton)
Source code in src/cellects/image_analysis/network_functions.py
def get_neighbor_comparisons(pad_skeleton: NDArray[np.uint8]) -> Tuple[object, object]:
    """
    Get neighbor comparisons for a padded skeleton.

    This function creates two `CompareNeighborsWithValue` objects with different
    neighborhood sizes (4 and 8) and checks if the neighbors are equal to 1. It
    returns both comparison objects.

    Parameters
    ----------
    pad_skeleton : ndarray of uint8
        The input padded skeleton array.

    Returns
    -------
    out : tuple of CompareNeighborsWithValue, CompareNeighborsWithValue
        Two comparison objects for 4 and 8 neighbors.

    Examples
    --------
    >>> cnv4, cnv8 = get_neighbor_comparisons(pad_skeleton)
    """
    cnv4 = CompareNeighborsWithValue(pad_skeleton, 4)
    cnv4.is_equal(1, and_itself=True)
    cnv8 = CompareNeighborsWithValue(pad_skeleton, 8)
    cnv8.is_equal(1, and_itself=True)
    return cnv4, cnv8

get_newly_explored_area(binary_vid)

Get newly explored area in a binary video.

Calculate, for each frame of a binary video, the number of pixels that are active (==1) in that frame but not in the first frame.

Parameters:

Name Type Description Default
binary_vid ndarray

The binary video as a 3D array of shape (time, height, width).

required

Returns:

Type Description
ndarray

An array containing the number of newly active pixels for each frame, relative to the first frame.

Notes

This function uses Numba's @njit decorator for performance.

Examples:

>>> binary_vid=np.zeros((4, 5, 5), dtype=np.uint8)
>>> binary_vid[:2, 3, 3] = 1
>>> binary_vid[1, 4, 3] = 1
>>> binary_vid[2, 3, 4] = 1
>>> binary_vid[3, 2, 3] = 1
>>> get_newly_explored_area(binary_vid)
array([0, 1, 1, 1])
>>> binary_vid=np.zeros((5, 5), dtype=np.uint8)[None, :, :]
>>> get_newly_explored_area(binary_vid)
array([0])
Source code in src/cellects/utils/formulas.py
@njit()
def get_newly_explored_area(binary_vid: NDArray[np.uint8]) -> NDArray:
    """
    Get newly explored area in a binary video.

    Calculate, for each frame of a binary video, the number of pixels that
    are active (==1) in that frame but not in the first frame.

    Parameters
    ----------
    binary_vid : np.ndarray
        The binary video as a 3D array of shape (time, height, width).

    Returns
    -------
    np.ndarray
        An array containing the number of newly active pixels for each frame,
        relative to the first frame.

    Notes
    -----
    This function uses Numba's @njit decorator for performance.

    Examples
    --------
    >>> binary_vid=np.zeros((4, 5, 5), dtype=np.uint8)
    >>> binary_vid[:2, 3, 3] = 1
    >>> binary_vid[1, 4, 3] = 1
    >>> binary_vid[2, 3, 4] = 1
    >>> binary_vid[3, 2, 3] = 1
    >>> get_newly_explored_area(binary_vid)
    array([0, 1, 1, 1])

    >>> binary_vid=np.zeros((5, 5), dtype=np.uint8)[None, :, :]
    >>> get_newly_explored_area(binary_vid)
    array([0])
    """
    return ((binary_vid - binary_vid[0, ...]) == 1).reshape(binary_vid.shape[0], - 1).sum(1)
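The one-liner above counts, frame by frame, pixels that are on now but were off in the first frame (uint8 wrap-around turns pixels that disappeared into 255, so only fresh pixels compare equal to 1). Rebuilding the docstring example without Numba:

```python
import numpy as np

binary_vid = np.zeros((4, 5, 5), dtype=np.uint8)
binary_vid[:2, 3, 3] = 1
binary_vid[1, 4, 3] = 1
binary_vid[2, 3, 4] = 1
binary_vid[3, 2, 3] = 1

# Pixels equal to 1 in frame t but 0 in frame 0, counted per frame
newly = ((binary_vid - binary_vid[0]) == 1).reshape(binary_vid.shape[0], -1).sum(1)
```

This reproduces the `array([0, 1, 1, 1])` result from the docstring example.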

get_power_dists(binary_image, cx, cy, n)

Calculate the power distributions based on the given center coordinates and exponent.

This function computes the nth powers of x and y distances from a given center point (cx, cy), for each column and row index of the image.

Parameters:

Name Type Description Default
binary_image ndarray

A 2D array (binary image) where the power distributions are calculated.

required
cx float

The x-coordinate of the center point.

required
cy float

The y-coordinate of the center point.

required
n int

The exponent for power distribution calculation.

required

Returns:

Type Description
tuple[ndarray, ndarray]

A tuple containing two arrays:

  • The first array contains the nth power of x distances from the center.
  • The second array contains the nth power of y distances from the center.

Notes

This function uses Numba's @njit decorator for performance optimization. Ensure that binary_image is a NumPy ndarray to avoid type issues.

Examples:

>>> binary_image = np.zeros((10, 10))
>>> xn, yn = get_power_dists(binary_image, 5.0, 5.0, 2)
>>> print(xn.shape, yn.shape)
(10,) (10,)
Source code in src/cellects/utils/formulas.py
@njit()
def get_power_dists(binary_image: np.ndarray, cx: float, cy: float, n: int):
    """
    Calculate the power distributions based on the given center coordinates and exponent.

    This function computes the `n`th powers of x and y distances from
    a given center point `(cx, cy)`, for each column and row index of the image.

    Parameters
    ----------
    binary_image : np.ndarray
        A 2D array (binary image) where the power distributions are calculated.
    cx : float
        The x-coordinate of the center point.
    cy : float
        The y-coordinate of the center point.
    n : int
        The exponent for power distribution calculation.

    Returns
    -------
    tuple[np.ndarray, np.ndarray]
        A tuple containing two arrays:
        - The first array contains the `n`th power of x distances from the center.
        - The second array contains the `n`th power of y distances from the center.

    Notes
    -----
    This function uses Numba's `@njit` decorator for performance optimization.
    Ensure that `binary_image` is a NumPy ndarray to avoid type issues.

    Examples
    --------
    >>> binary_image = np.zeros((10, 10))
    >>> xn, yn = get_power_dists(binary_image, 5.0, 5.0, 2)
    >>> print(xn.shape, yn.shape)
    (10,) (10,)
    """
    xn = (np.arange(binary_image.shape[1]) - cx) ** n
    yn = (np.arange(binary_image.shape[0]) - cy) ** n
    return xn, yn

get_skeleton_and_widths(pad_network, pad_origin=None, pad_origin_centroid=None)

Get skeleton and widths from a network.

This function computes the morphological skeleton of a network and calculates the distances to the closest zero pixel for each non-zero pixel using medial_axis. If pad_origin is provided, it adds a central contour. Finally, the function removes small loops and keeps only one connected component.

Parameters:

Name Type Description Default
pad_network ndarray of uint8

The binary pad network image.

required
pad_origin ndarray of uint8

An array indicating the origin for adding central contour.

None
pad_origin_centroid ndarray

The centroid of the pad origin. Defaults to None.

None

Returns:

Name Type Description
out tuple(ndarray of uint8, ndarray of float64, ndarray of uint8)

A tuple containing:

  • pad_skeleton: The skeletonized image.
  • pad_distances: The distances to the closest zero pixel.
  • pad_origin_contours: The contours of the central origin, or None if not used.

Examples:

>>> pad_network = np.array([[0, 1], [1, 0]])
>>> skeleton, distances, contours = get_skeleton_and_widths(pad_network)
>>> print(skeleton)
Source code in src/cellects/image_analysis/network_functions.py
def get_skeleton_and_widths(pad_network: NDArray[np.uint8], pad_origin: NDArray[np.uint8]=None, pad_origin_centroid: NDArray[np.int64]=None) -> Tuple[NDArray[np.uint8], NDArray[np.float64], NDArray[np.uint8]]:
    """
    Get skeleton and widths from a network.

    This function computes the morphological skeleton of a network and calculates
    the distances to the closest zero pixel for each non-zero pixel using medial_axis.
    If pad_origin is provided, it adds a central contour. Finally, the function
    removes small loops and keeps only one connected component.

    Parameters
    ----------
    pad_network : ndarray of uint8
        The binary pad network image.
    pad_origin : ndarray of uint8, optional
        An array indicating the origin for adding central contour.
    pad_origin_centroid : ndarray, optional
        The centroid of the pad origin. Defaults to None.

    Returns
    -------
    out : tuple(ndarray of uint8, ndarray of float64, ndarray of uint8)
        A tuple containing:
        - pad_skeleton: The skeletonized image.
        - pad_distances: The distances to the closest zero pixel.
        - pad_origin_contours: The contours of the central origin, or None if not
          used.

    Examples
    --------
    >>> pad_network = np.array([[0, 1], [1, 0]])
    >>> skeleton, distances, contours = get_skeleton_and_widths(pad_network)
    >>> print(skeleton)
    """
    pad_skeleton, pad_distances = morphology.medial_axis(pad_network, return_distance=True, rng=0)
    pad_skeleton = pad_skeleton.astype(np.uint8)
    if pad_origin is not None:
        pad_skeleton, pad_distances, pad_origin_contours = _add_central_contour(pad_skeleton, pad_distances, pad_origin, pad_network, pad_origin_centroid)
    else:
        pad_origin_contours = None
    pad_skeleton, pad_distances = remove_small_loops(pad_skeleton, pad_distances)
    pad_skeleton = keep_one_connected_component(pad_skeleton)
    pad_distances *= pad_skeleton
    return pad_skeleton, pad_distances, pad_origin_contours
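A compact scikit-image example of the skeleton-plus-distance step, without the origin handling and loop removal that the function adds on top:

```python
import numpy as np
from skimage import morphology

network = np.zeros((20, 20), dtype=np.uint8)
network[5:15, 3:17] = 1  # a thick horizontal bar

# Boolean skeleton plus the Euclidean distance to the nearest background pixel
skeleton, distances = morphology.medial_axis(network, return_distance=True)
widths = distances * skeleton  # local half-width along the skeleton only
```

Multiplying the distance map by the skeleton, as the last line of the function does, keeps the half-width estimate only on skeleton pixels.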

get_skewness(mo, binary_image, cx, cy, sx, sy)

Calculate skewness of the given moment.

This function computes the skewness based on the third moments and the central moments of a binary image.

Parameters:

Name Type Description Default
mo dict

Dictionary containing moments of binary image.

required
binary_image ndarray

Binary image as a 2D numpy array.

required
cx float

The x-coordinate of the barycenter.

required
cy float

The y-coordinate of the barycenter.

required
sx float

The standard deviation along the x-axis.

required
sy float

The standard deviation along the y-axis.

required

Returns:

Type Description
Tuple[float, float]

Tuple containing skewness values.

Examples:

>>> result = get_skewness(mo=example_mo, binary_image=binary_img,
... cx=0.5, cy=0.5, sx=1.0, sy=1.0)
>>> print(result)
(skewness_x, skewness_y)  # Example output
Source code in src/cellects/utils/formulas.py
def get_skewness(mo: dict, binary_image: NDArray, cx: float, cy: float, sx: float, sy: float) -> Tuple[float, float]:
    """Calculate skewness of the given moment.

    This function computes the skewness based on the third moments
    and the central moments of a binary image.

    Parameters
    ----------
    mo : dict
        Dictionary containing moments of binary image.
    binary_image : ndarray
        Binary image as a 2D numpy array.
    cx : float
        The x-coordinate of the barycenter.
    cy : float
        The y-coordinate of the barycenter.
    sx : float
        The standard deviation along the x-axis.
    sy : float
        The standard deviation along the y-axis.

    Returns
    -------
    Tuple[float, float]
        Tuple containing skewness values.

    Examples
    --------
    >>> result = get_skewness(mo=example_mo, binary_image=binary_img,
    ... cx=0.5, cy=0.5, sx=1.0, sy=1.0)
    >>> print(result)
    (skewness_x, skewness_y)  # Example output
    """
    x3, y3 = get_power_dists(binary_image, cx, cy, 3)
    X3, Y3 = np.meshgrid(x3, y3)
    m3x, m3y = get_var(mo, binary_image, X3, Y3)
    return get_skewness_kurtosis(m3x, m3y, sx, sy, 3)

get_skewness_kurtosis(mnx, mny, sx, sy, n)

Calculates skewness and kurtosis of a distribution.

This function computes the skewness and kurtosis from given statistical moments, standard deviations, and order of moments.

Parameters:

Name Type Description Default
mnx float

The nth moment about the mean for x (n = 3 for skewness, 4 for kurtosis).

required
mny float

The nth moment about the mean for y (n = 3 for skewness, 4 for kurtosis).

required
sx float

The standard deviation of x.

required
sy float

The standard deviation of y.

required
n int

Order of the moment (3 for skewness, 4 for kurtosis).

required

Returns:

Name Type Description
fx float

The standardized moment along the x-axis: skewness if n is 3, kurtosis if n is 4.

fy float

The standardized moment along the y-axis: skewness if n is 3, kurtosis if n is 4.

Notes

This function uses Numba's @njit decorator for performance. If sx or sy is zero, the corresponding result is set to 0 rather than dividing by zero. If n = 3, the function computes skewness; if n = 4, kurtosis.

Examples:

>>> skewness, kurtosis = get_skewness_kurtosis(1.5, 2.0, 0.5, 0.75, 3)
>>> print("Skewness:", skewness)
Skewness: 8.0
>>> print("Kurtosis:", kurtosis)
Kurtosis: nan
Source code in src/cellects/utils/formulas.py
@njit()
def get_skewness_kurtosis(mnx: float, mny: float, sx: float, sy: float, n: int) -> Tuple[float, float]:
    """
    Calculates skewness and kurtosis of a distribution.

    This function computes the skewness and kurtosis from given statistical
    moments, standard deviations, and order of moments.

    Parameters
    ----------
    mnx : float
        The n-th central moment for x.
    mny : float
        The n-th central moment for y.
    sx : float
        The standard deviation of x.
    sy : float
        The standard deviation of y.
    n : int
        Order of the moment (3 for skewness, 4 for kurtosis).

    Returns
    -------
    tuple[float, float]
        The standardized moments along x and y: skewness when `n = 3`,
        kurtosis when `n = 4`. A component is 0 when the corresponding
        standard deviation is 0.

    Notes
    -----
    This function uses Numba's `@njit` decorator for performance.
    If `n = 3`, the function calculates skewness. If `n = 4`, it calculates kurtosis.

    Examples
    --------
    >>> sk_x, sk_y = get_skewness_kurtosis(1.5, 2.0, 0.5, 0.75, 3)
    >>> print(sk_x, round(sk_y, 6))
    12.0 4.740741

    """
    if sx == 0:
        fx = 0
    else:
        fx = mnx / sx ** n

    if sy == 0:
        fy = 0
    else:
        fy = mny / sy ** n

    return fx, fy
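Stripped of Numba, the computation is a two-line formula. The sketch below re-implements it in plain Python (a hypothetical standalone name, not the library function) to make the zero-deviation guard explicit:

```python
def skewness_kurtosis(mnx, mny, sx, sy, n):
    """Plain-Python sketch of get_skewness_kurtosis: normalized n-th moments."""
    # Guard against division by zero, as the library function does
    fx = 0.0 if sx == 0 else mnx / sx ** n
    fy = 0.0 if sy == 0 else mny / sy ** n
    return fx, fy

print(skewness_kurtosis(1.5, 2.0, 0.5, 0.75, 3))
```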

get_standard_deviations(mo, binary_image, cx, cy)

Return spatial standard deviations for a given moment and binary image.

This function computes the square root of the variances along the x (horizontal) and y (vertical) axes for the given binary image and moments.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `mo` | `dict` | Dictionary containing moments of binary image. | *required* |
| `binary_image` | `ndarray of bool or int8` | The binary input image where the moments are computed. | *required* |
| `cx` | `float64` | X-coordinate of center of mass (horizontal position). | *required* |
| `cy` | `float64` | Y-coordinate of center of mass (vertical position). | *required* |

Returns:

| Type | Description |
|------|-------------|
| `tuple[float64, float64]` | The standard deviations along the x and y axes. |

Raises:

| Type | Description |
|------|-------------|
| `ValueError` | If binary_image is not a binary image or has an invalid datatype. |

Notes

This function uses the get_power_dists and get_var functions to compute the variances, which are then transformed into standard deviations.

Examples:

>>> import numpy as np
>>> import cv2
>>> binary_image = np.array([[0, 1], [1, 0]], dtype=np.uint8)
>>> mo = cv2.moments(binary_image, binaryImage=True)
>>> stdx, stdy = get_standard_deviations(mo, binary_image, 0.5, 0.5)  # doctest: +SKIP
Source code in src/cellects/utils/formulas.py
def get_standard_deviations(mo: dict, binary_image: NDArray, cx: float, cy: float) -> Tuple[float, float]:
    """
    Return spatial standard deviations for a given moment and binary image.

    This function computes the square root of variances along `x` (horizontal)
    and `y` (vertical) axes for the given binary image and moment.

    Parameters
    ----------
    mo : dict
        Dictionary containing moments of binary image.
    binary_image : ndarray of bool or int8
        The binary input image where the moments are computed.
    cx : float64
        X-coordinate of center of mass (horizontal position).
    cy : float64
        Y-coordinate of center of mass (vertical position).

    Returns
    -------
    tuple[float64, float64]
        The standard deviations along the x and y axes.

    Raises
    ------
    ValueError
        If `binary_image` is not a binary image or has an invalid datatype.

    Notes
    -----
    This function uses the `get_power_dists` and `get_var` functions to compute
    the distributed variances, which are then transformed into standard deviations.

    Examples
    --------
    >>> import numpy as np
    >>> import cv2
    >>> binary_image = np.array([[0, 1], [1, 0]], dtype=np.uint8)
    >>> mo = cv2.moments(binary_image, binaryImage=True)
    >>> stdx, stdy = get_standard_deviations(mo, binary_image, 0.5, 0.5)  # doctest: +SKIP
    """
    x2, y2 = get_power_dists(binary_image, cx, cy, 2)
    X2, Y2 = np.meshgrid(x2, y2)
    vx, vy = get_var(mo, binary_image, X2, Y2)
    return np.sqrt(vx), np.sqrt(vy)
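For intuition, the same quantities can be computed directly from pixel coordinates with NumPy. This is a hedged sketch of what the `get_power_dists`/`get_var` pipeline produces, not a call into the library:

```python
import numpy as np

# A small plus-shaped binary image
img = np.array([[0, 1, 0],
                [1, 1, 1],
                [0, 1, 0]], dtype=np.uint8)
ys, xs = np.nonzero(img)
cx, cy = xs.mean(), ys.mean()            # center of mass: (1.0, 1.0)
sx = np.sqrt(((xs - cx) ** 2).mean())    # std of the x-coordinates
sy = np.sqrt(((ys - cy) ** 2).mean())    # std of the y-coordinates
print(sx, sy)
```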

get_terminations_and_their_connected_nodes(pad_skeleton, cnv4, cnv8)

Get terminations in a skeleton and their connected nodes.

This function identifies termination points in a padded skeleton array based on pixel connectivity, marking them and their connected nodes.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `pad_skeleton` | `ndarray of uint8` | The padded skeleton array where terminations are to be identified. | *required* |
| `cnv4` | `CompareNeighborsWithValue` | Neighbor-comparison object with 4-connectivity. | *required* |
| `cnv8` | `CompareNeighborsWithValue` | Neighbor-comparison object with 8-connectivity. | *required* |

Returns:

| Name | Type | Description |
|------|------|-------------|
| `out` | `ndarray of uint8` | Array containing marked terminations and their connected nodes. |

Examples:

>>> result = get_terminations_and_their_connected_nodes(pad_skeleton, cnv4, cnv8)  # doctest: +SKIP
Source code in src/cellects/image_analysis/network_functions.py
def get_terminations_and_their_connected_nodes(pad_skeleton: NDArray[np.uint8], cnv4: object, cnv8: object) -> NDArray[np.uint8]:
    """
    Get terminations in a skeleton and their connected nodes.

    This function identifies termination points in a padded skeleton array
    based on pixel connectivity, marking them and their connected nodes.

    Parameters
    ----------
    pad_skeleton : ndarray of uint8
        The padded skeleton array where terminations are to be identified.
    cnv4 : CompareNeighborsWithValue
        Neighbor-comparison object with 4-connectivity.
    cnv8 : CompareNeighborsWithValue
        Neighbor-comparison object with 8-connectivity.

    Returns
    -------
    out : ndarray of uint8
        Array containing marked terminations and their connected nodes.

    Examples
    --------
    >>> result = get_terminations_and_their_connected_nodes(pad_skeleton, cnv4, cnv8)  # doctest: +SKIP
    """
    # All pixels having only one neighbor, and containing the value 1, are terminations for sure
    potential_tips = np.zeros(pad_skeleton.shape, dtype=np.uint8)
    potential_tips[cnv8.equal_neighbor_nb == 1] = 1
    # Add more terminations using 4-connectivity
    # If a pixel is 1 (in 4) and all its neighbors are neighbors (in 4), it is a termination

    coord1_4 = cnv4.equal_neighbor_nb == 1
    if np.any(coord1_4):
        coord1_4 = np.nonzero(coord1_4)
        for y1, x1 in zip(coord1_4[0], coord1_4[1]): # y1, x1 = 3,5
            # If, in the neighborhood of the 1 (in 4), all (in 8) its neighbors are 4-connected together, and none of them are terminations, the 1 is a termination
            is_4neigh = cnv4.equal_neighbor_nb[(y1 - 1):(y1 + 2), (x1 - 1):(x1 + 2)] != 0
            all_4_connected = pad_skeleton[(y1 - 1):(y1 + 2), (x1 - 1):(x1 + 2)] == is_4neigh
            is_not_term = 1 - potential_tips[y1, x1]
            if np.all(all_4_connected * is_not_term):
                is_4neigh[1, 1] = 0
                is_4neigh = np.pad(is_4neigh, [(1,), (1,)], mode='constant')
                cnv_4con = CompareNeighborsWithValue(is_4neigh, 4)
                cnv_4con.is_equal(1, and_itself=True)
                all_connected = (is_4neigh.sum() - (cnv_4con.equal_neighbor_nb > 0).sum()) == 0
                # If they are connected, it can be a termination
                if all_connected:
                    # If its closest neighbor is above 3 (in 8), this one is also a node
                    is_closest_above_3 = cnv8.equal_neighbor_nb[(y1 - 1):(y1 + 2), (x1 - 1):(x1 + 2)] * cross_33 > 3
                    if np.any(is_closest_above_3):
                        Y, X = np.nonzero(is_closest_above_3)
                        Y += y1 - 1
                        X += x1 - 1
                        potential_tips[Y, X] = 1
                    potential_tips[y1, x1] = 1
    return potential_tips

get_var(mo, binary_image, Xn, Yn)

Compute the moment-normalized weighted average of power distances.

This function computes, for each axis, the sum of the given power-distance grids weighted by the binary image, normalized by the zeroth moment m00. With squared distances it yields the variance along x and y.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `mo` | `dict` | Dictionary containing moments of binary image. | *required* |
| `binary_image` | `ndarray` | 2D binary image where non-zero pixels are considered. | *required* |
| `Xn` | `ndarray` | Grid of n-th-power x-distances for each pixel in binary_image. | *required* |
| `Yn` | `ndarray` | Grid of n-th-power y-distances for each pixel in binary_image. | *required* |

Returns:

| Type | Description |
|------|-------------|
| `tuple` | A tuple of two floats (vx, vy). Both are 0. when mo['m00'] is zero (no valid pixels). |

Notes

Performance considerations: This function uses Numba's @njit decorator for performance.

Source code in src/cellects/utils/formulas.py
@njit()
def get_var(mo: dict, binary_image: NDArray, Xn: NDArray, Yn: NDArray) -> Tuple[float, float]:
    """
    Compute the moment-normalized weighted average of power distances.

    This function computes, for each axis, the sum of the given
    power-distance grids weighted by the binary image, normalized by the
    zeroth moment ``m00``. With squared distances it yields the variance
    along x and y.

    Parameters
    ----------
    mo : dict
        Dictionary containing moments of binary image.
    binary_image : ndarray
        2D binary image where non-zero pixels are considered.
    Xn : ndarray
        Array of x-coordinates for each pixel in `binary_image`.
    Yn : ndarray
        Array of y-coordinates for each pixel in `binary_image`.

    Returns
    -------
    tuple
        A tuple of two floats ``(vx, vy)``. Both are ``0.`` when
        ``mo['m00']`` is zero (no valid pixels).

    Notes
    -----
    Performance considerations: This function uses Numba's `@njit` decorator for performance.
    """
    if mo['m00'] == 0:
        vx, vy = 0., 0.
    else:
        vx = np.sum(binary_image * Xn) / mo["m00"]
        vy = np.sum(binary_image * Yn) / mo["m00"]
    return vx, vy

get_vertices_and_tips_from_skeleton(pad_skeleton)

Get vertices and tips from a padded skeleton.

This function identifies the vertices and tips of a skeletonized image. Tips are endpoints of the skeleton, while vertices include tips and points where three or more edges meet.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `pad_skeleton` | `ndarray of uint8` | Input skeleton image that has been padded. | *required* |

Returns:

| Name | Type | Description |
|------|------|-------------|
| `out` | `tuple (ndarray of uint8, ndarray of uint8)` | Tuple containing arrays of vertex points and tip points. |

Source code in src/cellects/image_analysis/network_functions.py
def get_vertices_and_tips_from_skeleton(pad_skeleton: NDArray[np.uint8]) -> Tuple[NDArray[np.uint8], NDArray[np.uint8]]:
    """
    Get vertices and tips from a padded skeleton.

    This function identifies the vertices and tips of a skeletonized image.
    Tips are endpoints of the skeleton while vertices include tips and points where three or more edges meet.

    Parameters
    ----------
    pad_skeleton : ndarray of uint8
        Input skeleton image that has been padded.

    Returns
    -------
    out : tuple (ndarray of uint8, ndarray of uint8)
        Tuple containing arrays of vertex points and tip points.
    """
    cnv4, cnv8 = get_neighbor_comparisons(pad_skeleton)
    potential_tips = get_terminations_and_their_connected_nodes(pad_skeleton, cnv4, cnv8)
    pad_vertices, pad_tips = get_inner_vertices(pad_skeleton, potential_tips, cnv4, cnv8)
    return pad_vertices, pad_tips

image_borders(dimensions, shape='rectangular')

Create an image with borders, either rectangular or elliptical.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `dimensions` | `tuple` | The dimensions of the image (height, width). | *required* |
| `shape` | `str` | The shape of the borders. Pass "circle" for an elliptical border; any other value yields a rectangular border. | `'rectangular'` |

Returns:

| Name | Type | Description |
|------|------|-------------|
| `out` | `ndarray of uint8` | The image with borders: an ellipse border if shape is "circle", a rectangular border otherwise. |

Examples:

>>> borders = image_borders((3, 3))
>>> print(borders)
[[0 0 0]
 [0 1 0]
 [0 0 0]]
Source code in src/cellects/image_analysis/morphological_operations.py
def image_borders(dimensions: tuple, shape: str="rectangular") -> NDArray[np.uint8]:
    """
    Create an image with borders, either rectangular or circular.

    Parameters
    ----------
    dimensions : tuple
        The dimensions of the image (height, width).
    shape : str, optional
        The shape of the borders. Pass "circle" for an elliptical border;
        any other value yields a rectangular border. Defaults to "rectangular".

    Returns
    -------
    out : ndarray of uint8
        The image with borders: an ellipse border if `shape` is "circle",
        a rectangular border otherwise.

    Examples
    --------
    >>> borders = image_borders((3, 3))
    >>> print(borders)
    [[0 0 0]
     [0 1 0]
     [0 0 0]]
    """
    if shape == "circle":
        borders = create_ellipse(dimensions[0], dimensions[1])
        img_contours = image_borders(dimensions)
        borders = borders * img_contours
    else:
        borders = np.ones(dimensions, dtype=np.uint8)
        borders[0, :] = 0
        borders[:, 0] = 0
        borders[- 1, :] = 0
        borders[:, - 1] = 0
    return borders
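A minimal standalone sketch of the default (rectangular) branch; a typical use is zeroing out detections that touch the frame, e.g. `masked = detection * borders`:

```python
import numpy as np

def rect_borders(dimensions):
    # Ones everywhere except a one-pixel frame of zeros, as in image_borders
    borders = np.ones(dimensions, dtype=np.uint8)
    borders[0, :] = borders[-1, :] = 0
    borders[:, 0] = borders[:, -1] = 0
    return borders

print(rect_borders((3, 3)))
```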

insensitive_glob(pattern)

Run a glob that matches letters case-insensitively.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `pattern` | `str` | The glob pattern to be made case-insensitive. | *required* |

Returns:

| Type | Description |
|------|-------------|
| `list` | The paths matching the case-insensitive version of the pattern. |

Examples:

>>> insensitive_glob('*.TXT')  # doctest: +SKIP
Source code in src/cellects/utils/utilitarian.py
def insensitive_glob(pattern: str):
    """
    Run a glob that matches letters case-insensitively.

    Parameters
    ----------
    pattern : str
        The glob pattern to be made case-insensitive.

    Returns
    -------
    list
        The paths matching the case-insensitive version of the pattern.

    Examples
    --------
    >>> insensitive_glob('*.TXT')  # doctest: +SKIP
    """
    def either(c):
        return '[%s%s]' % (c.lower(), c.upper()) if c.isalpha() else c
    return glob(''.join(map(either, pattern)))
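The pattern transformation itself is easy to inspect in isolation: each letter becomes a two-character bracket class, while other characters pass through unchanged.

```python
def either(c):
    # 'a' -> '[aA]'; digits and punctuation pass through unchanged
    return '[%s%s]' % (c.lower(), c.upper()) if c.isalpha() else c

pattern = ''.join(map(either, '*.tif'))
print(pattern)  # *.[tT][iI][fF]
```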

is_raw_image(image_path)

Determine if the image path corresponds to a raw image.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `image_path` | `str` | The file path of the image. | *required* |

Returns:

| Type | Description |
|------|-------------|
| `bool` | True if the image is considered raw, False otherwise. |

Examples:

>>> result = is_raw_image("image.jpg")
>>> print(result)
False
Source code in src/cellects/utils/load_display_save.py
def is_raw_image(image_path) -> bool:
    """
    Determine if the image path corresponds to a raw image.

    Parameters
    ----------
    image_path : str
        The file path of the image.

    Returns
    -------
    bool
        True if the image is considered raw, False otherwise.

    Examples
    --------
    >>> result = is_raw_image("image.jpg")
    >>> print(result)
    False
    """
    ext = image_path.split(".")[-1]
    if np.isin(ext, opencv_accepted_formats):
        raw_image = False
    else:
        raw_image = True
    return raw_image
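The decision reduces to an extension lookup. A minimal sketch (the real `opencv_accepted_formats` list lives in the library; the subset below is illustrative only):

```python
accepted = ("jpg", "jpeg", "png", "bmp", "tif", "tiff")  # illustrative subset

def is_raw(image_path):
    # Anything OpenCV cannot decode directly is treated as a raw image.
    # Note: like the original, the comparison is case-sensitive.
    return image_path.split(".")[-1] not in accepted

print(is_raw("image.jpg"), is_raw("image.cr2"))
```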

keep_one_connected_component(binary_image)

Keep only one connected component in a binary image.

This function filters out all but the largest connected component in a binary image, effectively isolating it from other noise or objects. The function ensures the input is in uint8 format before processing.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `binary_image` | `ndarray of uint8` | Binary image containing one or more connected components. | *required* |

Returns:

| Type | Description |
|------|-------------|
| `ndarray of uint8` | Image with only the largest connected component retained. |

Examples:

>>> all_shapes = np.zeros((5, 5), dtype=np.uint8)
>>> all_shapes[0:2, 0:2] = 1
>>> all_shapes[3:4, 3:4] = 1
>>> res = keep_one_connected_component(all_shapes)
>>> print(res)
[[1 1 0 0 0]
 [1 1 0 0 0]
 [0 0 0 0 0]
 [0 0 0 0 0]
 [0 0 0 0 0]]
Source code in src/cellects/image_analysis/morphological_operations.py
def keep_one_connected_component(binary_image: NDArray[np.uint8])-> NDArray[np.uint8]:
    """
    Keep only one connected component in a binary image.

    This function filters out all but the largest connected component in
    a binary image, effectively isolating it from other noise or objects.
    The function ensures the input is in uint8 format before processing.

    Parameters
    ----------
    binary_image : ndarray of uint8
        Binary image containing one or more connected components.

    Returns
    -------
    ndarray of uint8
        Image with only the largest connected component retained.

    Examples
    --------
    >>> all_shapes = np.zeros((5, 5), dtype=np.uint8)
    >>> all_shapes[0:2, 0:2] = 1
    >>> all_shapes[3:4, 3:4] = 1
    >>> res = keep_one_connected_component(all_shapes)
    >>> print(res)
    [[1 1 0 0 0]
     [1 1 0 0 0]
     [0 0 0 0 0]
     [0 0 0 0 0]
     [0 0 0 0 0]]
    """
    if binary_image.dtype != np.uint8:
        binary_image = binary_image.astype(np.uint8)
    num_labels, sh = cv2.connectedComponents(binary_image)
    if num_labels <= 1:
        return binary_image.astype(np.uint8)
    else:
        return keep_largest_shape(sh)

linear_model(x, a, b)

Perform a linear transformation on input data using slope and intercept.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `x` | `array_like` | Input data. | *required* |
| `a` | `float` | Slope coefficient. | *required* |
| `b` | `float` | Intercept. | *required* |

Returns:

| Type | Description |
|------|-------------|
| `float` | Resulting value from the linear transformation a * x + b. |

Examples:

>>> result = linear_model(5, 2.0, 1.5)
>>> print(result)
11.5

Notes

This function uses Numba's @njit decorator for performance.

Source code in src/cellects/utils/formulas.py
@njit()
def linear_model(x: NDArray, a: float, b: float) -> float:
    """
    Perform a linear transformation on input data using slope and intercept.

    Parameters
    ----------
    x : array_like
        Input data.
    a : float
        Slope coefficient.
    b : float
        Intercept.

    Returns
    -------
    float
        Resulting value from linear transformation: `a` * `x` + `b`.

    Examples
    --------
    >>> result = linear_model(5, 2.0, 1.5)
    >>> print(result)  # doctest: +SKIP
    11.5

    Notes
    -----
    This function uses Numba's @njit decorator for performance.
    """
    return a * x + b

list_image_dir(path_to_images='', img_extension='', img_radical='')

List files in an image directory based on optional naming patterns (extension and/or radical).

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `path_to_images` | `str or Path` | The path to the directory containing images. | `''` |
| `img_extension` | `str` | The file extension of the images to be listed. When left empty, the extension of the most numerous image files in the folder is used. | `''` |
| `img_radical` | `str` | The radical part of the filenames to be listed. | `''` |

Returns:

| Type | Description |
|------|-------------|
| `list` | A list of image filenames that match the specified criteria, sorted in natural order. |

Notes

This function uses the natsorted and insensitive_glob utilities to ensure that filenames are sorted in a human-readable order.

Examples:

>>> pathway = Path(__name__).resolve().parents[0] / "data" / "single_experiment"
>>> image_list = list_image_dir(pathway)
>>> print(image_list)  # doctest: +SKIP
Source code in src/cellects/utils/load_display_save.py
def list_image_dir(path_to_images='', img_extension: str='', img_radical: str='') -> list:
    """
    List files in an image directory based on optional naming patterns (extension and/or radical).

    Parameters
    ----------
    path_to_images : str or Path, optional
        The path to the directory containing images. Default is an empty string.
    img_extension : str, optional
        The file extension of the images to be listed. Default is an empty string.
        When left empty, the extension of the most numerous image files in the
        folder is used.
    img_radical : str, optional
        The radical part of the filenames to be listed. Default is an empty string.

    Returns
    -------
    list
        A list of image filenames that match the specified criteria,
        sorted in a natural order.

    Notes
    -----
    This function uses the `natsorted` and `insensitive_glob` utilities to ensure
    that filenames are sorted in a human-readable order.

    Examples
    --------
    >>> pathway = Path(__name__).resolve().parents[0] / "data" / "single_experiment"
    >>> image_list = list_image_dir(pathway)
    >>> print(image_list)
    """
    if isinstance(path_to_images, str):
        path_to_images = Path(path_to_images)
    os.chdir(path_to_images)
    if len(img_extension) == 0:
        imgs = insensitive_glob(f'{img_radical}*')
        matches = np.zeros(len(opencv_accepted_formats))
        for e_i, ext in enumerate(opencv_accepted_formats):
            matches[e_i] = np.char.endswith(imgs, ext).sum()
        img_extension = opencv_accepted_formats[np.argmax(matches)]
    imgs = insensitive_glob(f'{img_radical}*{img_extension}')
    imgs = natsorted(imgs)
    return imgs
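Natural-order sorting matters when frame numbers are not zero-padded: plain lexicographic sorting would put img10 before img2. A stdlib-only sketch of what `natsorted` does here (the split-on-digits key is an assumption that covers simple numbered filenames):

```python
import re

def natural_key(name):
    # Split into digit and non-digit runs so "img10" sorts after "img2"
    return [int(tok) if tok.isdigit() else tok.lower()
            for tok in re.split(r"(\d+)", name)]

files = ["img10.tif", "img2.TIF", "img1.tif"]
print(sorted(files, key=natural_key))
```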

movie(video, increase_contrast=True)

Processes a video to display each frame with optional contrast increase and resizing.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `video` | `ndarray` | The input video represented as a 3D NumPy array. | *required* |
| `increase_contrast` | `bool` | Flag to increase the contrast of each frame. | `True` |

Returns:

| Type | Description |
|------|-------------|
| `None` | The frames are displayed in an OpenCV window. |

Raises:

| Type | Description |
|------|-------------|
| `ValueError` | If video is not a 3D NumPy array. |

Notes

This function uses OpenCV's imshow to display each frame; press "q" to stop early. Ensure that the required OpenCV dependencies are met.

Examples:

>>> movie(video)  # doctest: +SKIP
>>> movie(video, increase_contrast=False)  # doctest: +SKIP
Source code in src/cellects/utils/load_display_save.py
def movie(video, increase_contrast: bool=True):
    """
    Processes a video to display each frame with optional contrast increase and resizing.

    Parameters
    ----------
    video : numpy.ndarray
        The input video represented as a 3D NumPy array.
    increase_contrast : bool, optional
        Flag to increase the contrast of each frame (default is True).

    Returns
    -------
    None

    Raises
    ------
    ValueError
        If `video` is not a 3D NumPy array.

    Notes
    -----
    This function uses OpenCV's `imshow` to display each frame. Ensure that the required
    OpenCV dependencies are met.

    Examples
    --------
    >>> movie(video)  # doctest: +SKIP
    >>> movie(video, increase_contrast=False)  # doctest: +SKIP

    """
    for i in np.arange(video.shape[0]):
        image = video[i, :, :]
        if np.any(image):
            if increase_contrast:
                image = bracket_to_uint8_image_contrast(image)
            final_img = cv2.resize(image, (500, 500))
            cv2.imshow('Motion analysis', final_img)
            if cv2.waitKey(25) & 0xFF == ord('q'):
                break
    cv2.destroyAllWindows()

moving_average(vector, step)

Calculate the moving average of a given vector with specified step size.

Computes the moving average of the input vector using the specified step size. NaN values are treated as zeros in the calculation to allow for continuous averaging.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `vector` | `ndarray` | Input vector for which to calculate the moving average. | *required* |
| `step` | `int` | Size of the window for computing the moving average. | *required* |

Returns:

| Type | Description |
|------|-------------|
| `ndarray` | Vector containing the moving averages of the input vector. |

Raises:

| Type | Description |
|------|-------------|
| `ValueError` | If step is less than 1, or if the input vector has no valid (non-NaN) elements. |

Notes

- The function considers NaN values as zeros during the averaging process.
- If step is greater than or equal to the length of the vector, a warning will be raised.

Examples:

>>> import numpy as np
>>> vector = np.array([1.0, 2.0, np.nan, 4.0, 5.0])
>>> result = moving_average(vector, 3)  # doctest: +SKIP
Source code in src/cellects/utils/formulas.py
def moving_average(vector: NDArray, step: int) -> NDArray[float]:
    """
    Calculate the moving average of a given vector with specified step size.

    Computes the moving average of input `vector` using specified `step`
    size. NaN values are treated as zeros in the calculation to allow
    for continuous averaging.

    Parameters
    ----------
    vector : ndarray
        Input vector for which to calculate the moving average.
    step : int
        Size of the window for computing the moving average.

    Returns
    -------
    numpy.ndarray
        Vector containing the moving averages of the input vector.

    Raises
    ------
    ValueError
        If `step` is less than 1.
    ValueError
        If the input vector has no valid (non-NaN) elements.

    Notes
    -----
    - The function considers NaN values as zeros during the averaging process.
    - If `step` is greater than or equal to the length of the vector, a warning will be raised.

    Examples
    --------
    >>> import numpy as np
    >>> vector = np.array([1.0, 2.0, np.nan, 4.0, 5.0])
    >>> result = moving_average(vector, 3)  # doctest: +SKIP
    """
    substep = np.array((- int(np.floor((step - 1) / 2)), int(np.ceil((step - 1) / 2))))
    sums = np.zeros(vector.shape)
    n_okays = sums.copy()
    true_numbers = np.logical_not(np.isnan(vector))
    vector[np.logical_not(true_numbers)] = 0
    for step_i in np.arange(substep[1] + 1):
        sums[step_i: (sums.size - step_i)] = sums[step_i: (sums.size - step_i)] + vector[(2 * step_i):]
        n_okays[step_i: (sums.size - step_i)] = n_okays[step_i: (sums.size - step_i)] + true_numbers[(2 * step_i):]
        if np.logical_and(step_i > 0, step_i < np.absolute(substep[0])):
            sums[step_i: (sums.size - step_i)] = sums[step_i: (sums.size - step_i)] + vector[:(sums.size - (2 * step_i)):]
            n_okays[step_i: (sums.size - step_i)] = n_okays[step_i: (sums.size - step_i)] + true_numbers[:(
                        true_numbers.size - (2 * step_i))]
    vector = sums / n_okays
    return vector
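A NaN-aware moving average can also be sketched with a vectorized convolution. This is an illustrative alternative using a symmetric centered window, so its edge handling does not necessarily match the loop above:

```python
import numpy as np

def nan_moving_average(vector, step):
    # NaN-aware centered moving average: NaNs contribute neither to the
    # sums nor to the divisor, so valid neighbors still produce a value
    v = np.asarray(vector, dtype=float)
    valid = ~np.isnan(v)
    kernel = np.ones(step)
    sums = np.convolve(np.where(valid, v, 0.0), kernel, mode="same")
    counts = np.convolve(valid.astype(float), kernel, mode="same")
    return sums / counts

print(nan_moving_average([1.0, 2.0, np.nan, 4.0, 5.0], 3))
```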

njit(*args, **kwargs)

numba.njit decorator that can be disabled. Useful for testing.

Source code in src/cellects/utils/decorators.py
def njit(*args, **kwargs):
    """ numba.njit decorator that can be disabled. Useful for testing.
    """
    if USE_NUMBA:
        return _real_njit(*args, **kwargs)
    # test mode: return identity decorator
    def deco(func):
        return func
    return deco
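This toggleable-decorator pattern is easy to exercise on its own: with the flag off, `@njit(...)` degrades to an identity decorator, so the plain Python function runs under test. A self-contained sketch (the flag name mirrors the library's, but the snippet is standalone):

```python
USE_NUMBA = False  # in the library this flag lives in cellects.utils.decorators

def njit(*args, **kwargs):
    if USE_NUMBA:
        from numba import njit as _real_njit  # only imported when enabled
        return _real_njit(*args, **kwargs)
    # Test mode: ignore the numba arguments and return the function unchanged
    def deco(func):
        return func
    return deco

@njit(cache=True)
def add(a, b):
    return a + b

print(add(2, 3), add.__name__)
```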

read_and_rotate(image_name, prev_img=None, raw_images=False, is_landscape=True, crop_coord=None)

Read and rotate an image based on specified parameters.

This function reads an image from the given file name, optionally rotates it by 90 degrees clockwise or counterclockwise based on its dimensions and the is_landscape flag, and applies cropping if specified. It also compares rotated images against a previous image to choose the best rotation.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `image_name` | `str` | Name of the image file to read. | *required* |
| `prev_img` | `ndarray` | Previous image for comparison. | `None` |
| `raw_images` | `bool` | Flag to read raw images. | `False` |
| `is_landscape` | `bool` | Flag to determine if the image should be considered in landscape mode. | `True` |
| `crop_coord` | `ndarray` | Coordinates for cropping the image. | `None` |

Returns:

| Type | Description |
|------|-------------|
| `ndarray` | Rotated and optionally cropped image. |

Raises:

| Type | Description |
|------|-------------|
| `FileNotFoundError` | If the specified image file does not exist. |

Examples:

>>> pathway = Path(__name__).resolve().parents[0] / "data" / "single_experiment"
>>> image_name = 'image1.tif'
>>> image = read_and_rotate(pathway / image_name)
>>> print(image.shape)
(245, 300, 3)
Source code in src/cellects/utils/load_display_save.py
def read_and_rotate(image_name, prev_img: NDArray=None, raw_images: bool=False, is_landscape: bool=True, crop_coord: NDArray=None) -> NDArray:
    """
    Read and rotate an image based on specified parameters.

    This function reads an image from the given file name, optionally rotates
    it by 90 degrees clockwise or counterclockwise based on its dimensions and
    the `is_landscape` flag, and applies cropping if specified. It also compares
    rotated images against a previous image to choose the best rotation.

    Parameters
    ----------
    image_name : str
        Name of the image file to read.
    prev_img : ndarray, optional
        Previous image for comparison. Default is `None`.
    raw_images : bool, optional
        Flag to read raw images. Default is `False`.
    is_landscape : bool, optional
        Flag to determine if the image should be considered in landscape mode.
        Default is `True`.
    crop_coord : ndarray, optional
        Coordinates for cropping the image. Default is `None`.

    Returns
    -------
    ndarray
        Rotated and optionally cropped image.

    Raises
    ------
    FileNotFoundError
        If the specified image file does not exist.

    Examples
    --------
    >>> pathway = Path(__name__).resolve().parents[0] / "data" / "single_experiment"
    >>> image_name = 'image1.tif'
    >>> image = read_and_rotate(pathway / image_name)
    >>> print(image.shape)
    (245, 300, 3)
    """
    if not os.path.exists(image_name):
        raise FileNotFoundError(image_name)
    img = readim(image_name, raw_images)
    if (img.shape[0] > img.shape[1] and is_landscape) or (img.shape[0] < img.shape[1] and not is_landscape):
        clockwise = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)
        if crop_coord is not None:
            clockwise = clockwise[crop_coord[0]:crop_coord[1], crop_coord[2]:crop_coord[3], ...]
        if prev_img is not None:
            prev_img = np.int16(prev_img)
            clock_diff = sum_of_abs_differences(prev_img, np.int16(clockwise))
            counter_clockwise = cv2.rotate(img, cv2.ROTATE_90_COUNTERCLOCKWISE)
            if crop_coord is not None:
                counter_clockwise = counter_clockwise[crop_coord[0]:crop_coord[1], crop_coord[2]:crop_coord[3], ...]
            counter_clock_diff = sum_of_abs_differences(prev_img, np.int16(counter_clockwise))
            if clock_diff > counter_clock_diff:
                img = counter_clockwise
            else:
                img = clockwise
        else:
            img = clockwise
    else:
        if crop_coord is not None:
            img = img[crop_coord[0]:crop_coord[1], crop_coord[2]:crop_coord[3], ...]
    return img

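The orientation choice above can be sketched with NumPy alone: compare each 90° candidate against the previous frame and keep the closer one. `pick_rotation` is a hypothetical helper, and `np.rot90` stands in for `cv2.rotate`; this is an illustration of the selection rule, not the Cellects implementation.

```python
import numpy as np

def pick_rotation(prev_img, clockwise, counter_clockwise):
    # The sum of absolute differences against the previous frame decides
    # which 90-degree rotation keeps the sequence consistent.
    prev = prev_img.astype(np.int16)
    clock_diff = np.abs(prev - clockwise.astype(np.int16)).sum()
    counter_diff = np.abs(prev - counter_clockwise.astype(np.int16)).sum()
    return counter_clockwise if clock_diff > counter_diff else clockwise

base = np.arange(12, dtype=np.uint8).reshape(3, 4)
prev = np.rot90(base, k=-1)   # previous frame was rotated clockwise
cw = np.rot90(base, k=-1)     # clockwise candidate
ccw = np.rot90(base, k=1)     # counter-clockwise candidate
chosen = pick_rotation(prev, cw, ccw)
```

Since `prev` exactly matches the clockwise candidate, its difference is zero and that rotation wins.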
read_h5(file_name, key='data')

Read data array from an HDF5 file.

This function reads a specific dataset from an HDF5 file using the provided key.

Parameters:

    file_name : str, required
        The path to the HDF5 file.
    key : str, default 'data'
        The dataset name within the HDF5 file.

Returns:

    ndarray
        The data array from the specified dataset in the HDF5 file.

Source code in src/cellects/utils/load_display_save.py
def read_h5(file_name, key: str="data"):
    """
    Read data array from an HDF5 file.

    This function reads a specific dataset from an HDF5 file using the provided key.

    Parameters
    ----------
    file_name : str
        The path to the HDF5 file.
    key : str, optional, default: 'data'
        The dataset name within the HDF5 file.

    Returns
    -------
    ndarray
        The data array from the specified dataset in the HDF5 file.
    """
    if os.path.isfile(file_name):
        with h5py.File(file_name, 'r') as h5f:
            if key in h5f:
                data = h5f[key][:]
                return data
            else:
                return None
    else:
        return None

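As a quick sketch of this read path, the snippet below writes a small dataset under the default `'data'` key and reads it back the same way `read_h5` does, guarding on key presence. The temporary file name is illustrative; this assumes `h5py` is installed.

```python
import os
import tempfile

import h5py
import numpy as np

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "arena.h5")
    # Write a dataset under the default key used by read_h5.
    with h5py.File(path, "w") as h5f:
        h5f.create_dataset("data", data=np.arange(6).reshape(2, 3))
    # Read it back, returning None when the key is missing.
    with h5py.File(path, "r") as h5f:
        data = h5f["data"][:] if "data" in h5f else None
```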
read_one_arena(arena_label, already_greyscale, csc_dict, videos_already_in_ram=None, true_frame_width=None, vid_name=None, background=None, background2=None)

Read a single arena's video data, potentially converting it from color to greyscale.

Parameters:

    arena_label : int, required
        The label of the arena.
    already_greyscale : bool, required
        Whether the video is already in greyscale format.
    csc_dict : dict, required
        Dictionary containing color space conversion settings.
    videos_already_in_ram : ndarray, default None
        Pre-loaded video frames in memory.
    true_frame_width : int, default None
        The true width of the video frames.
    vid_name : str, default None
        Name of the video file.
    background : ndarray, default None
        Background image for subtractions.
    background2 : ndarray, default None
        Second background image for subtractions.

Returns:

    tuple
        A tuple containing:
        - visu : ndarray or None, the visual frame.
        - converted_video : ndarray or None, the video data converted as needed.
        - converted_video2 : ndarray or None, additional video data if necessary.

Raises:

    FileNotFoundError
        If the specified video file does not exist.
    ValueError
        If the video data shape is invalid.

Notes

This function assumes that video2numpy is a helper function available in the scope. For optimal performance, ensure all video data fits in RAM.

Source code in src/cellects/utils/load_display_save.py
def read_one_arena(arena_label, already_greyscale:bool, csc_dict: dict, videos_already_in_ram=None,
                   true_frame_width=None, vid_name: str=None, background: NDArray=None, background2: NDArray=None):
    """
    Read a single arena's video data, potentially converting it from color to greyscale.

    Parameters
    ----------
    arena_label : int
        The label of the arena.
    already_greyscale : bool
        Whether the video is already in greyscale format.
    csc_dict : dict
        Dictionary containing color space conversion settings.
    videos_already_in_ram : np.ndarray, optional
        Pre-loaded video frames in memory. Default is None.
    true_frame_width : int, optional
        The true width of the video frames. Default is None.
    vid_name : str, optional
        Name of the video file. Default is None.
    background : np.ndarray, optional
        Background image for subtractions. Default is None.
    background2 : np.ndarray, optional
        Second background image for subtractions. Default is None.

    Returns
    -------
    tuple
        A tuple containing:
            - visu: np.ndarray or None, the visual frame.
            - converted_video: np.ndarray or None, the video data converted as needed.
            - converted_video2: np.ndarray or None, additional video data if necessary.

    Raises
    ------
    FileNotFoundError
        If the specified video file does not exist.
    ValueError
        If the video data shape is invalid.

    Notes
    -----
    This function assumes that `video2numpy` is a helper function available in the scope.
    For optimal performance, ensure all video data fits in RAM.
    """
    visu, converted_video, converted_video2 = None, None, None
    logging.info(f"Arena n°{arena_label}. Load images and videos")
    if videos_already_in_ram is not None:
        if already_greyscale:
            converted_video = videos_already_in_ram
        else:
            if csc_dict['logical'] == 'None':
                visu, converted_video = videos_already_in_ram
            else:
                visu, converted_video, converted_video2 = videos_already_in_ram
    else:
        if vid_name is not None:
            if already_greyscale:
                converted_video = video2numpy(vid_name, None, background, background2, true_frame_width)
                if len(converted_video.shape) == 4:
                    converted_video = converted_video[:, :, :, 0]
            else:
                visu = video2numpy(vid_name, None, background, background2, true_frame_width)
        else:
            vid_name = f"ind_{arena_label}.h5"
            h5_keys = get_h5_keys(vid_name)
            if os.path.isfile(vid_name) and 'video' in h5_keys:
                if already_greyscale:
                    converted_video = video2numpy(vid_name, None, background, background2, true_frame_width)
                    if len(converted_video.shape) == 4:
                        converted_video = converted_video[:, :, :, 0]
                else:
                    visu = video2numpy(vid_name, None, background, background2, true_frame_width)
    return visu, converted_video, converted_video2

read_rotate_crop_and_reduce_image(image_name, prev_img=None, crop_coord=None, cr=None, raw_images=False, is_landscape=True, reduce_image_dim=False)

Reads, rotates, crops (if specified), and reduces image dimensionality if required.

Parameters:

    image_name : str, required
        Name of the image file to read.
    prev_img : NDArray, default None
        Previous image array used for rotation reference or state tracking.
    crop_coord : list, default None
        List of four integers [y_start, y_end, x_start, x_end] specifying the cropping region. If None, no initial crop is applied.
    cr : list, default None
        List of four integers [y_start, y_end, x_start, x_end] for final cropping after rotation, applied to the already-cropped image.
    raw_images : bool, default False
        Flag indicating whether to process raw image data (True) or a processed image (False).
    is_landscape : bool, default True
        Whether the image is landscape-oriented and requires specific rotation handling.
    reduce_image_dim : bool, default False
        Whether to reduce the cropped image to a single channel (e.g., grayscale from RGB).

Returns:

    img : NDArray
        Processed image after rotation, cropping, and optional dimensionality reduction.
    prev_img : NDArray
        Copy of the image immediately after rotation but before any cropping operations.

Examples:

>>> import numpy as np
>>> img = np.random.rand(200, 300, 3)
>>> new_img, prev = read_rotate_crop_and_reduce_image("example.jpg", img, [50, 150, 75, 225], [20, 180, 40, 250], False, True, True)
>>> new_img.shape == (80, 110)
True
>>> prev.shape == (200, 300, 3)
True
Source code in src/cellects/utils/load_display_save.py
def read_rotate_crop_and_reduce_image(image_name: str, prev_img: NDArray=None, crop_coord: list=None, cr: list=None,
                                      raw_images: bool=False, is_landscape: bool=True, reduce_image_dim: bool=False):
    """
    Reads, rotates, crops (if specified), and reduces image dimensionality if required.

    Parameters
    ----------
    image_name : str
        Name of the image file to read.
    prev_img : NDArray
        Previous image array used for rotation reference or state tracking.
    crop_coord : list
        List of four integers [y_start, y_end, x_start, x_end] specifying the cropping region. If None, no initial crop is applied.
    cr : list
        List of four integers [y_start, y_end, x_start, x_end] for final cropping after rotation, applied to the already-cropped image.
    raw_images : bool
        Flag indicating whether to process raw image data (True) or processed image (False).
    is_landscape : bool
        Boolean determining if the image is landscape-oriented and requires specific rotation handling.
    reduce_image_dim : bool
        Whether to reduce the cropped image to a single channel (e.g., grayscale from RGB).

    Returns
    -------
    img : NDArray
        Processed image after rotation, cropping, and optional dimensionality reduction.
    prev_img : NDArray
        Copy of the image immediately after rotation but before any cropping operations.

    Examples
    --------
    >>> import numpy as np
    >>> img = np.random.rand(200, 300, 3)
    >>> new_img, prev = read_rotate_crop_and_reduce_image("example.jpg", img, [50, 150, 75, 225], [20, 180, 40, 250], False, True, True)
    >>> new_img.shape == (80, 110)
    True
    >>> prev.shape == (200, 300, 3)
    True
    """
    img = read_and_rotate(image_name, prev_img, raw_images, is_landscape)
    prev_img = img.copy()
    if crop_coord is not None:
        img = img[crop_coord[0]:crop_coord[1], crop_coord[2]:crop_coord[3], :]
    if cr is not None:
        img = img[cr[0]:cr[1], cr[2]:cr[3], :]
    if reduce_image_dim:
        img = img[:, :, 0]
    return img, prev_img

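Note that the two crops are applied sequentially, so `cr` indexes the image already cropped by `crop_coord`, and NumPy silently clamps out-of-range slice bounds. A minimal sketch of that order (the coordinate values are illustrative):

```python
import numpy as np

img = np.zeros((200, 300, 3), dtype=np.uint8)
crop_coord = [50, 150, 75, 225]   # first crop -> shape (100, 150, 3)
cr = [20, 180, 40, 250]           # second crop, bounds clamped to (100, 150)
img = img[crop_coord[0]:crop_coord[1], crop_coord[2]:crop_coord[3], :]
img = img[cr[0]:cr[1], cr[2]:cr[3], :]
img = img[:, :, 0]                # reduce_image_dim=True keeps one channel
```

The second crop asks for rows 20:180 and columns 40:250 of a 100x150 image, so slicing clamps it to 80 rows and 110 columns.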
read_tif_stack(vid_name, expected_channels=1)

Read video array from a tif file.

This function reads a specific dataset from a tif file.

Parameters:

    vid_name : str, required
        The path to the tif stack file.
    expected_channels : int, default 1
        The number of channels.

Returns:

    ndarray
        The data array from the tif file.

Source code in src/cellects/utils/load_display_save.py
def read_tif_stack(vid_name: str, expected_channels: int=1):
    """
        Read video array from a tif file.

        This function reads a specific dataset from a tif file.

        Parameters
        ----------
        vid_name : str
            The path to the tif stack file.
        expected_channels : int
            The number of channel.

        Returns
        -------
        ndarray
            The data array from the tif file.
    """
    all_frames = None
    if os.path.isfile(vid_name):
        with tifffile.TiffFile(vid_name) as tif:
            # Count the number of pages (frames and channels)
            num_pages = len(tif.pages)

            # Determine the shape of a single frame
            example_page = tif.pages[0]
            height, width = example_page.asarray().shape

            # Calculate the number of frames per channel based on expected_channels parameter
            frames_per_channel = num_pages // expected_channels

            # Initialize an array to hold all frames for each channel
            all_frames = np.zeros((frames_per_channel, height, width, expected_channels),
                                  dtype=example_page.asarray().dtype)

            # Read and store each frame
            for i in range(frames_per_channel):
                for channel in range(expected_channels):
                    page_index = i * expected_channels + channel
                    frame = tif.pages[page_index].asarray()
                    all_frames[i, :, :, channel] = frame
    return all_frames

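The page ordering `read_tif_stack` assumes is channel-interleaved: page `i * expected_channels + channel` holds frame `i` of that channel. A NumPy-only sketch of the same de-interleaving, where the `pages` array stands in for `tif.pages`:

```python
import numpy as np

num_frames, channels, h, w = 3, 2, 2, 2
# Fake page stack: 6 pages of 2x2, frame-major, channel-minor order.
pages = np.arange(num_frames * channels * h * w).reshape(num_frames * channels, h, w)

stack = np.zeros((num_frames, h, w, channels), dtype=pages.dtype)
for i in range(num_frames):
    for c in range(channels):
        # Same indexing rule as read_tif_stack's inner loop.
        stack[i, :, :, c] = pages[i * channels + c]
```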
readim(image_path, raw_image=False)

Read an image from a file and optionally process it.

Parameters:

    image_path : str, required
        Path to the image file.
    raw_image : bool, default False
        If True, logs an error message indicating that the raw image format cannot be processed.

Returns:

    ndarray
        The decoded image represented as a NumPy array of shape (height, width, channels).

Notes

No exception is raised for raw images: if raw_image is True, an error is logged and the file is still passed to cv2.imread. Raw image processing is not currently implemented.

Examples:

>>> readim("example.jpg")
array([[[255,   0,   0],
        [255,   0,   0]],

       [[  0, 255,   0],
        [  0, 255,   0]],

       [[  0,   0, 255],
        [  0,   0, 255]]], dtype=uint8)
Source code in src/cellects/utils/load_display_save.py
def readim(image_path, raw_image: bool=False):
    """
    Read an image from a file and optionally process it.

    Parameters
    ----------
    image_path : str
        Path to the image file.
    raw_image : bool, optional
        If True, logs an error message indicating that the raw image format cannot be processed. Default is False.

    Returns
    -------
    ndarray
        The decoded image represented as a NumPy array of shape (height, width, channels).

    Notes
    -----
    No exception is raised for raw images: if `raw_image` is True, an error is
    logged and the file is still read with `cv2.imread`. Raw image processing
    is not currently implemented.

    Examples
    --------
    >>> readim("example.jpg")
    array([[[255,   0,   0],
            [255,   0,   0]],

           [[  0, 255,   0],
            [  0, 255,   0]],

           [[  0,   0, 255],
            [  0,   0, 255]]], dtype=uint8)
    """
    if raw_image:
        logging.error("Cannot read this image format. If the rawpy package can, ask for a version of Cellects using it.")
        # import rawpy
        # raw = rawpy.imread(image_path)
        # raw = raw.postprocess()
        # return cv2.cvtColor(raw, COLOR_RGB2BGR)
        return cv2.imread(image_path)
    else:
        return cv2.imread(image_path)

remove_coordinates(arr1, arr2)

Remove coordinates from arr1 that are present in arr2.

Given two arrays of coordinates, remove rows from the first array that match any row in the second array.

Parameters:

    arr1 : ndarray of shape (n, 2), required
        Array containing coordinates to filter.
    arr2 : ndarray of shape (m, 2), required
        Array containing coordinates to match for removal.

Returns:

    ndarray of shape (k, 2)
        Array with coordinates from arr1 that are not in arr2.

Examples:

>>> arr1 = np.array([[1, 2], [3, 4]])
>>> arr2 = np.array([[3, 4]])
>>> remove_coordinates(arr1, arr2)
array([[1, 2]])
>>> arr1 = np.array([[1, 2], [3, 4]])
>>> arr2 = np.array([[3, 2], [1, 4]])
>>> remove_coordinates(arr1, arr2)
array([[1, 2],
       [3, 4]])
>>> arr1 = np.array([[1, 2], [3, 4]])
>>> arr2 = np.array([[3, 2], [1, 2]])
>>> remove_coordinates(arr1, arr2)
array([[3, 4]])
>>> arr1 = np.arange(200).reshape(100, 2)
>>> arr2 = np.array([[196, 197], [198, 199]])
>>> new_arr1 = remove_coordinates(arr1, arr2)
>>> new_arr1.shape
(98, 2)
Source code in src/cellects/utils/utilitarian.py
def remove_coordinates(arr1: NDArray, arr2: NDArray) -> NDArray:
    """
    Remove coordinates from `arr1` that are present in `arr2`.

    Given two arrays of coordinates, remove rows from the first array
    that match any row in the second array.

    Parameters
    ----------
    arr1 : ndarray of shape (n, 2)
        Array containing coordinates to filter.
    arr2 : ndarray of shape (m, 2)
        Array containing coordinates to match for removal.

    Returns
    -------
    ndarray of shape (k, 2)
        Array with coordinates from `arr1` that are not in `arr2`.

    Examples
    --------
    >>> arr1 = np.array([[1, 2], [3, 4]])
    >>> arr2 = np.array([[3, 4]])
    >>> remove_coordinates(arr1, arr2)
    array([[1, 2]])

    >>> arr1 = np.array([[1, 2], [3, 4]])
    >>> arr2 = np.array([[3, 2], [1, 4]])
    >>> remove_coordinates(arr1, arr2)
    array([[1, 2],
           [3, 4]])

    >>> arr1 = np.array([[1, 2], [3, 4]])
    >>> arr2 = np.array([[3, 2], [1, 2]])
    >>> remove_coordinates(arr1, arr2)
    array([[3, 4]])

    >>> arr1 = np.arange(200).reshape(100, 2)
    >>> arr2 = np.array([[196, 197], [198, 199]])
    >>> new_arr1 = remove_coordinates(arr1, arr2)
    >>> new_arr1.shape
    (98, 2)
    """
    if arr2.shape[0] == 0:
        return arr1
    else:
        if arr1.shape[1] != 2 or arr2.shape[1] != 2:
            raise ValueError("Both arrays must have shape (n, 2)")
        c_to_keep = ~np.all(arr1 == arr2[0], axis=1)
        for row in arr2[1:]:
            c_to_keep *= ~np.all(arr1 == row, axis=1)
        return arr1[c_to_keep]

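For moderate array sizes, the row-by-row loop above can also be written as a single broadcasted comparison. This is a sketch of an equivalent formulation, not the library implementation:

```python
import numpy as np

def remove_coordinates_broadcast(arr1, arr2):
    # Equivalent in behaviour to remove_coordinates, but vectorised.
    if arr2.shape[0] == 0:
        return arr1
    # (n, 1, 2) vs (1, m, 2) -> (n, m) matrix of full-row matches.
    matches = np.all(arr1[:, None, :] == arr2[None, :, :], axis=2).any(axis=1)
    return arr1[~matches]

arr1 = np.array([[1, 2], [3, 4]])
arr2 = np.array([[3, 2], [1, 2]])
out = remove_coordinates_broadcast(arr1, arr2)
```

The broadcast builds an (n, m, 2) comparison, so memory grows with n*m; the loop version is preferable when `arr2` is very large.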
remove_h5_key(file_name, key='data')

Remove a specified key from an HDF5 file.

This function opens an HDF5 file in append mode and deletes the specified key if it exists. It handles exceptions related to file not found and other runtime errors.

Parameters:

    file_name : str, required
        The path to the HDF5 file from which the key should be removed.
    key : str, default 'data'
        The name of the dataset or group to delete from the HDF5 file.

Returns:

    None

Notes

This function modifies the HDF5 file in place; ensure you have a backup if necessary. If the file does not exist or the key is absent, the function returns silently.

Source code in src/cellects/utils/load_display_save.py
def remove_h5_key(file_name, key: str="data"):
    """
    Remove a specified key from an HDF5 file.

    This function opens an HDF5 file in append mode and deletes the specified
    key if it exists. Missing files and missing keys are handled silently.

    Parameters
    ----------
    file_name : str
        The path to the HDF5 file from which the key should be removed.
    key : str, optional
        The name of the dataset or group to delete from the HDF5 file.
        Default is "data".

    Returns
    -------
    None

    Notes
    -----
    This function modifies the HDF5 file in place; ensure you have a backup if
    necessary. If the file does not exist or the key is absent, the function
    returns silently instead of raising.
    """
    if os.path.isfile(file_name):
        with h5py.File(file_name, 'a') as h5f:  # Open in append mode to modify the file
            if key in h5f:
                del h5f[key]

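A round-trip sketch of the deletion pattern: create two datasets, delete one in append mode the way `remove_h5_key` does, and check what survives. The file and key names are illustrative; this assumes `h5py` is installed.

```python
import os
import tempfile

import h5py
import numpy as np

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "arena.h5")
    with h5py.File(path, "w") as h5f:
        h5f.create_dataset("data", data=np.zeros(3))
        h5f.create_dataset("video", data=np.zeros(3))
    # Append mode lets us delete one key without touching the others.
    with h5py.File(path, "a") as h5f:
        if "data" in h5f:
            del h5f["data"]
    with h5py.File(path, "r") as h5f:
        keys = sorted(h5f.keys())
```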
remove_padding(array_list)

Remove padding from a list of 2D arrays.

Parameters:

    array_list : list of ndarrays, required
        List of 2D NumPy arrays to be processed.

Returns:

    out : list of ndarrays
        List of 2D NumPy arrays with the padding removed.

Examples:

>>> arr1 = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
>>> arr2 = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
>>> remove_padding([arr1, arr2])
[array([[1]]), array([[0]])]
Source code in src/cellects/image_analysis/network_functions.py
def remove_padding(array_list: list) -> list:
    """
    Remove padding from a list of 2D arrays.

    Parameters
    ----------
    array_list : list of ndarrays
        List of 2D NumPy arrays to be processed.

    Returns
    -------
    out : list of ndarrays
        List of 2D NumPy arrays with the padding removed.

    Examples
    --------
    >>> arr1 = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
    >>> arr2 = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
    >>> remove_padding([arr1, arr2])
    [array([[1]]), array([[0]])]
    """
    new_array_list = []
    for arr in array_list:
        new_array_list.append(un_pad(arr))
    return new_array_list

remove_small_loops(pad_skeleton, pad_distances=None)

Remove small loops from a skeletonized image.

This function identifies and removes small loops in a skeletonized image, returning the modified skeleton. If distance information is provided, it updates that as well.

Parameters:

    pad_skeleton : ndarray of uint8, required
        The skeletonized image with potential small loops.
    pad_distances : ndarray of float64, default None
        The distance map corresponding to the skeleton image.

Returns:

    out : ndarray of uint8 or tuple of (ndarray of uint8, ndarray of float64)
        If pad_distances is None, returns the modified skeleton. Otherwise, returns a tuple of the modified skeleton and updated distances.

Source code in src/cellects/image_analysis/network_functions.py
def remove_small_loops(pad_skeleton: NDArray[np.uint8], pad_distances: NDArray[np.float64]=None):
    """
    Remove small loops from a skeletonized image.

    This function identifies and removes small loops in a skeletonized image, returning the modified skeleton.
    If distance information is provided, it updates that as well.

    Parameters
    ----------
    pad_skeleton : ndarray of uint8
        The skeletonized image with potential small loops.
    pad_distances : ndarray of float64, optional
        The distance map corresponding to the skeleton image. Default is `None`.

    Returns
    -------
    out : ndarray of uint8 or tuple(ndarray of uint8, ndarray of float64)
        If `pad_distances` is None, returns the modified skeleton. Otherwise,
        returns a tuple of the modified skeleton and updated distances.
    """
    cnv4, cnv8 = get_neighbor_comparisons(pad_skeleton)
    # potential_tips = get_terminations_and_their_connected_nodes(pad_skeleton, cnv4, cnv8)

    cnv_diag_0 = CompareNeighborsWithValue(pad_skeleton, 0)
    cnv_diag_0.is_equal(0, and_itself=True)

    cnv4_false = CompareNeighborsWithValue(pad_skeleton, 4)
    cnv4_false.is_equal(1, and_itself=False)

    loop_centers = np.logical_and((cnv4_false.equal_neighbor_nb == 4), cnv_diag_0.equal_neighbor_nb > 2).astype(np.uint8)

    surrounding = cv2.dilate(loop_centers, kernel=square_33)
    surrounding -= loop_centers
    surrounding = surrounding * cnv8.equal_neighbor_nb

    # Every 2 can be replaced by 0 if the loop center becomes 1
    filled_loops = pad_skeleton.copy()
    filled_loops[surrounding == 2] = 0
    filled_loops += loop_centers

    new_pad_skeleton = morphology.skeletonize(filled_loops, method='lee')

    # Put the new pixels in pad_distances
    new_pixels = new_pad_skeleton * (1 - pad_skeleton)
    pad_skeleton = new_pad_skeleton.astype(np.uint8)
    if pad_distances is None:
        return pad_skeleton
    else:
        pad_distances[np.nonzero(new_pixels)] = np.nan # 2. # Put nearest value instead?
        pad_distances *= pad_skeleton
        # for yi, xi in zip(npY, npX): # yi, xi = npY[0], npX[0]
        #     distances[yi, xi] = 2.
        return pad_skeleton, pad_distances

rolling_window_segmentation(greyscale_image, possibly_filled_pixels, patch_size=(10, 10))

Perform rolling window segmentation on a greyscale image, using potentially filled pixels and a specified patch size.

The function divides the input greyscale image into overlapping patches defined by patch_size, and applies Otsu's thresholding method to each patch. The thresholds can be optionally refined using a minimization algorithm.

Parameters:

    greyscale_image : ndarray of uint8, required
        The input greyscale image to segment.
    possibly_filled_pixels : ndarray of uint8, required
        An array indicating which pixels are possibly filled.
    patch_size : tuple, default (10, 10)
        The dimensions of the patches to segment. Must be greater than (1, 1).

Returns:

    output : ndarray of uint8
        The segmented binary image where the network is marked as True.

Examples:

>>> greyscale_image = np.array([[1, 2, 1, 1], [1, 3, 4, 1], [2, 4, 3, 1], [2, 1, 2, 1]])
>>> possibly_filled_pixels = greyscale_image > 1
>>> patch_size = (2, 2)
>>> result = rolling_window_segmentation(greyscale_image, possibly_filled_pixels, patch_size)
>>> print(result)
[[0 1 0 0]
 [0 1 1 0]
 [0 1 1 0]
 [0 0 1 0]]
Source code in src/cellects/image_analysis/image_segmentation.py
def rolling_window_segmentation(greyscale_image: NDArray, possibly_filled_pixels: NDArray, patch_size: tuple=(10, 10)) -> NDArray[np.uint8]:
    """
    Perform rolling window segmentation on a greyscale image, using potentially filled pixels and a specified patch size.

    The function divides the input greyscale image into overlapping patches defined by `patch_size`,
    and applies Otsu's thresholding method to each patch. The thresholds can be optionally
    refined using a minimization algorithm.

    Parameters
    ----------
    greyscale_image : ndarray of uint8
        The input greyscale image to segment.
    possibly_filled_pixels : ndarray of uint8
        An array indicating which pixels are possibly filled.
    patch_size : tuple, optional
        The dimensions of the patches to segment. Default is (10, 10).
        Must be greater than (1, 1).

    Returns
    -------
    output : ndarray of uint8
        The segmented binary image where the network is marked as True.

    Examples
    --------
    >>> greyscale_image = np.array([[1, 2, 1, 1], [1, 3, 4, 1], [2, 4, 3, 1], [2, 1, 2, 1]])
    >>> possibly_filled_pixels = greyscale_image > 1
    >>> patch_size = (2, 2)
    >>> result = rolling_window_segmentation(greyscale_image, possibly_filled_pixels, patch_size)
    >>> print(result)
    [[0 1 0 0]
     [0 1 1 0]
     [0 1 1 0]
     [0 0 1 0]]
    """
    patch_centers = [
        np.floor(np.linspace(
            p // 2, s - p // 2, int(np.ceil(s / (p // 2))) - 1
        )).astype(int)
        for s, p in zip(greyscale_image.shape, patch_size)
    ]
    patch_centers = np.transpose(np.meshgrid(*patch_centers), (1, 2, 0)).reshape((-1, 2))

    patch_slices = [
        tuple(slice(c - p // 2, c + p // 2, 1)
              for c, p in zip(p_c, patch_size)) for p_c in patch_centers
    ]
    maximize_parameter = False

    network_patches = []
    patch_thresholds = []
    for patch in patch_slices:
        v = greyscale_image[patch] * possibly_filled_pixels[patch]
        if v.max() > 0 and np.ptp(v) > 0.5:
            t = threshold_otsu(v)

            if maximize_parameter:
                res = minimize(_network_perimeter, x0=t, args=(v,), method='Nelder-Mead')
                t = res.x[0]

            network_patches.append(v > t)
            patch_thresholds.append(t)
        else:
            network_patches.append(np.zeros_like(v))
            patch_thresholds.append(0)

    network_img = np.zeros(greyscale_image.shape, dtype=np.float64)
    count_img = np.zeros_like(greyscale_image)
    for patch, network_patch, t in zip(patch_slices, network_patches, patch_thresholds):
        network_img[patch] += network_patch
        count_img[patch] += np.ones_like(network_patch)

    # Safe in-place division: zeros remain where count_img == 0
    np.divide(network_img, count_img, out=network_img, where=count_img != 0)

    return (network_img > 0.5).astype(np.uint8)

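The patch grid above spaces centers half a patch apart, giving roughly 50%-overlapping windows whose per-patch votes are later averaged. A NumPy-only sketch of the center computation for an 8x8 image with 4x4 patches:

```python
import numpy as np

shape, patch_size = (8, 8), (4, 4)
# Same center formula as rolling_window_segmentation: centers every
# patch_size // 2 pixels along each axis.
patch_centers = [
    np.floor(np.linspace(p // 2, s - p // 2, int(np.ceil(s / (p // 2))) - 1)).astype(int)
    for s, p in zip(shape, patch_size)
]
# Cartesian product of the per-axis centers -> one (y, x) row per patch.
grid = np.transpose(np.meshgrid(*patch_centers), (1, 2, 0)).reshape((-1, 2))
```

Here each axis yields centers [2, 4, 6], so the grid holds nine patch centers in total.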
save_fig(img, full_path, cmap=None)

Save an image figure to a file with specified options.

This function creates a matplotlib figure from the given image, optionally applies a colormap, saves the figure to disk at high resolution, and closes the figure.

Parameters:

    img : array_like (M, N, 3), required
        Input image to be saved as a figure. Expected to be in RGB format.
    full_path : str, required
        The complete file path where the figure will be saved. Must include an extension (e.g., '.png', '.jpg').
    cmap : str or None, default None
        Colormap to be applied if the image should be displayed with a specific color map. If None, no colormap is applied.

Returns:

    None
        This function does not return any value. It saves the figure to disk at the specified location.

Raises:

    FileNotFoundError
        If the directory in full_path does not exist.

Examples:

>>> img = np.random.rand(100, 100, 3) * 255
>>> save_fig(img, 'test.png')

Creates and saves a figure from the random image to 'test.png'.

>>> save_fig(img, 'colored_test.png', cmap='viridis')

Creates and saves a figure from the random image with the 'viridis' colormap to 'colored_test.png'.
Source code in src/cellects/utils/load_display_save.py
def save_fig(img: NDArray, full_path, cmap=None):
    """
    Save an image figure to a file with specified options.

    This function creates a matplotlib figure from the given image,
    optionally applies a colormap, saves the figure to disk at high
    resolution, and closes the figure.

    Parameters
    ----------
    img : array_like (M, N, 3)
        Input image to be saved as a figure. Expected to be in RGB format.
    full_path : str
        The complete file path where the figure will be saved. Must include
        extension (e.g., '.png', '.jpg').
    cmap : str or None, optional
        Colormap to be applied if the image should be displayed with a specific
        color map. If `None`, no colormap is applied.

    Returns
    -------
    None

        This function does not return any value. It saves the figure to disk
        at the specified location.

    Raises
    ------
    FileNotFoundError
        If the directory in `full_path` does not exist.

    Examples
    --------
    >>> img = np.random.rand(100, 100, 3) * 255
    >>> save_fig(img, 'test.png')
    Creates and saves a figure from the random image to 'test.png'.

    >>> save_fig(img, 'colored_test.png', cmap='viridis')
    Creates and saves a figure from the random image with 'viridis' colormap
    to 'colored_test.png'.
    """
    sizes = img.shape[0] / 100,  img.shape[1] / 100
    fig = plt.figure(figsize=(sizes[0], sizes[1]))
    ax = fig.gca()
    if cmap is None:
        ax.imshow(img, interpolation="none")
    else:
        ax.imshow(img, cmap=cmap, interpolation="none")
    plt.axis('off')
    if np.min(img.shape) > 50:
        fig.tight_layout()

    fig.savefig(full_path, bbox_inches='tight', pad_inches=0., transparent=True, dpi=500)
    plt.close(fig)

scale_coordinates(coord, scale, dims)

Scale coordinates based on given scale factors and dimensions.

Parameters:

    coord : ndarray, required
        A 2x2 array of coordinates to be scaled.
    scale : tuple of float, required
        Scaling factors for the x and y coordinates, respectively.
    dims : tuple of int, required
        Maximum dimensions (height, width) for the scaled coordinates.

Returns:

    ndarray
        Scaled and rounded coordinates.
    int
        Minimum y-coordinate.
    int
        Maximum y-coordinate.
    int
        Minimum x-coordinate.
    int
        Maximum x-coordinate.

Examples:

>>> coord = np.array(((47, 38), (59, 37)))
>>> scale = (0.92, 0.87)
>>> dims = (245, 300, 3)
>>> scaled_coord, min_y, max_y, min_x, max_x = scale_coordinates(coord, scale, dims)
>>> scaled_coord
array([[43, 33],
       [54, 32]])
>>> min_y, max_y
(np.int64(43), np.int64(54))
>>> min_x, max_x
(np.int64(32), np.int64(33))

Notes

This function assumes that the input coordinates are in a specific format and will fail if not. The scaling factors should be positive.

Source code in src/cellects/utils/formulas.py
def scale_coordinates(coord: NDArray, scale: Tuple, dims: Tuple) -> Tuple[NDArray[np.int64], np.int64, np.int64, np.int64, np.int64]:
    """
    Scale coordinates based on given scale factors and dimensions.

    Parameters
    ----------
    coord : numpy.ndarray
        A 2x2 array of coordinates to be scaled.
    scale : tuple of float
        Scaling factors for the x and y coordinates, respectively.
    dims : tuple of int
        Maximum dimensions (height, width) for the scaled coordinates.

    Returns
    -------
    numpy.ndarray
        Scaled and rounded coordinates.
    int
        Minimum y-coordinate.
    int
        Maximum y-coordinate.
    int
        Minimum x-coordinate.
    int
        Maximum x-coordinate.

    Examples
    --------
    >>> coord = np.array(((47, 38), (59, 37)))
    >>> scale = (0.92, 0.87)
    >>> dims = (245, 300, 3)
    >>> scaled_coord, min_y, max_y, min_x, max_x = scale_coordinates(coord, scale, dims)
    >>> scaled_coord
    array([[43, 33],
           [54, 32]])
    >>> min_y, max_y
    (np.int64(43), np.int64(54))
    >>> min_x, max_x
    (np.int64(32), np.int64(33))

    Notes
    -----
    This function assumes that `coord` is a 2x2 array of (y, x) coordinate
    pairs and will fail otherwise. The scaling factors should be positive.
    """
    coord = np.array(((np.round(coord[0][0] * scale[0]), np.round(coord[0][1] * scale[1])),
                    (np.round(coord[1][0] * scale[0]), np.round(coord[1][1] * scale[1]))), dtype=np.int64)
    min_y = np.max((0, np.min(coord[:, 0])))
    max_y = np.min((dims[0], np.max(coord[:, 0])))
    min_x = np.max((0, np.min(coord[:, 1])))
    max_x = np.min((dims[1], np.max(coord[:, 1])))
    return coord, min_y, max_y, min_x, max_x

show(img, interactive=True, cmap=None, show=True)

Display an image using Matplotlib with optional interactivity and colormap.

Parameters:

Name Type Description Default
img ndarray

The image data to be displayed.

required
interactive bool

If True, turn on interactive mode. Default is True.

True
cmap str or Colormap

The colormap to be used. If None, the default colormap will be used.

None
show bool

If True, display the figure by calling fig.show(). Default is True.

True

Returns:

Name Type Description
fig Figure

The Matplotlib figure object containing the displayed image.

ax AxesSubplot

The axes on which the image is plotted.

Raises:

Type Description
ValueError

If cmap is not a recognized colormap name or object.

Notes

If interactive mode is enabled, the user can manipulate the figure window interactively.

Examples:

>>> img = np.random.rand(100, 50)
>>> fig, ax = show(img)
>>> print(fig)
<Figure size ... with ... Axes>
>>> fig, ax = show(img, interactive=False)
>>> print(fig)
<Figure size ... with ... Axes>
>>> fig, ax = show(img, cmap='gray')
>>> print(fig)
<Figure size ... with ... Axes>
Source code in src/cellects/utils/load_display_save.py
def show(img, interactive: bool=True, cmap=None, show: bool=True):
    """
    Display an image using Matplotlib with optional interactivity and colormap.

    Parameters
    ----------
    img : ndarray
        The image data to be displayed.
    interactive : bool, optional
        If ``True``, turn on interactive mode. Default is ``True``.
    cmap : str or Colormap, optional
        The colormap to be used. If ``None``, the default colormap will
        be used.
    show : bool, optional
        If ``True``, display the figure by calling ``fig.show()``.
        Default is ``True``.

    Returns
    -------
    fig : Figure
        The Matplotlib figure object containing the displayed image.
    ax : AxesSubplot
        The axes on which the image is plotted.

    Raises
    ------
    ValueError
        If `cmap` is not a recognized colormap name or object.

    Notes
    -----
    If interactive mode is enabled, the user can manipulate the figure
    window interactively.

    Examples
    --------
    >>> img = np.random.rand(100, 50)
    >>> fig, ax = show(img)
    >>> print(fig) # doctest: +SKIP
    <Figure size ... with ... Axes>

    >>> fig, ax = show(img, interactive=False)
    >>> print(fig) # doctest: +SKIP
    <Figure size ... with ... Axes>

    >>> fig, ax = show(img, cmap='gray')
    >>> print(fig) # doctest: +SKIP
    <Figure size ... with ... Axes>
    """
    if interactive:
        plt.ion()
    else:
        plt.ioff()
    sizes = img.shape[0] / 100,  img.shape[1] / 100
    fig = plt.figure(figsize=(sizes[1], sizes[0]))
    ax = fig.gca()
    if cmap is None:
        ax.imshow(img, interpolation="none", extent=(0, sizes[1], 0, sizes[0]))
    else:
        ax.imshow(img, cmap=cmap, interpolation="none", extent=(0, sizes[1], 0, sizes[0]))

    if show:
        fig.tight_layout()
        fig.show()

    return fig, ax
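The figure sizing used by `show` converts the image's pixel dimensions into inches by dividing by 100 (an implicit 100 dpi). A minimal standalone sketch of that rule, using the Agg backend so it runs headless:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the example runs without a display
import matplotlib.pyplot as plt
import numpy as np

img = np.random.rand(200, 300)  # 200 rows (height), 300 columns (width)
# Same sizing rule as show(): one inch per 100 pixels
height_in, width_in = img.shape[0] / 100, img.shape[1] / 100
fig = plt.figure(figsize=(width_in, height_in))
fig.gca().imshow(img, interpolation="none")
plt.close(fig)
```

Here `fig.get_size_inches()` yields (3.0, 2.0), so the rendered figure matches the image's aspect ratio.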

smallest_memory_array(array_object, array_type='uint')

Convert input data to the smallest possible NumPy array type that can hold it.

Parameters:

Name Type Description Default
array_object ndarray or list of lists

The input data to be converted.

required
array_type str

The type of NumPy data type to use ('uint').

'uint'

Returns:

Type Description
ndarray

A NumPy array of the smallest data type that can hold all values in array_object.

Examples:

>>> import numpy as np
>>> array = [[1, 2], [3, 4]]
>>> smallest_memory_array(array)
array([[1, 2],
       [3, 4]], dtype=uint8)
>>> array = [[1000, 2000], [3000, 4000]]
>>> smallest_memory_array(array)
array([[1000, 2000],
       [3000, 4000]], dtype=uint16)
>>> array = [[2**31, 2**32], [2**33, 2**34]]
>>> smallest_memory_array(array)
array([[ 2147483648,  4294967296],
       [ 8589934592, 17179869184]], dtype=uint64)
Source code in src/cellects/utils/utilitarian.py
def smallest_memory_array(array_object, array_type='uint') -> NDArray:
    """
    Convert input data to the smallest possible NumPy array type that can hold it.

    Parameters
    ----------
    array_object : ndarray or list of lists
        The input data to be converted.
    array_type : str, optional, default is 'uint'
        The type of NumPy data type to use ('uint').

    Returns
    -------
    ndarray
        A NumPy array of the smallest data type that can hold all values in `array_object`.

    Examples
    --------
    >>> import numpy as np
    >>> array = [[1, 2], [3, 4]]
    >>> smallest_memory_array(array)
    array([[1, 2],
           [3, 4]], dtype=uint8)

    >>> array = [[1000, 2000], [3000, 4000]]
    >>> smallest_memory_array(array)
    array([[1000, 2000],
           [3000, 4000]], dtype=uint16)

    >>> array = [[2**31, 2**32], [2**33, 2**34]]
    >>> smallest_memory_array(array)
    array([[ 2147483648,  4294967296],
           [ 8589934592, 17179869184]], dtype=uint64)
    """
    if isinstance(array_object, list):
        array_object = np.array(array_object)
    if isinstance(array_object, np.ndarray):
        value_max = array_object.max()
    else:
        if len(array_object[0]) > 0:
            value_max = np.max((array_object[0].max(), array_object[1].max()))
        else:
            value_max = 0

    if array_type == 'uint':
        if value_max <= np.iinfo(np.uint8).max:
            array_object = np.array(array_object, dtype=np.uint8)
        elif value_max <= np.iinfo(np.uint16).max:
            array_object = np.array(array_object, dtype=np.uint16)
        elif value_max <= np.iinfo(np.uint32).max:
            array_object = np.array(array_object, dtype=np.uint32)
        else:
            array_object = np.array(array_object, dtype=np.uint64)
    return array_object
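The dtype selection above can be exercised in isolation. This sketch (the helper name `pick_uint_dtype` is illustrative, not part of the module) picks the narrowest unsigned type whose range covers a maximum value, mirroring the ladder in `smallest_memory_array`:

```python
import numpy as np

def pick_uint_dtype(value_max):
    # First unsigned type whose representable range covers value_max,
    # mirroring the if/elif ladder in smallest_memory_array.
    for dt in (np.uint8, np.uint16, np.uint32):
        if value_max <= np.iinfo(dt).max:
            return dt
    return np.uint64

print(pick_uint_dtype(255).__name__)    # uint8
print(pick_uint_dtype(70000).__name__)  # uint32 (70000 exceeds the uint16 max, 65535)
```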

split_dict(c_space_dict)

Split a dictionary into two dictionaries based on specific criteria and return their keys.

Split the input dictionary c_space_dict into two dictionaries: one for items not ending with '2' and another where the key is truncated by removing its last character if it does end with '2'. Additionally, return the keys that have been processed.

Parameters:

Name Type Description Default
c_space_dict dict

The dictionary to be split. Expected keys are strings and values can be any type.

required

Returns:

Name Type Description
first_dict dict

Dictionary containing items from c_space_dict whose keys do not end with '2'.

second_dict dict

Dictionary containing items from c_space_dict whose keys end with '2', where the key is truncated by removing its last character.

c_spaces list

List of keys from c_space_dict that have been processed.

Notes

Entries whose key is 'logical', or whose values sum to zero in absolute value, are skipped (the key 'PCA' is always kept). Values are converted to numba typed Lists.

Examples:

>>> c_space_dict = {'key1': 10, 'key2': 20, 'logical': 30}
>>> first_dict, second_dict, c_spaces = split_dict(c_space_dict)
>>> print(first_dict)
{'key1': 10}
>>> print(second_dict)
{'key': 20}
>>> print(c_spaces)
['key1', 'key']
Source code in src/cellects/utils/utilitarian.py
def split_dict(c_space_dict: dict) -> Tuple[Dict, Dict, list]:
    """

    Split a dictionary into two dictionaries based on specific criteria and return their keys.

    Split the input dictionary `c_space_dict` into two dictionaries: one for items not
    ending with '2' and another where the key is truncated by removing its last
    character if it does end with '2'. Additionally, return the keys that have been
    processed.

    Parameters
    ----------
    c_space_dict : dict
        The dictionary to be split. Expected keys are strings and values can be any type.

    Returns
    -------
    first_dict : dict
        Dictionary containing items from `c_space_dict` whose keys do not end with '2'.
    second_dict : dict
        Dictionary containing items from `c_space_dict` whose keys end with '2',
        where the key is truncated by removing its last character.
    c_spaces : list
        List of keys from `c_space_dict` that have been processed.

    Notes
    -----
    Entries whose key is 'logical', or whose values sum to zero in absolute
    value, are skipped (the key 'PCA' is always kept). Values are converted
    to numba typed Lists.

    Examples
    --------
    >>> c_space_dict = {'key1': 10, 'key2': 20, 'logical': 30}
    >>> first_dict, second_dict, c_spaces = split_dict(c_space_dict)
    >>> print(first_dict)
    {'key1': 10}
    >>> print(second_dict)
    {'key': 20}
    >>> print(c_spaces)
    ['key1', 'key']

    """
    first_dict = Dict()
    second_dict = Dict()
    c_spaces = []
    for k, v in c_space_dict.items():
        if k == 'PCA' or k != 'logical' and np.absolute(v).sum() > 0:
            if k[-1] != '2':
                first_dict[k] = List(v)
                c_spaces.append(k)
            else:
                second_dict[k[:-1]] = List(v)
                c_spaces.append(k[:-1])
    return first_dict, second_dict, c_spaces
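The key-suffix logic can be illustrated with plain dictionaries. This sketch omits the numba typed containers and the 'logical'/zero-sum filtering of the real function; `split_by_suffix` is an illustrative name:

```python
def split_by_suffix(d):
    # Keys ending in '2' go to the second dict with the suffix stripped;
    # all other keys go to the first dict unchanged.
    first, second, keys = {}, {}, []
    for k, v in d.items():
        if k.endswith('2'):
            second[k[:-1]] = v
            keys.append(k[:-1])
        else:
            first[k] = v
            keys.append(k)
    return first, second, keys

print(split_by_suffix({'key1': 10, 'key2': 20}))
# ({'key1': 10}, {'key': 20}, ['key1', 'key'])
```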

sum_of_abs_differences(array1, array2)

Compute the sum of absolute differences between two arrays.

Parameters:

Name Type Description Default
array1 NDArray

The first input array.

required
array2 NDArray

The second input array.

required

Returns:

Type Description
int or float

Sum of absolute differences between elements of array1 and array2. The scalar type matches the inputs' dtype.

Examples:

>>> arr1 = np.array([1.2, 2.5, -3.7])
>>> arr2 = np.array([12, 25, -37])
>>> result = sum_of_abs_differences(arr1, arr2)
>>> print(result)
66.6
Source code in src/cellects/utils/formulas.py
@njit()
def sum_of_abs_differences(array1: NDArray, array2: NDArray):
    """
    Compute the sum of absolute differences between two arrays.

    Parameters
    ----------
    array1 : NDArray
        The first input array.
    array2 : NDArray
        The second input array.

    Returns
    -------
    int or float
        Sum of absolute differences between elements of `array1` and `array2`.
        The scalar type matches the inputs' dtype.

    Examples
    --------
    >>> arr1 = np.array([1.2, 2.5, -3.7])
    >>> arr2 = np.array([12, 25, -37])
    >>> result = sum_of_abs_differences(arr1, arr2)
    >>> print(result)
    66.6
    """
    return np.sum(np.absolute(array1 - array2))

to_uint8(an_array)

Convert an array to unsigned 8-bit integers.

Parameters:

Name Type Description Default
an_array ndarray

Input array to be converted. It can be of any numeric dtype.

required

Returns:

Type Description
ndarray

The input array rounded to the nearest integer and then cast to unsigned 8-bit integers.

Raises:

Type Description
TypeError

If an_array is not a ndarray.

Notes

This function uses Numba's @njit decorator for performance optimization. Rounding follows NumPy's round-half-to-even rule, and negative inputs are not clamped: they wrap around when cast to uint8.

Examples:

>>> result = to_uint8(np.array([1.2, 2.7, 200.6]))
>>> print(result)
[  1   3 201]
Source code in src/cellects/utils/formulas.py
@njit()
def to_uint8(an_array: NDArray):
    """
    Convert an array to unsigned 8-bit integers.

    Parameters
    ----------
    an_array : ndarray
        Input array to be converted. It can be of any numeric dtype.

    Returns
    -------
    ndarray
        The input array rounded to the nearest integer and then cast to
        unsigned 8-bit integers.

    Raises
    ------
    TypeError
        If `an_array` is not a ndarray.

    Notes
    -----
    This function uses Numba's `@njit` decorator for performance optimization.
    Rounding follows NumPy's round-half-to-even rule, and negative inputs are
    not clamped: they wrap around when cast to uint8.

    Examples
    --------
    >>> result = to_uint8(np.array([1.2, 2.7, 200.6]))
    >>> print(result)
    [  1   3 201]
    """
    out = np.empty_like(an_array)
    return np.round(an_array, 0, out).astype(np.uint8)
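A quick standalone check of NumPy's rounding semantics, which `to_uint8` inherits: `np.round` uses round-half-to-even, so 2.5 rounds down to 2 while 3.5 rounds up to 4:

```python
import numpy as np

vals = np.array([1.2, 2.5, 3.5, 200.6])
rounded = np.round(vals)           # round-half-to-even: [1., 2., 4., 201.]
as_uint8 = rounded.astype(np.uint8)
print(as_uint8.tolist())           # [1, 2, 4, 201]
```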

translate_dict(old_dict)

Translate a dictionary to a typed dictionary and filter out non-string values.

Parameters:

Name Type Description Default
old_dict dict

The input dictionary that may contain non-string values

required

Returns:

Name Type Description
numba_dict Dict

A typed dictionary containing only the items from old_dict where the value is not a string

Examples:

>>> result = translate_dict({'a': 1., 'b': 'string', 'c': 2.0})
>>> print(result)
{a: 1.0, c: 2.0}
Source code in src/cellects/utils/utilitarian.py
def translate_dict(old_dict: dict) -> Dict:
    """
    Translate a dictionary to a typed dictionary and filter out non-string values.

    Parameters
    ----------
    old_dict : dict
        The input dictionary that may contain non-string values

    Returns
    -------
    numba_dict : Dict
        A typed dictionary containing only the items from `old_dict` where the value is not a string

    Examples
    --------
    >>> result = translate_dict({'a': 1., 'b': 'string', 'c': 2.0})
    >>> print(result)
    {a: 1.0, c: 2.0}
    """
    numba_dict = Dict()
    for k, v in old_dict.items():
        if not isinstance(v, str):
            if isinstance(v, list):
                v = List(v)
            numba_dict[k] = v
    return numba_dict

un_pad(arr)

Unpads a 2D NumPy array by removing the first and last row/column.

Extended Description

Reduces the size of a 2D array by removing the outermost rows and columns. Useful for trimming boundaries added during padding operations.

Parameters:

Name Type Description Default
arr ndarray

Input 2D array to be unpadded. Shape (n,m) is expected.

required

Returns:

Type Description
ndarray

Unpadded 2D array with shape (n-2, m-2).

Examples:

>>> arr = np.array([[0, 0, 0],
...                 [0, 4, 0],
...                 [0, 0, 0]])
>>> un_pad(arr)
array([[4]])
Source code in src/cellects/image_analysis/network_functions.py
def un_pad(arr: NDArray) -> NDArray:
    """
    Unpads a 2D NumPy array by removing the first and last row/column.

    Extended Description
    --------------------
    Reduces the size of a 2D array by removing the outermost rows and columns.
    Useful for trimming boundaries added during padding operations.

    Parameters
    ----------
    arr : ndarray
        Input 2D array to be unpadded. Shape (n,m) is expected.

    Returns
    -------
    ndarray
        Unpadded 2D array with shape (n-2, m-2).

    Examples
    --------
    >>> arr = np.array([[0, 0, 0],
    ...                 [0, 4, 0],
    ...                 [0, 0, 0]])
    >>> un_pad(arr)
    array([[4]])
    """
    return arr[1:-1, 1:-1]
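un_pad is the inverse of a one-pixel np.pad, which the round-trip below verifies using the same slicing:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
padded = np.pad(a, pad_width=1)     # add one row/column of zeros on every side
restored = padded[1:-1, 1:-1]       # same slicing as un_pad
print(np.array_equal(restored, a))  # True
```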

video2numpy(vid_name, conversion_dict=None, background=None, background2=None, true_frame_width=None)

Convert a video file to a NumPy array.

Parameters:

Name Type Description Default
vid_name str

The path to the video file. Can be a standard video (e.g. .mp4), an .h5 archive, or a .tif/.tiff stack.

required
conversion_dict dict

Dictionary containing color space conversion parameters.

None
background NDArray

Background image for processing.

None
background2 NDArray

Second background image for processing.

None
true_frame_width int

True width of the frame. If specified and the current width is double this value, adjusts to true_frame_width.

None

Returns:

Type Description
NDArray or tuple of NDArrays

If conversion_dict is None, returns the video as a NumPy array. Otherwise, returns a tuple containing the original video and converted video.

Notes

This function uses OpenCV to read standard video files; .h5 archives and .tif stacks are loaded directly from disk.

Source code in src/cellects/utils/load_display_save.py
def video2numpy(vid_name: str, conversion_dict=None, background: NDArray=None, background2: NDArray=None,
                true_frame_width: int=None):
    """
    Convert a video file to a NumPy array.

    Parameters
    ----------
    vid_name : str
        The path to the video file. Can be a standard video (e.g. `.mp4`),
        an `.h5` archive, or a `.tif`/`.tiff` stack.
    conversion_dict : dict, optional
        Dictionary containing color space conversion parameters.
    background : NDArray, optional
        Background image for processing.
    background2 : NDArray, optional
        Second background image for processing.
    true_frame_width : int, optional
        True width of the frame. If specified and the current width is double this value,
        adjusts to true_frame_width.

    Returns
    -------
    NDArray or tuple of NDArrays
        If conversion_dict is None, returns the video as a NumPy array.
        Otherwise, returns a tuple containing the original video and converted video.

    Notes
    -----
    This function uses OpenCV to read standard video files; `.h5` archives
    and `.tif` stacks are loaded directly from disk.
    """
    h5_loading = vid_name[-3:] == ".h5"
    tif_loading = vid_name[-4:] == ".tif" or vid_name[-5:] == ".tiff"
    if h5_loading:
        video = read_h5(vid_name, 'video')
        dims = list(video.shape)
    elif tif_loading:
        video = read_tif_stack(vid_name)
        dims = list(video.shape)
    else:
        cap = cv2.VideoCapture(vid_name)
        dims = [int(cap.get(cv2.CAP_PROP_FRAME_COUNT)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)), int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))]

    if true_frame_width is not None:
        if dims[2] == 2 * true_frame_width:
            dims[2] = true_frame_width

    if conversion_dict is not None:
        first_dict, second_dict, c_spaces = split_dict(conversion_dict)
        converted_video = np.empty(dims[:3], dtype=np.uint8)
        if conversion_dict['logical'] == 'None':
            converted_video2 = np.empty(dims[:3], dtype=np.uint8)
        if h5_loading:
            for counter in np.arange(video.shape[0]):
                img = video[counter, :, :dims[2], :]
                greyscale_image, greyscale_image2, all_c_spaces, first_pc_vector = generate_color_space_combination(img, c_spaces,
                                                                                     first_dict, second_dict, background=background,background2=background2,
                                                                                     convert_to_uint8=True)
                converted_video[counter, ...] = greyscale_image
                if conversion_dict['logical'] == 'None':
                    converted_video2[counter, ...] = greyscale_image2
            video = video[:, :, :dims[2], ...]

    if not h5_loading and not tif_loading:
        # 2) Create empty arrays to store video analysis data
        video = np.empty((dims[0], dims[1], dims[2], 3), dtype=np.uint8)
        # 3) Read and convert the video frame by frame
        counter = 0
        while cap.isOpened() and counter < dims[0]:
            ret, frame = cap.read()
            frame = frame[:, :dims[2], ...]
            video[counter, ...] = frame
            if conversion_dict is not None:
                greyscale_image, greyscale_image2, all_c_spaces, first_pc_vector = generate_color_space_combination(frame, c_spaces,
                                                                                     first_dict, second_dict, background=background,background2=background2,
                                                                                     convert_to_uint8=True)
                converted_video[counter, ...] = greyscale_image
                if conversion_dict['logical'] == 'None':
                    converted_video2[counter, ...] = greyscale_image2
            counter += 1
        cap.release()

    if conversion_dict is None:
        return video
    else:
        return video, converted_video
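The extension dispatch at the top of the function can be sketched more robustly with pathlib suffixes, which also handles uppercase extensions (illustrative only; `loading_mode` is not part of the module):

```python
from pathlib import Path

def loading_mode(vid_name: str) -> str:
    # Classify the input by file extension, case-insensitively.
    suffix = Path(vid_name).suffix.lower()
    if suffix == '.h5':
        return 'h5'
    if suffix in ('.tif', '.tiff'):
        return 'tif'
    return 'opencv'  # fall back to cv2.VideoCapture for .mp4, .avi, ...

print(loading_mode('run.h5'), loading_mode('stack.TIFF'), loading_mode('clip.mp4'))
# h5 tif opencv
```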

video_writing_decision(arena_nb, im_or_vid, overwrite_unaltered_videos)

Determine whether to write videos based on existing files and user preferences.

Parameters:

Name Type Description Default
arena_nb int

Number of arenas to analyze.

required
im_or_vid int

Indicates whether the analysis should be performed on images or videos.

required
overwrite_unaltered_videos bool

Flag indicating whether existing unaltered videos should be overwritten.

required

Returns:

Type Description
bool

True if videos should be written, False otherwise.

Source code in src/cellects/utils/load_display_save.py
def video_writing_decision(arena_nb: int, im_or_vid: int, overwrite_unaltered_videos: bool) -> bool:
    """
    Determine whether to write videos based on existing files and user preferences.

    Parameters
    ----------
    arena_nb : int
        Number of arenas to analyze.
    im_or_vid : int
        Indicates whether the analysis should be performed on images or videos.
    overwrite_unaltered_videos : bool
        Flag indicating whether existing unaltered videos should be overwritten.

    Returns
    -------
    bool
        True if videos should be written, False otherwise.
    """
    look_for_existing_videos = insensitive_glob('ind_' + '*' + '.h5')
    there_already_are_videos = len(look_for_existing_videos) > 0
    if there_already_are_videos:
        all_files_contain_video = np.all(['video' in get_h5_keys(vid_name) for vid_name in look_for_existing_videos])
        there_already_are_videos = all_files_contain_video and there_already_are_videos
        if not there_already_are_videos:
            look_for_existing_videos = []
        there_already_are_videos = len(look_for_existing_videos) == arena_nb and there_already_are_videos
    logging.info(f"Video files (h5) found: {len(look_for_existing_videos)} for {arena_nb} arenas to analyze")
    do_write_videos = not im_or_vid and (not there_already_are_videos or (there_already_are_videos and overwrite_unaltered_videos))
    return do_write_videos
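The decision logic reduces to a single boolean expression, sketched here with an illustrative helper name (note that `not t or (t and o)` simplifies to `not t or o`):

```python
def should_write_videos(im_or_vid: int, complete_set_exists: bool, overwrite: bool) -> bool:
    # Write only for image inputs (im_or_vid == 0), and only when no complete
    # set of .h5 videos already exists or the user asked to overwrite it.
    return (not im_or_vid) and (not complete_set_exists or overwrite)

print(should_write_videos(0, True, False))  # False: videos exist, no overwrite
print(should_write_videos(0, True, True))   # True: overwrite requested
```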

vstack_h5_array(file_name, table, key='data')

Stack tables vertically in an HDF5 file.

This function either appends the input table to an existing dataset in the specified HDF5 file or creates a new dataset if the key doesn't exist.

Parameters:

Name Type Description Default
file_name str

Path to the HDF5 file.

required
table NDArray[uint8]

The table to be stacked vertically with the existing data.

required
key str

Key under which the dataset will be stored. Defaults to 'data'.

'data'

Examples:

>>> table = np.array([[1, 2], [3, 4]], dtype=np.uint8)
>>> vstack_h5_array('example.h5', table)
Source code in src/cellects/utils/load_display_save.py
def vstack_h5_array(file_name, table: NDArray, key: str="data"):
    """
    Stack tables vertically in an HDF5 file.

    This function either appends the input table to an existing dataset
    in the specified HDF5 file or creates a new dataset if the key doesn't exist.

    Parameters
    ----------
    file_name : str
        Path to the HDF5 file.
    table : NDArray[np.uint8]
        The table to be stacked vertically with the existing data.
    key : str, optional
        Key under which the dataset will be stored. Defaults to 'data'.

    Examples
    --------
    >>> table = np.array([[1, 2], [3, 4]], dtype=np.uint8)
    >>> vstack_h5_array('example.h5', table)
    """
    if os.path.exists(file_name):
        # Open the file in append mode
        with h5py.File(file_name, 'a') as h5f:
            if key in h5f:
                # Append to the existing dataset
                existing_data = h5f[key][:]
                new_data = np.vstack((existing_data, table))
                del h5f[key]
                h5f.create_dataset(key, data=new_data)
            else:
                # Create a new dataset if the key doesn't exist
                h5f.create_dataset(key, data=table)
    else:
        with h5py.File(file_name, 'w') as h5f:
            h5f.create_dataset(key, data=table)

write_h5(file_name, table, key='data')

Write a file using the h5 format.

Parameters:

Name Type Description Default
file_name str

Name of the file to write.

required
table NDArray

An array.

required
key str

The identifier of the data in this h5 file.

'data'
Source code in src/cellects/utils/load_display_save.py
def write_h5(file_name: str, table: NDArray, key: str="data"):
    """
    Write a file using the h5 format.

    Parameters
    ----------
    file_name : str
        Name of the file to write.
    table : NDArray
        An array.
    key : str
        The identifier of the data in this h5 file.
    """
    with h5py.File(file_name, 'a') as h5f:
        if key in h5f:
            del h5f[key]
        h5f.create_dataset(key, data=table)

write_video(np_array, vid_name, is_color=True, fps=40)

Write video from numpy array.

Save a numpy array as a video file. Supports .h5 format for saving raw numpy arrays and various video formats (mp4, avi, mkv) using OpenCV. For video formats, automatically selects a suitable codec and handles file extensions.

Parameters:

Name Type Description Default
np_array ndarray of uint8

Input array containing video frames.

required
vid_name str

Filename for the output video. Can include extension or not (defaults to .mp4).

required
is_color bool

Whether the video should be written in color. Defaults to True.

True
fps int

Frame rate for the video in frames per second. Defaults to 40.

40

Examples:

>>> video_array = np.random.randint(0, 255, size=(10, 100, 100, 3), dtype=np.uint8)
>>> write_video(video_array, 'output.mp4', True, 30)
Saves `video_array` as a color video 'output.mp4' with FPS 30.
>>> video_array = np.random.randint(0, 255, size=(10, 100, 100), dtype=np.uint8)
>>> write_video(video_array, 'raw_data.h5')
Saves `video_array` as a raw numpy array file without frame rate.
Source code in src/cellects/utils/load_display_save.py
def write_video(np_array: NDArray[np.uint8], vid_name: str, is_color: bool=True, fps: int=40):
    """
    Write video from numpy array.

    Save a numpy array as a video file. Supports .h5 format for saving raw
    numpy arrays and various video formats (mp4, avi, mkv) using OpenCV.
    For video formats, automatically selects a suitable codec and handles
    file extensions.

    Parameters
    ----------
    np_array : ndarray of uint8
        Input array containing video frames.
    vid_name : str
        Filename for the output video. Can include extension or not (defaults to .mp4).
    is_color : bool, optional
        Whether the video should be written in color. Defaults to True.
    fps : int, optional
        Frame rate for the video in frames per second. Defaults to 40.

    Examples
    --------
    >>> video_array = np.random.randint(0, 255, size=(10, 100, 100, 3), dtype=np.uint8)
    >>> write_video(video_array, 'output.mp4', True, 30)
    Saves `video_array` as a color video 'output.mp4' with FPS 30.
    >>> video_array = np.random.randint(0, 255, size=(10, 100, 100), dtype=np.uint8)
    >>> write_video(video_array, 'raw_data.h5')
    Saves `video_array` as a raw numpy array file without frame rate.
    """
    # h264 or h265 (mp4)
    # linux: fourcc = 0x00000021 -> don't forget to change it below as well
    if vid_name[-4:] == '.h5':
        write_h5(vid_name, np_array, 'video')
    else:
        valid_extensions = ['.mp4', '.avi', '.mkv']
        vid_ext = vid_name[-4:]
        if vid_ext not in valid_extensions:
            vid_name = vid_name[:-4]
            vid_name += '.mp4'
            vid_ext = '.mp4'
        if vid_ext =='.mp4':
            fourcc = 0x7634706d# VideoWriter_fourcc(*'FMP4') #(*'MP4V') (*'h265') (*'x264') (*'DIVX')
        else:
            fourcc = cv2.VideoWriter_fourcc('F', 'F', 'V', '1')  # lossless
        size = np_array.shape[2], np_array.shape[1]
        vid = cv2.VideoWriter(vid_name, fourcc, float(fps), tuple(size), is_color)
        for image_i in np.arange(np_array.shape[0]):
            image = np_array[image_i, ...]
            vid.write(image)
        vid.release()
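The magic constant 0x7634706d used above is the little-endian packing of the four-character code 'mp4v'. A small helper makes the correspondence explicit (a sketch mirroring what cv2.VideoWriter_fourcc computes, but requiring no cv2):

```python
def fourcc(code: str) -> int:
    # Pack a four-character codec code into the little-endian integer
    # that cv2.VideoWriter expects.
    assert len(code) == 4
    return sum(ord(c) << (8 * i) for i, c in enumerate(code))

print(hex(fourcc('mp4v')))  # 0x7634706d
```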

write_video_from_images(path_to_images='', vid_name='timelapse.mp4', fps=20, img_extension='', img_radical='', crop_coord=None)

Write a video file from a sequence of images.

Extended Description

This function creates a video from a list of image files in the specified directory. To prevent the most common issues:
- The image list is sorted
- mp4 files are removed
- Images that do not share the same orientation are rotated accordingly
- Images are cropped
- Color vs greyscale is determined automatically

After processing, images are compiled into a video file.

Parameters

path_to_images : str
    The directory where the images are located.
vid_name : str, optional
    The name of the output video file. Default is 'timelapse.mp4'.
fps : int, optional
    The frames per second for the video. Default is 20.
img_extension : str, optional
    The file extension of the images. Default is an empty string.
img_radical : str, optional
    The common prefix of the image filenames. Default is an empty string.
crop_coord : list, optional
    List containing four crop coordinates: [top, bot, left, right]. Default is None, which keeps the whole image.

Examples

>>> write_video_from_images('path/to/images', vid_name='timelapse.mp4')

This will create a video file named 'timelapse.mp4' from the images in the specified directory.

Source code in src/cellects/utils/load_display_save.py
def write_video_from_images(path_to_images='', vid_name: str='timelapse.mp4', fps: int=20, img_extension: str='',
                            img_radical: str='', crop_coord: list=None):
    """
    Write a video file from a sequence of images.

    Extended Description
    --------------------
    This function creates a video from a list of image files in the specified
    directory. To prevent the most common issues:
    - The image list is sorted
    - mp4 files are removed
    - Images that do not share the same orientation are rotated accordingly
    - Images are cropped
    - Color vs greyscale is determined automatically

    After processing, images are compiled into a video file.

    Parameters
    ----------
    path_to_images : str
        The directory where the images are located.
    vid_name : str, optional
        The name of the output video file. Default is 'timelapse.mp4'.
    fps : int, optional
        The frames per second for the video. Default is 20.
    img_extension : str, optional
        The file extension of the images. Default is an empty string.
    img_radical : str, optional
        The common prefix of the image filenames. Default is an empty string.
    crop_coord : list, optional
        List containing four crop coordinates: [top, bot, left, right].
        Default is None, which keeps the whole image.

    Examples
    --------
    >>> write_video_from_images('path/to/images', vid_name='timelapse.mp4')

    This will create a video file named 'timelapse.mp4' from the images
    in the specified directory.
    """
    if isinstance(path_to_images, str):
        path_to_images = Path(path_to_images)
    os.chdir(path_to_images)
    imgs = list_image_dir(path_to_images, img_extension, img_radical)
    is_raw = is_raw_image(imgs[0])
    image, prev_img = read_rotate_crop_and_reduce_image(imgs[0], crop_coord=crop_coord, raw_images=is_raw)
    is_landscape = image.shape[0] < image.shape[1]
    is_color: bool = True
    if len(image.shape) == 2:
        is_color = False
    video = np.zeros((len(imgs), image.shape[0], image.shape[1], 3), dtype=np.uint8)
    for i_, img in enumerate(imgs):
        video[i_], prev_img = read_rotate_crop_and_reduce_image(img, prev_img=prev_img, crop_coord=crop_coord,
                                                                raw_images=is_raw, is_landscape=is_landscape)
    write_video(video, vid_name=vid_name, is_color=is_color, fps=fps)

write_video_sets(img_list, sizes, vid_names, crop_coord, bounding_boxes, bunch_nb, video_nb_per_bunch, remaining, raw_images, is_landscape, use_list_of_vid, in_colors=False, reduce_image_dim=False, pathway='')

Write video sets from a list of images, applying cropping and optional rotation.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `img_list` | `list` | List of image file names. | *required* |
| `sizes` | `NDArray` | Array containing the dimensions of each video frame. | *required* |
| `vid_names` | `list` | List of video file names to be saved. | *required* |
| `crop_coord` | `dict` or `tuple` | Coordinates for cropping regions of interest in images/videos. | *required* |
| `bounding_boxes` | `tuple` | Bounding box coordinates to extract sub-images from the original images. | *required* |
| `bunch_nb` | `int` | Number of bunches to divide the videos into. | *required* |
| `video_nb_per_bunch` | `int` | Number of videos per bunch. | *required* |
| `remaining` | `int` | Number of videos remaining after the last full bunch. | *required* |
| `raw_images` | `bool` | Whether the images are in raw format. | *required* |
| `is_landscape` | `bool` | If true, rotate the images to landscape orientation before processing. | *required* |
| `use_list_of_vid` | `bool` | Flag indicating if the output should be a list of videos. | *required* |
| `in_colors` | `bool` | If true, process images with color information. | `False` |
| `reduce_image_dim` | `bool` | If true, reduce image dimensions. | `False` |
| `pathway` | `str` | Path where the videos should be saved. | `''` |
Source code in src/cellects/utils/load_display_save.py
def write_video_sets(img_list: list, sizes: NDArray, vid_names: list, crop_coord, bounding_boxes,
                      bunch_nb: int, video_nb_per_bunch: int, remaining: int,
                      raw_images: bool, is_landscape: bool, use_list_of_vid: bool,
                      in_colors: bool=False, reduce_image_dim: bool=False, pathway: str=""):
    """
    Write video sets from a list of images, applying cropping and optional rotation.

    Parameters
    ----------
    img_list : list
        List of image file names.
    sizes : NDArray
        Array containing the dimensions of each video frame.
    vid_names : list
        List of video file names to be saved.
    crop_coord : dict or tuple
        Coordinates for cropping regions of interest in images/videos.
    bounding_boxes : tuple
        Bounding box coordinates to extract sub-images from the original images.
    bunch_nb : int
        Number of bunches to divide the videos into.
    video_nb_per_bunch : int
        Number of videos per bunch.
    remaining : int
        Number of videos remaining after the last full bunch.
    raw_images : bool
        Whether the images are in raw format.
    is_landscape : bool
        If true, rotate the images to landscape orientation before processing.
    use_list_of_vid : bool
        Flag indicating if the output should be a list of videos.
    in_colors : bool, optional
        If true, process images with color information. Default is False.
    reduce_image_dim : bool, optional
        If true, reduce image dimensions. Default is False.
    pathway : str, optional
        Path where the videos should be saved. Default is an empty string.
    """
    top, bot, left, right = bounding_boxes
    for bunch in np.arange(bunch_nb):
        print(f'\nSaving the video bunch n°{bunch + 1} (tot={bunch_nb})...', end=' ')
        if bunch == (bunch_nb - 1) and remaining > 0:
            arenas = np.arange(bunch * video_nb_per_bunch, bunch * video_nb_per_bunch + remaining, dtype=np.uint32)
        else:
            arenas = np.arange(bunch * video_nb_per_bunch, (bunch + 1) * video_nb_per_bunch, dtype=np.uint32)
        if use_list_of_vid:
            video_bunch = [np.zeros(sizes[i, :], dtype=np.uint8) for i in arenas]
        else:
            video_bunch = np.zeros(np.append(sizes[0, :], len(arenas)), dtype=np.uint8)
        prev_img = None
        images_done = bunch * len(img_list)
        for image_i, image_name in enumerate(img_list):
            img = read_and_rotate(image_name, prev_img, raw_images, is_landscape, crop_coord)
            prev_img = img.copy()
            if not in_colors and reduce_image_dim:
                img = img[:, :, 0]

            for arena_i, arena_name in enumerate(arenas):
                # arena_i = 0; arena_name = arena[arena_i]
                sub_img = img[top[arena_name]:bot[arena_name], left[arena_name]:right[arena_name], ...]
                if use_list_of_vid:
                    video_bunch[arena_i][image_i, ...] = sub_img
                else:
                    if len(video_bunch.shape) == 5:
                        video_bunch[image_i, :, :, :, arena_i] = sub_img
                    else:
                        video_bunch[image_i, :, :, arena_i] = sub_img
        for arena_i, arena_name in enumerate(arenas):
            if use_list_of_vid:
                write_h5(pathway + vid_names[arena_name], video_bunch[arena_i], 'video')
            else:
                if len(video_bunch.shape) == 5:
                    write_h5(pathway + vid_names[arena_name], video_bunch[:, :, :, :, arena_i], 'video')
                else:
                    write_h5(pathway + vid_names[arena_name], video_bunch[:, :, :, arena_i], 'video')
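The bunch partitioning above splits the arenas into `bunch_nb` groups of `video_nb_per_bunch`, with the last bunch holding `remaining` leftovers when the total does not divide evenly. A minimal sketch of that indexing logic, assuming a hypothetical helper `bunch_parameters` (not part of Cellects) to derive the arguments from a total arena count:

```python
import numpy as np

# Hypothetical helper (not part of Cellects): derive the bunch_nb and
# remaining arguments expected by write_video_sets from a total arena count.
def bunch_parameters(total_videos, video_nb_per_bunch):
    bunch_nb = -(-total_videos // video_nb_per_bunch)  # ceiling division
    remaining = total_videos % video_nb_per_bunch      # 0 when it divides evenly
    return bunch_nb, remaining

total, per = 10, 4
bunch_nb, remaining = bunch_parameters(total, per)

# Reproduce the arena indexing used inside write_video_sets and check
# that every arena is covered exactly once across all bunches.
covered = []
for bunch in range(bunch_nb):
    if bunch == bunch_nb - 1 and remaining > 0:
        arenas = np.arange(bunch * per, bunch * per + remaining)
    else:
        arenas = np.arange(bunch * per, (bunch + 1) * per)
    covered.extend(arenas.tolist())
```

With `total=10` and `per=4` this yields three bunches of sizes 4, 4, and 2, covering arenas 0 through 9 with no gaps or duplicates.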

zoom_on_nonzero(binary_image, padding=2, return_coord=True)

Crops a binary image around non-zero elements with optional padding and returns either coordinates or cropped region.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `binary_image` | `NDArray` | 2D NumPy array containing binary values (0/1). | *required* |
| `padding` | `int` | Amount of zero-padding to add around the minimum bounding box. | `2` |
| `return_coord` | `bool` | If True, return slice coordinates instead of the cropped image. | `True` |

Returns:

| Type | Description |
|------|-------------|
| `tuple` of `int` | If `return_coord` is True: `(y_min, y_max, x_min, x_max)` slice coordinates, with the upper bounds exclusive. |
| `NDArray` | If `return_coord` is False: 2D binary array cropped to the non-zero region plus padding. |

Examples:

>>> img = np.zeros((10, 10))
>>> img[3:7, 4:6] = 1
>>> result = zoom_on_nonzero(img)
>>> print(tuple(map(int, result)))
(1, 9, 2, 8)
>>> cropped = zoom_on_nonzero(img, return_coord=False)
>>> print(cropped.shape)
(8, 6)
Notes
  • Raises ValueError if the input contains no non-zero elements.
  • Coordinate indices are 0-based; the upper bounds are exclusive and compatible with NumPy slicing syntax.
Source code in src/cellects/utils/load_display_save.py
def zoom_on_nonzero(binary_image:NDArray, padding: int = 2, return_coord: bool=True):
    """
    Crops a binary image around non-zero elements with optional padding and returns either coordinates or cropped region.

    Parameters
    ----------
    binary_image : NDArray
        2D NumPy array containing binary values (0/1)
    padding : int, default=2
        Amount of zero-padding to add around the minimum bounding box
    return_coord : bool, default=True
        If True, return slice coordinates instead of cropped image

    Returns
    -------
    tuple of int or NDArray
        If `return_coord` is True: (y_min, y_max, x_min, x_max) slice coordinates,
        with the upper bounds exclusive.
        If False: 2D binary array cropped to the non-zero region plus padding.

    Examples
    --------
    >>> img = np.zeros((10, 10))
    >>> img[3:7, 4:6] = 1
    >>> result = zoom_on_nonzero(img)
    >>> print(tuple(map(int, result)))
    (1, 9, 2, 8)
    >>> cropped = zoom_on_nonzero(img, return_coord=False)
    >>> print(cropped.shape)
    (8, 6)

    Notes
    -----
    - Raises ValueError if the input contains no non-zero elements.
    - Coordinate indices are 0-based; the upper bounds are exclusive and compatible with NumPy slicing syntax.
    """
    y, x = np.nonzero(binary_image)
    cy_min = np.max((0, y.min() - padding))
    cy_max = np.min((binary_image.shape[0], y.max() + padding + 1))
    cx_min = np.max((0, x.min() - padding))
    cx_max = np.min((binary_image.shape[1], x.max() + padding + 1))
    if return_coord:
        return cy_min, cy_max, cx_min, cx_max
    else:
        return binary_image[cy_min:cy_max, cx_min:cx_max]
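The `np.max((0, ...))`/`np.min((shape, ...))` calls clip the padded bounding box at the image borders, so a blob touching an edge never produces out-of-range slice indices. A short sketch reproducing that behavior with plain NumPy (the array and values here are illustrative, not from the library):

```python
import numpy as np

# A blob touching the top-right corner: padding would push the box
# outside the image, so the coordinates are clipped to valid bounds.
img = np.zeros((10, 10), dtype=np.uint8)
img[0:3, 8:10] = 1
y, x = np.nonzero(img)
padding = 2
cy_min = max(0, int(y.min()) - padding)                 # clipped to 0 (would be -2)
cy_max = min(img.shape[0], int(y.max()) + padding + 1)  # 5
cx_min = max(0, int(x.min()) - padding)                 # 6
cx_max = min(img.shape[1], int(x.max()) + padding + 1)  # clipped to 10 (would be 12)
cropped = img[cy_min:cy_max, cx_min:cx_max]
```

The cropped view keeps all six non-zero pixels while the padding is truncated on the clipped sides.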