
cellects.image_analysis.network_functions

Network detection and skeleton analysis for images of biological networks (such as Physarum polycephalum).

This module provides tools for analyzing network structures in grayscale images of biological networks. It implements vessel detection using Frangi/Sato filters, thresholding methods, and quality metrics to select optimal network representations. Additional functionality includes pseudopod detection, skeletonization, loop removal, edge identification, and network topology analysis through vertex/edge tracking.
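The detection pipeline described above can be sketched directly with scikit-image, which provides the Frangi/Sato filters, Otsu thresholding, and skeletonization the module builds on. This is a minimal illustration of the underlying steps, not the module's actual implementation (NetworkDetection adds parameter variation and quality metrics on top of logic like this):

```python
import numpy as np
from skimage.filters import frangi, threshold_otsu
from skimage.morphology import skeletonize

rng = np.random.default_rng(0)
image = rng.random((64, 64))  # stand-in for a grayscale network image

# Multi-scale vesselness enhancement (Sato's filter is a drop-in alternative)
vesselness = frangi(image)

# Otsu segmentation of the enhanced image, then a 1-pixel-wide skeleton
binary = vesselness > threshold_otsu(vesselness)
skeleton = skeletonize(binary)
```

In the module, several filter parameterizations are tried and scored; this sketch shows a single pass.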

Classes:

NetworkDetection : Detects vessels in images using multi-scale filters with parameter variations.
EdgeIdentification : Identifies edges between vertices in a skeletonized network structure.

Functions:

get_skeleton_and_widths : Computes medial axis skeleton and distance transforms for networks.
remove_small_loops : Eliminates small loops from skeletons while preserving topology.
get_neighbor_comparisons : Analyzes pixel connectivity patterns in skeletons.
get_vertices_and_tips_from_skeleton : Identifies junctions and endpoints in network skeletons.
merge_network_with_pseudopods : Combines detected network structures with identified pseudopods.
Notes
Notes

Uses morphological operations for network refinement, including hole closing, component labeling, and distance transform analysis. Implements both Otsu thresholding and rolling window segmentation methods for image processing workflows.
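The skeleton-plus-widths idea mentioned above (see get_skeleton_and_widths) pairs a medial-axis skeleton with a distance transform, so each skeleton pixel carries the local branch half-width. A hedged sketch using scikit-image, which exposes exactly this pairing (the module's own function may post-process further):

```python
import numpy as np
from skimage.morphology import medial_axis

# A simple binary mask with one horizontal "branch"
mask = np.zeros((20, 20), dtype=bool)
mask[8:12, 2:18] = True

# medial_axis returns the skeleton and the distance transform of the mask;
# masking the distances with the skeleton keeps widths only on skeleton pixels
skeleton, distances = medial_axis(mask, return_distance=True)
widths = distances * skeleton
```

These per-pixel widths are what later steps compare edge lengths against (e.g. when pruning tipped edges shorter than their branch width).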

EdgeIdentification

Identifies edges between vertices in a skeletonized network structure.

This class identifies edges within a skeleton structure based on provided skeleton and distance arrays. It performs various operations to refine and label edges, ultimately producing a fully identified network.
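The vertex/tip detection this class starts from can be illustrated with a simple 8-neighbor count: a skeleton tip has exactly one neighbor, a junction three or more. This is only a sketch of the idea behind get_vertices_and_tips_from_skeleton (the real implementation may differ); note how naive counting flags a small cluster of junction pixels around the true branch point, which is precisely why the class handles "vertex clusters" explicitly:

```python
import numpy as np
from scipy.ndimage import convolve

# A T-shaped skeleton: horizontal branch met by a vertical one at (3, 3)
skel = np.zeros((7, 7), dtype=np.uint8)
skel[3, 1:6] = 1
skel[0:3, 3] = 1

# Count 8-connected neighbors of each skeleton pixel
kernel = np.ones((3, 3), dtype=np.uint8)
kernel[1, 1] = 0
neighbors = convolve(skel, kernel, mode='constant') * skel

tips = neighbors == 1       # endpoints: 3 here
junctions = neighbors >= 3  # branch points: a 4-pixel cluster around (3, 3)
```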

Source code in src/cellects/image_analysis/network_functions.py
class EdgeIdentification:
    """Initialize the class with skeleton and distance arrays.

    This class is used to identify edges within a skeleton structure based on
    provided skeleton and distance arrays. It performs various operations to
    refine and label edges, ultimately producing a fully identified network.
    """
    def __init__(self, pad_skeleton: NDArray[np.uint8], pad_distances: NDArray[np.float64], t: int=0):
        """
        Initialize the class with skeleton and distance arrays.

        Parameters
        ----------
        pad_skeleton : ndarray of uint8
            Array representing the skeleton to pad.
        pad_distances : ndarray of float64
            Array representing distances corresponding to the skeleton.

        Attributes
        ----------
        remaining_vertices : None
            Remaining vertices. Initialized as `None`.
        vertices : None
            Vertices. Initialized as `None`.
        growing_vertices : None
            Growing vertices. Initialized as `None`.
        im_shape : tuple of ints
            Shape of the skeleton array.
        """
        self.pad_skeleton = pad_skeleton
        self.pad_distances = pad_distances
        self.t = t
        self.remaining_vertices = None
        self.vertices = None
        self.growing_vertices = None
        self.im_shape = pad_skeleton.shape

    def run_edge_identification(self):
        """
        Run the edge identification process.

        This method orchestrates a series of steps to identify and label edges
        within the graph structure. Each step handles a specific aspect of edge
        identification, ultimately leading to a clearer and more refined edge network.

        Steps involved:
        1. Get vertices and tips coordinates.
        2. Identify tipped edges.
        3. Remove tipped edges smaller than branch width.
        4. Label tipped edges and their vertices.
        5. Label edges connected with vertex clusters.
        6. Label edges connecting vertex clusters.
        7. Label edges from known vertices iteratively.
        8. Label edges looping on 1 vertex.
        9. Clear areas with 1 or 2 unidentified pixels.
        10. Clear edge duplicates.
        11. Clear vertices connecting 2 edges.
        """
        self.get_vertices_and_tips_coord()
        self.get_tipped_edges()
        self.remove_tipped_edge_smaller_than_branch_width()
        self.label_tipped_edges_and_their_vertices()
        self.check_vertex_existence()
        self.label_edges_connected_with_vertex_clusters()
        self.label_edges_connecting_vertex_clusters()
        self.label_edges_from_known_vertices_iteratively()
        self.label_edges_looping_on_1_vertex()
        self.clear_areas_of_1_or_2_unidentified_pixels()
        self.clear_edge_duplicates()
        self.clear_vertices_connecting_2_edges()

    def get_vertices_and_tips_coord(self):
        """Process skeleton data to extract non-tip vertices and tip coordinates.

        This method processes the skeleton stored in `self.pad_skeleton` by first
        extracting all vertices and tips. It then separates these into branch points
        (non-tip vertices) and specific tip coordinates using internal processing.

        Attributes
        ----------
        self.non_tip_vertices : array-like
            Coordinates of non-tip (branch) vertices.
        self.tips_coord : array-like
            Coordinates of identified tips in the skeleton.
        """
        pad_vertices, pad_tips = get_vertices_and_tips_from_skeleton(self.pad_skeleton)
        self.non_tip_vertices, self.tips_coord = get_branches_and_tips_coord(pad_vertices, pad_tips)

    def get_tipped_edges(self):
        """
        Extract skeleton edges connecting branching points and tips.

        Ensures that the skeleton of the network consists of a single connected
        component and identifies all edges that are connected to a tip.

        Attributes
        ----------
        pad_skeleton : ndarray of bool, modified
            Boolean mask representing the pruned skeleton after isolating the largest connected component.
        vertices_branching_tips : ndarray of int, shape (N, 2)
            Coordinates of branching points that connect to tips in the skeleton structure.
        edge_lengths : ndarray of float, shape (M,)
            Lengths of edges connecting non-tip vertices to identified tip locations.
        edge_pix_coord : list of array of int
            Pixel coordinates for each edge path between connected skeleton elements.

        """
        self.pad_skeleton = keep_one_connected_component(self.pad_skeleton)
        self.vertices_branching_tips, self.edge_lengths, self.edge_pix_coord = _find_closest_vertices(self.pad_skeleton,
                                                                                        self.non_tip_vertices,
                                                                                        self.tips_coord[:, :2])

    def remove_tipped_edge_smaller_than_branch_width(self):
        """Remove very short edges from the skeleton.

        This method focuses on edges connecting tips. When too short, they are considered noise and
        removed from the skeleton and distance matrices. These edges are considered too short when their length is
        smaller than the width of the nearest network branch (information included in pad_distances).
        This method also updates internal data structures (skeleton, edge coordinates, vertex/tip positions)
        accordingly through pixel-wise analysis and connectivity checks.
        """
        # Identify edges that are smaller than the width of the branch it is attached to
        tipped_edges_to_remove = np.zeros(self.edge_lengths.shape[0], dtype=bool)
        # connecting_vertices_to_remove = np.zeros(self.vertices_branching_tips.shape[0], dtype=bool)
        branches_to_remove = np.zeros(self.non_tip_vertices.shape[0], dtype=bool)
        new_edge_pix_coord = []
        remaining_tipped_edges_nb = 0
        for i in range(len(self.edge_lengths)): # i = 3142 #1096 # 974 # 222
            Y, X = self.vertices_branching_tips[i, 0], self.vertices_branching_tips[i, 1]
            edge_bool = self.edge_pix_coord[:, 2] == i + 1
            eY, eX = self.edge_pix_coord[edge_bool, 0], self.edge_pix_coord[edge_bool, 1]
            if np.nanmax(self.pad_distances[(Y - 1): (Y + 2), (X - 1): (X + 2)]) >= self.edge_lengths[i]:
                tipped_edges_to_remove[i] = True
                # Remove the edge
                self.pad_skeleton[eY, eX] = 0
                # Remove the tip
                self.pad_skeleton[self.tips_coord[i, 0], self.tips_coord[i, 1]] = 0

                # Remove the coordinates corresponding to that edge
                self.edge_pix_coord = np.delete(self.edge_pix_coord, edge_bool, 0)

                # Check whether the connecting vertex remains a vertex or not
                pad_sub_skeleton = np.pad(self.pad_skeleton[(Y - 2): (Y + 3), (X - 2): (X + 3)], [(1,), (1,)],
                                          mode='constant')
                sub_vertices, sub_tips = get_vertices_and_tips_from_skeleton(pad_sub_skeleton)
                # If the vertex does not connect at least 3 edges anymore, remove its vertex label
                if sub_vertices[3, 3] == 0:
                    vertex_to_remove = np.nonzero(np.logical_and(self.non_tip_vertices[:, 0] == Y, self.non_tip_vertices[:, 1] == X))[0]
                    branches_to_remove[vertex_to_remove] = True
                # If that pixel became a tip connected to another vertex, remove it from the skeleton
                if sub_tips[3, 3]:
                    if sub_vertices[2:5, 2:5].sum() > 1:
                        self.pad_skeleton[Y, X] = 0
                        self.edge_pix_coord = np.delete(self.edge_pix_coord, np.all(self.edge_pix_coord[:, :2] == [Y, X], axis=1), 0)
                        vertex_to_remove = np.nonzero(np.logical_and(self.non_tip_vertices[:, 0] == Y, self.non_tip_vertices[:, 1] == X))[0]
                        branches_to_remove[vertex_to_remove] = True
            else:
                remaining_tipped_edges_nb += 1
                new_edge_pix_coord.append(np.stack((eY, eX, np.repeat(remaining_tipped_edges_nb, len(eY))), axis=1))

        # Check whether any extra connected components are 1 pixel in size; if so,
        # they were neighbors of removed tips and not necessary for the skeleton
        nb, sh = cv2.connectedComponents(self.pad_skeleton)
        if nb > 2:
            logging.error("Removing small tipped edges split the skeleton")
            # for i in range(2, nb):
            #     excedent = sh == i
            #     if (excedent).sum() == 1:
            #         self.pad_skeleton[excedent] = 0

        # Remove in distances the pixels removed in skeleton:
        self.pad_distances *= self.pad_skeleton

        # update edge_pix_coord
        if len(new_edge_pix_coord) > 0:
            self.edge_pix_coord = np.vstack(new_edge_pix_coord)

        # # Remove tips connected to very small edges
        # self.tips_coord = np.delete(self.tips_coord, tipped_edges_to_remove, 0)
        # # Add corresponding edge names
        # self.tips_coord = np.hstack((self.tips_coord, np.arange(1, len(self.tips_coord) + 1)[:, None]))

        # # Within all branching (non-tip) vertices, keep those that did not lose their vertex status because of the edge removal
        # self.non_tip_vertices = np.delete(self.non_tip_vertices, branches_to_remove, 0)

        # # Get the branching vertices who kept their typped edge
        # self.vertices_branching_tips = np.delete(self.vertices_branching_tips, tipped_edges_to_remove, 0)

        # Within all branching (non-tip) vertices, keep those that do not connect a tipped edge.
        # v_branching_tips_in_branching_v = find_common_coord(self.non_tip_vertices, self.vertices_branching_tips[:, :2])
        # self.remaining_vertices = np.delete(self.non_tip_vertices, v_branching_tips_in_branching_v, 0)
        # ordered_v_coord = np.vstack((self.tips_coord[:, :2], self.vertices_branching_tips[:, :2], self.remaining_vertices))

        # tips = self.tips_coord
        # branching_any_edge = self.non_tip_vertices
        # branching_typped_edges = self.vertices_branching_tips
        # branching_no_typped_edges = self.remaining_vertices

        self.get_vertices_and_tips_coord()
        self.get_tipped_edges()

    def label_tipped_edges_and_their_vertices(self):
        """Label edges connecting tip vertices to branching vertices and assign unique labels to all relevant vertices.

        Processes vertex coordinates by stacking tips, vertices branching from tips, and remaining non-tip vertices.
        Assigns unique sequential identifiers to these vertices in a new array. Constructs an array of edge-label information,
        where each row contains the edge label (starting at 1), corresponding tip label, and connected vertex label.

        Attributes
        ----------
        tip_number : int
            The number of tip coordinates available in `tips_coord`.

        ordered_v_coord : ndarray of float
            Stack of unique vertex coordinates ordered by: tips first, vertices branching tips second, non-tip vertices third.

        numbered_vertices : ndarray of uint32
            2D array where each coordinate position is labeled with a sequential integer (starting at 1) based on the order in `ordered_v_coord`.

        edges_labels : ndarray of uint32
            Array of shape (n_edges, 3). Each row contains:
            - Edge label (sequential from 1 to n_edges)
            - Label of the tip vertex for that edge.
            - Label of the vertex branching the tip.

        vertices_branching_tips : ndarray of float
            Unique coordinates of vertices directly connected to tips after removing duplicates.
        """
        self.tip_number = self.tips_coord.shape[0]

        # Stack vertex coordinates in that order: 1. Tips, 2. Vertices branching tips, 3. All remaining vertices
        ordered_v_coord = np.vstack((self.tips_coord[:, :2], self.vertices_branching_tips[:, :2], self.non_tip_vertices))
        ordered_v_coord = np.unique(ordered_v_coord, axis=0)

        # Create arrays to store edges and vertices labels
        self.numbered_vertices = np.zeros(self.im_shape, dtype=np.uint32)
        self.numbered_vertices[ordered_v_coord[:, 0], ordered_v_coord[:, 1]] = np.arange(1, ordered_v_coord.shape[0] + 1)
        self.vertices = None
        self.vertex_index_map = {}
        for idx, (y, x) in enumerate(ordered_v_coord):
            self.vertex_index_map[idx + 1] = tuple((np.uint32(y), np.uint32(x)))

        # Name edges from 1 to the number of edges connecting tips and set the vertices labels from all tips to their connected vertices:
        self.edges_labels = np.zeros((self.tip_number, 3), dtype=np.uint32)
        # edge label:
        self.edges_labels[:, 0] = np.arange(self.tip_number) + 1
        # tip label:
        self.edges_labels[:, 1] = self.numbered_vertices[self.tips_coord[:, 0], self.tips_coord[:, 1]]
        # vertex branching tip label:
        self.edges_labels[:, 2] = self.numbered_vertices[self.vertices_branching_tips[:, 0], self.vertices_branching_tips[:, 1]]

        # Remove duplicates in vertices_branching_tips
        self.vertices_branching_tips = np.unique(self.vertices_branching_tips[:, :2], axis=0)

    def check_vertex_existence(self):
        """Handle skeletons without tips or branching vertices.

        If the skeleton is a pure loop, seed one vertex on it and register
        the remaining loop pixels as a single edge.
        """
        if self.tips_coord.shape[0] == 0 and self.non_tip_vertices.shape[0] == 0:
            loop_coord = np.nonzero(self.pad_skeleton)
            start = 1
            end = 1
            vertex_coord = loop_coord[0][0], loop_coord[1][0]
            self.numbered_vertices[vertex_coord[0], vertex_coord[1]] = 1
            self.vertex_index_map[1] = vertex_coord
            self.non_tip_vertices = np.array(vertex_coord)[None, :]
            new_edge_lengths = len(loop_coord[0]) - 1
            new_edge_pix_coord = np.transpose(np.vstack(((loop_coord[0][1:], loop_coord[1][1:], np.zeros(new_edge_lengths, dtype=np.int32)))))
            self.edge_pix_coord = np.zeros((0, 3), dtype=np.int32)
            self._update_edge_data(start, end, new_edge_lengths, new_edge_pix_coord)

    def label_edges_connected_with_vertex_clusters(self):
        """
        Identify edges connected to touching vertices by processing vertex clusters.

        This function processes the skeleton to identify edges connecting vertices
        that are part of touching clusters. It creates a cropped version of the skeleton
        by removing already detected edges and their tips, then iterates through vertex
        clusters to explore and identify nearby edges.
        """
        # I.1. Identify edges connected to touching vertices:
        # First, create another version of these arrays, where we remove every already detected edge and their tips
        cropped_skeleton = self.pad_skeleton.copy()
        cropped_skeleton[self.edge_pix_coord[:, 0], self.edge_pix_coord[:, 1]] = 0
        cropped_skeleton[self.tips_coord[:, 0], self.tips_coord[:, 1]] = 0

        # non_tip_vertices does not need to be updated yet, because it only contains verified branching vertices
        cropped_non_tip_vertices = self.non_tip_vertices.copy()

        self.new_level_vertices = None
        # The problem with vertex_to_vertex_connexion is that since they are not separated by zeros,
        # they always attract each other instead of exploring other paths.
        # To fix this, we loop over each vertex group to:
        # 1. Add one edge per inter-vertex connexion inside the group
        # 2. Remove all except one, and loop as many times as necessary.
        # Inside that second loop, we explore and identify every edge nearby.
        # Find every vertex_to_vertex_connexion
        v_cluster_nb, self.v_cluster_lab, self.v_cluster_stats, vgc = cv2.connectedComponentsWithStats(
            (self.numbered_vertices > 0).astype(np.uint8), connectivity=8)
        if v_cluster_nb > 1:  # label 0 is the background, so clusters exist only when nb > 1
            max_v_nb = np.max(self.v_cluster_stats[1:, 4])
            cropped_skeleton_list = []
            starting_vertices_list = []
            for v_nb in range(2, max_v_nb + 1):
                labels = np.nonzero(self.v_cluster_stats[:, 4] == v_nb)[0]
                coord_list = []
                for lab in labels:  # lab=labels[0]
                    coord_list.append(np.nonzero(self.v_cluster_lab == lab))
                for iter in range(v_nb):
                    for lab_ in range(labels.shape[0]): # lab=labels[0]
                        cs = cropped_skeleton.copy()
                        sv = []
                        v_c = coord_list[lab_]
                        # Save the current coordinate in the starting vertices array of this iteration
                        sv.append([v_c[0][iter], v_c[1][iter]])
                        # Remove one vertex coordinate to keep it from cs
                        v_y, v_x = np.delete(v_c[0], iter), np.delete(v_c[1], iter)
                        cs[v_y, v_x] = 0
                        cropped_skeleton_list.append(cs)
                        starting_vertices_list.append(np.array(sv))
            for cropped_skeleton, starting_vertices in zip(cropped_skeleton_list, starting_vertices_list):
                _, _ = self._identify_edges_connecting_a_vertex_list(cropped_skeleton, cropped_non_tip_vertices, starting_vertices)

    def label_edges_connecting_vertex_clusters(self):
        """
        Label edges connecting vertex clusters.

        This method identifies the connections between connected vertices within
        vertex clusters and labels these edges. It uses the previously found connected
        vertices, creates an image of the connections, and then identifies
        and labels the edges between these touching vertices.
        """
        # I.2. Identify the connexions between connected vertices:
        all_connected_vertices = np.nonzero(self.v_cluster_stats[:, 4] > 1)[0][1:]
        all_con_v_im = np.zeros_like(self.pad_skeleton)
        for v_group in all_connected_vertices:
            all_con_v_im[self.v_cluster_lab == v_group] = 1
        cropped_skeleton = all_con_v_im
        self.vertex_clusters_coord = np.transpose(np.array(np.nonzero(cropped_skeleton)))
        _, _ = self._identify_edges_connecting_a_vertex_list(cropped_skeleton, self.vertex_clusters_coord, self.vertex_clusters_coord)
        # self.edges_labels
        del self.v_cluster_stats
        del self.v_cluster_lab

    def label_edges_from_known_vertices_iteratively(self):
        """
        Label edges iteratively from known vertices.

        This method labels edges in an iterative process starting from
        known vertices. It handles the removal of detected edges and
        updates the skeleton accordingly, to avoid detecting edges twice.
        """
        # II/ Identify all remaining edges
        if self.new_level_vertices is not None:
            starting_vertices_coord = np.vstack((self.new_level_vertices[:, :2], self.vertices_branching_tips))
            starting_vertices_coord = np.unique(starting_vertices_coord, axis=0)
        else:
            # We start from the vertices connecting tips
            starting_vertices_coord = self.vertices_branching_tips.copy()
        # Remove the detected edges from cropped_skeleton:
        cropped_skeleton = self.pad_skeleton.copy()
        cropped_skeleton[self.edge_pix_coord[:, 0], self.edge_pix_coord[:, 1]] = 0
        cropped_skeleton[self.tips_coord[:, 0], self.tips_coord[:, 1]] = 0
        cropped_skeleton[self.vertex_clusters_coord[:, 0], self.vertex_clusters_coord[:, 1]] = 0

        # Reinitialize cropped_non_tip_vertices to browse all vertices except tips and groups
        cropped_non_tip_vertices = self.non_tip_vertices.copy()
        cropped_non_tip_vertices = remove_coordinates(cropped_non_tip_vertices, self.vertex_clusters_coord)
        del self.vertex_clusters_coord
        remaining_v = cropped_non_tip_vertices.shape[0] + 1
        while remaining_v > cropped_non_tip_vertices.shape[0]:
            remaining_v = cropped_non_tip_vertices.shape[0]
            cropped_skeleton, cropped_non_tip_vertices = self._identify_edges_connecting_a_vertex_list(cropped_skeleton, cropped_non_tip_vertices, starting_vertices_coord)
            if self.new_level_vertices is not None:
                starting_vertices_coord = np.unique(self.new_level_vertices[:, :2], axis=0)

    def label_edges_looping_on_1_vertex(self):
        """
        Identify and handle edges that form loops around a single vertex.
        This method processes the skeleton image to find looping edges and updates
        the edge data structure accordingly.
        """
        self.identified = np.zeros_like(self.numbered_vertices)
        self.identified[self.edge_pix_coord[:, 0], self.edge_pix_coord[:, 1]] = 1
        self.identified[self.non_tip_vertices[:, 0], self.non_tip_vertices[:, 1]] = 1
        self.identified[self.tips_coord[:, 0], self.tips_coord[:, 1]] = 1
        unidentified = (1 - self.identified) * self.pad_skeleton

        # Find out the remaining non-identified pixels
        nb, self.unidentified_shapes, self.unidentified_stats, ce = cv2.connectedComponentsWithStats(unidentified.astype(np.uint8))
        # Handle the cases where edges are loops over only one vertex
        looping_edges = np.nonzero(self.unidentified_stats[:, 4] > 2)[0][1:]
        for loop_i in looping_edges: # loop_i = looping_edges[0] loop_i=11 #  zoom_on_nonzero(unique_vertices_im, return_coord=False)
            edge_i = (self.unidentified_shapes == loop_i).astype(np.uint8)
            dil_edge_i = cv2.dilate(edge_i, square_33)
            unique_vertices_im = self.numbered_vertices.copy()
            unique_vertices_im[self.tips_coord[:, 0], self.tips_coord[:, 1]] = 0
            unique_vertices_im = dil_edge_i * unique_vertices_im
            unique_vertices = np.unique(unique_vertices_im)
            unique_vertices = unique_vertices[unique_vertices > 0]
            v_nb = len(unique_vertices)
            new_edge_lengths = edge_i.sum()
            new_edge_pix_coord = np.transpose(np.vstack((np.nonzero(edge_i))))
            new_edge_pix_coord = np.hstack((new_edge_pix_coord, np.repeat(1, new_edge_pix_coord.shape[0])[:, None]))
            if v_nb == 1:
                start, end = unique_vertices[0], unique_vertices[0]
                self._update_edge_data(start, end, new_edge_lengths, new_edge_pix_coord)
            elif v_nb == 2:
                # The edge loops around a group of connected vertices
                start, end = unique_vertices[0], unique_vertices[1]
                self._update_edge_data(start, end, new_edge_lengths, new_edge_pix_coord)
                # conn_v_nb, conn_v = cv2.connectedComponents((unique_vertices_im > 0).astype(np.uint8))
                # if len(unique_vertices) == 2 and conn_v_nb == 2:
            elif v_nb > 2: # The question is: How to choose two vertices so that they link all missing pixels?
                # 1. Find every edge pixel connected to these vertices
                vertex_connected_pixels = np.nonzero(cv2.dilate((unique_vertices_im > 0).astype(np.uint8), square_33) * edge_i)
                # 2. Find the most distant pair of these
                pix1, pix2 = get_min_or_max_euclidean_pair(vertex_connected_pixels, "max")
                # 3. The two best vertices are the two nearest to these two most distant edge pixels
                dist_to_pix1 = np.zeros(v_nb, np.float64)
                dist_to_pix2 = np.zeros(v_nb, np.float64)
                for _i, v_i in enumerate(unique_vertices):
                    v_coord = self.vertex_index_map[v_i]
                    dist_to_pix1[_i] = eudist(pix1, v_coord)
                    dist_to_pix2[_i] = eudist(pix2, v_coord)
                start, end = unique_vertices[np.argmin(dist_to_pix1)], unique_vertices[np.argmin(dist_to_pix2)]
                self._update_edge_data(start, end, new_edge_lengths, new_edge_pix_coord)
            else:
                logging.error(f"t={self.t}, One long edge is not identified: i={loop_i} of length={edge_i.sum()} close to {len(unique_vertices)} vertices.")
        self.identified[self.edge_pix_coord[:, 0], self.edge_pix_coord[:, 1]] = 1

    def clear_areas_of_1_or_2_unidentified_pixels(self):
        """Removes 1 or 2 pixel size non-identified areas from the skeleton.

        This function checks whether small non-identified areas (1 or 2 pixels)
        can be removed without breaking the skeleton structure. It performs
        a series of operations to ensure only safe removals are made, logging
        errors if the final skeleton is not fully connected or if some unidentified pixels remain.
        """
        # Check whether the 1 or 2 pixel size unidentified areas can be removed without breaking the skeleton
        one_pix = np.nonzero(self.unidentified_stats[:, 4] <= 2)[0] # == 1)[0]
        cutting_removal = []
        for pix_i in one_pix: #pix_i=one_pix[0]
            skel_copy = self.pad_skeleton.copy()
            y1, y2, x1, x2 = self.unidentified_stats[pix_i, 1], self.unidentified_stats[pix_i, 1] + self.unidentified_stats[pix_i, 3], self.unidentified_stats[pix_i, 0], self.unidentified_stats[pix_i, 0] + self.unidentified_stats[pix_i, 2]
            skel_copy[y1:y2, x1:x2][self.unidentified_shapes[y1:y2, x1:x2] == pix_i] = 0
            nb1, sh1 = cv2.connectedComponents(skel_copy.astype(np.uint8), connectivity=8)
            if nb1 > 2:
                cutting_removal.append(pix_i)
            else:
                self.pad_skeleton[y1:y2, x1:x2][self.unidentified_shapes[y1:y2, x1:x2] == pix_i] = 0
        if len(cutting_removal) > 0:
            logging.error(f"t={self.t}, These pixels break the skeleton when removed: {cutting_removal}")
        if (self.identified > 0).sum() != self.pad_skeleton.sum():
            logging.error(f"t={self.t}, Proportion of identified pixels in the skeleton: {(self.identified > 0).sum() / self.pad_skeleton.sum()}")
        self.pad_distances *= self.pad_skeleton
        del self.identified
        del self.unidentified_stats
        del self.unidentified_shapes


    def _identify_edges_connecting_a_vertex_list(self, cropped_skeleton: NDArray[np.uint8], cropped_non_tip_vertices: NDArray, starting_vertices_coord: NDArray) -> Tuple[NDArray[np.uint8], NDArray]:
        """Identify edges connecting a list of vertices within a cropped skeleton.

        This function iteratively connects the closest vertices from starting_vertices_coord to their nearest neighbors,
        updating the skeleton and removing already connected vertices until no new connections can be made or
        a maximum number of connections is reached.

        Parameters
        ----------
        cropped_skeleton : ndarray of uint8
            A binary skeleton image where skeletal pixels are marked as 1.
        cropped_non_tip_vertices : ndarray of int
            Coordinates of non-tip vertices in the cropped skeleton.
        starting_vertices_coord : ndarray of int
            Coordinates of vertices from which to find connections.

        Returns
        -------
        cropped_skeleton : ndarray of uint8
            Updated skeleton with edges marked as 0.
        cropped_non_tip_vertices : ndarray of int
            Updated list of non-tip vertices after removing those that have been connected.
        """
        explored_connexions_per_vertex = 0  # number of closest-vertex connexions explored per vertex (capped at 5)
        new_connexions = True
        while new_connexions and explored_connexions_per_vertex < 5 and np.any(cropped_non_tip_vertices) and np.any(starting_vertices_coord):

            explored_connexions_per_vertex += 1
            # 1. Find the ith closest vertex to each focal vertex
            ending_vertices_coord, new_edge_lengths, new_edge_pix_coord = _find_closest_vertices(
                cropped_skeleton, cropped_non_tip_vertices, starting_vertices_coord)
            if np.isnan(new_edge_lengths).sum() + (new_edge_lengths == 0).sum() == new_edge_lengths.shape[0]:
                new_connexions = False
            else:
                # In new_edge_lengths, zeros are duplicates and nan are lone vertices (from starting_vertices_coord)
                # Find out which starting_vertices_coord should be kept and which one should be used to save edges
                no_new_connexion = np.isnan(new_edge_lengths)
                no_found_connexion = np.logical_or(no_new_connexion, new_edge_lengths == 0)
                found_connexion = np.logical_not(no_found_connexion)

                # Any vertex_to_vertex_connexions must be analyzed only once. We remove them with the non-connectable vertices
                vertex_to_vertex_connexions = new_edge_lengths == 1

                # Save edge data
                start = self.numbered_vertices[
                    starting_vertices_coord[found_connexion, 0], starting_vertices_coord[found_connexion, 1]]
                end = self.numbered_vertices[
                    ending_vertices_coord[found_connexion, 0], ending_vertices_coord[found_connexion, 1]]
                new_edge_lengths = new_edge_lengths[found_connexion]
                self._update_edge_data(start, end, new_edge_lengths, new_edge_pix_coord)

                no_new_connexion = np.logical_or(no_new_connexion, vertex_to_vertex_connexions)
                vertices_to_crop = starting_vertices_coord[no_new_connexion, :]

                # Remove non-connectable and connected_vertices from:
                cropped_non_tip_vertices = remove_coordinates(cropped_non_tip_vertices, vertices_to_crop)
                starting_vertices_coord = remove_coordinates(starting_vertices_coord, vertices_to_crop)

                if new_edge_pix_coord.shape[0] > 0:
                    # Update cropped_skeleton to not identify each edge more than once
                    cropped_skeleton[new_edge_pix_coord[:, 0], new_edge_pix_coord[:, 1]] = 0
                # And the starting vertices that cannot connect anymore
                cropped_skeleton[vertices_to_crop[:, 0], vertices_to_crop[:, 1]] = 0

                if self.new_level_vertices is None:
                    self.new_level_vertices = ending_vertices_coord[found_connexion, :].copy()
                else:
                    self.new_level_vertices = np.vstack((self.new_level_vertices, ending_vertices_coord[found_connexion, :]))

        return cropped_skeleton, cropped_non_tip_vertices

    def _update_edge_data(self, start, end, new_edge_lengths: NDArray, new_edge_pix_coord: NDArray):
        """
        Update edge data by expanding existing arrays with new edges.

        This method updates the internal edge data structures (lengths,
        labels, and pixel coordinates) by appending new edges.

        Parameters
        ----------
        start : int or ndarray of int
            The starting vertex label(s) for the new edges.
        end : int or ndarray of int
            The ending vertex label(s) for the new edges.
        new_edge_lengths : ndarray of float
            The lengths of the new edges to be added.
        new_edge_pix_coord : ndarray of float
            The pixel coordinates of the new edges.

        Attributes
        ----------
        edge_lengths : ndarray of float
            The lengths of all edges.
        edges_labels : ndarray of int
            The labels for each edge (start and end vertices).
        edge_pix_coord : ndarray of float
            The pixel coordinates for all edges.
        """
        if isinstance(start, np.ndarray):
            end_idx = len(start)
            self.edge_lengths = np.concatenate((self.edge_lengths, new_edge_lengths))
        else:
            end_idx = 1
            self.edge_lengths = np.append(self.edge_lengths, new_edge_lengths)
        start_idx = self.edges_labels.shape[0]
        new_edges = np.zeros((end_idx, 3), dtype=np.uint32)
        new_edges[:, 0] = np.arange(start_idx, start_idx + end_idx) + 1  # edge label
        new_edges[:, 1] = start  # starting vertex label
        new_edges[:, 2] = end  # ending vertex label
        self.edges_labels = np.vstack((self.edges_labels, new_edges))
        # Add the new edge coord
        if new_edge_pix_coord.shape[0] > 0:
            # Add the new edge pixel coord
            new_edge_pix_coord[:, 2] += start_idx
            self.edge_pix_coord = np.vstack((self.edge_pix_coord, new_edge_pix_coord))

    def clear_edge_duplicates(self):
        """
        Remove duplicate edges by checking vertices and coordinates.

        This method identifies and removes duplicate edges based on their vertex labels
        and pixel coordinates. It scans through the edge attributes, compares them,
        and removes duplicates if they are found.
        """
        edges_to_remove = []
        duplicates = find_duplicates_coord(np.vstack((self.edges_labels[:, 1:], self.edges_labels[:, :0:-1])))
        duplicates = np.logical_or(duplicates[:len(duplicates)//2], duplicates[len(duplicates)//2:])
        for v in self.edges_labels[duplicates, 1:]: #v = self.edges_labels[duplicates, 1:][4]
            edges_bool = np.logical_or(np.all(self.edges_labels[:, 1:] == v, axis=1), np.all(self.edges_labels[:, 1:] == v[::-1], axis=1))
            edge_labs = self.edges_labels[edges_bool, 0]
            for edge_i in range(0, len(edge_labs) - 1):  #  edge_i = 0
                edge_i_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_labs[edge_i], :2]
                for edge_j in range(edge_i + 1, len(edge_labs)):  #  edge_j = 1
                    edge_j_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_labs[edge_j], :2]
                    if np.array_equal(edge_i_coord, edge_j_coord):
                        edges_to_remove.append(edge_labs[edge_j])
        edges_to_remove = np.unique(edges_to_remove)
        for edge in edges_to_remove:
            edge_bool = self.edges_labels[:, 0] != edge
            self.edges_labels = self.edges_labels[edge_bool, :]
            self.edge_lengths = self.edge_lengths[edge_bool]
            self.edge_pix_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] != edge, :]


    def clear_vertices_connecting_2_edges(self):
        """
        Remove vertices connecting exactly two edges and update edge-related attributes.

        This method identifies vertices that are connected to exactly 2 edges,
        renames edges, updates edge lengths and vertex coordinates accordingly.
        It also removes the corresponding vertices from non-tip vertices list.
        """
        v_labels, v_counts = np.unique(self.edges_labels[:, 1:], return_counts=True)
        vertices2 = v_labels[v_counts == 2]
        for vertex2 in vertices2:  # vertex2 = vertices2[0]
            edge_indices = np.nonzero(self.edges_labels[:, 1:] == vertex2)[0]
            edge_names = [self.edges_labels[edge_indices[0], 0], self.edges_labels[edge_indices[1], 0]]
            v_names = np.concatenate((self.edges_labels[edge_indices[0], 1:], self.edges_labels[edge_indices[1], 1:]))
            v_names = v_names[v_names != vertex2]
            if len(v_names) > 0: # Otherwise it's a vertex between a normal edge and a loop
                kept_edge = int(self.edge_lengths[edge_indices[1]] >= self.edge_lengths[edge_indices[0]])
                # Rename the removed edge by the kept edge name in pix_coord:
                self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_names[1 - kept_edge], 2] = edge_names[kept_edge]
                # Add the removed edge length to the kept edge length (minus 1, so the shared vertex is counted only once)
                self.edge_lengths[self.edges_labels[:, 0] == edge_names[kept_edge]] += self.edge_lengths[self.edges_labels[:, 0] == edge_names[1 - kept_edge]] - 1
                # Remove the corresponding edge length from the list
                self.edge_lengths = self.edge_lengths[self.edges_labels[:, 0] != edge_names[1 - kept_edge]]
                # Rename the vertex of the kept edge in edges_labels
                self.edges_labels[self.edges_labels[:, 0] == edge_names[kept_edge], 1:] = v_names[1 - kept_edge], v_names[kept_edge]
                # Remove the removed edge from the edges_labels array
                self.edges_labels = self.edges_labels[self.edges_labels[:, 0] != edge_names[1 - kept_edge], :]
                # vY, vX = np.nonzero(self.numbered_vertices == vertex2)
                # v_idx = np.nonzero(np.all(self.non_tip_vertices == [vY[0], vX[0]], axis=1))
                vY, vX = self.vertex_index_map[vertex2]
                v_idx = np.nonzero(np.all(self.non_tip_vertices == [vY, vX], axis=1))
                self.non_tip_vertices = np.delete(self.non_tip_vertices, v_idx, axis=0)
        # Sometimes, clearing vertices connecting 2 edges can create edge duplicates, so:
        self.clear_edge_duplicates()

    def _remove_padding(self):
        """
        Removes padding from various coordinate arrays.

        This method adjusts the coordinates of edge pixels, tips,
        and non-tip vertices by subtracting 1 from their x and y values.
        It also removes padding from the skeleton, distances, and vertices
        using the `remove_padding` function.
        """
        self.edge_pix_coord[:, :2] -= 1
        self.tips_coord[:, :2] -= 1
        self.non_tip_vertices[:, :2] -= 1
        del self.vertex_index_map
        self.skeleton, self.distances, self.vertices = remove_padding(
            [self.pad_skeleton, self.pad_distances, self.numbered_vertices])


    def make_vertex_table(self, origin_contours: NDArray[np.uint8]=None, growing_areas: NDArray=None):
        """
        Generate a table for the vertices.

        This method constructs a 2D NumPy array holding information about all
        vertices. Each row corresponds to one vertex, identified either by its
        coordinates in `self.tips_coord` or in `self.non_tip_vertices`. The
        array also records, for each vertex, whether it is a tip, a food
        vertex, lies in a growing area, or belongs to a cluster of touching
        vertices.

        Parameters
        ----------
        origin_contours : ndarray of uint8, optional
            Binary map to identify food vertices. Default is `None`.
        growing_areas : ndarray, optional
            Binary map to identify growing regions. Default is `None`.

        Notes
        -----
            The method updates the instance attribute `self.vertex_table` with
            the generated vertex information.
        """
        if self.vertices is None:
            self._remove_padding()
        self.vertex_table = np.zeros((self.tips_coord.shape[0] + self.non_tip_vertices.shape[0], 6), dtype=self.vertices.dtype)
        self.vertex_table[:self.tips_coord.shape[0], :2] = self.tips_coord
        self.vertex_table[self.tips_coord.shape[0]:, :2] = self.non_tip_vertices
        self.vertex_table[:self.tips_coord.shape[0], 2] = self.vertices[self.tips_coord[:, 0], self.tips_coord[:, 1]]
        self.vertex_table[self.tips_coord.shape[0]:, 2] = self.vertices[self.non_tip_vertices[:, 0], self.non_tip_vertices[:, 1]]
        self.vertex_table[:self.tips_coord.shape[0], 3] = 1
        if origin_contours is not None:
            food_vertices = self.vertices[origin_contours > 0]
            food_vertices = food_vertices[food_vertices > 0]
            self.vertex_table[np.isin(self.vertex_table[:, 2], food_vertices), 4] = 1

        if growing_areas is not None and growing_areas.shape[1] > 0:
            # growing = np.unique(self.vertices * growing_areas)[1:]
            growing = np.unique(self.vertices[growing_areas[0], growing_areas[1]])
            growing = growing[growing > 0]
            if len(growing) > 0:
                growing = np.isin(self.vertex_table[:, 2], growing)
                self.vertex_table[growing, 4] = 2

        nb, sh, stats, cent = cv2.connectedComponentsWithStats((self.vertices > 0).astype(np.uint8))
        for i, v_i in enumerate(np.nonzero(stats[:, 4] > 1)[0][1:]):
            v_labs = self.vertices[sh == v_i]
            for v_lab in v_labs: # v_lab = v_labs[0]
                self.vertex_table[self.vertex_table[:, 2] == v_lab, 5] = 1


    def make_edge_table(self, greyscale: NDArray[np.uint8], compute_BC: bool=False):
        """
        Generate edge table with length and average intensity information.

        This method processes the vertex coordinates and computes the length,
        average width, and average intensity of each edge. When `compute_BC`
        is True, it also computes the edge betweenness centrality of each
        vertex pair.

        Parameters
        ----------
        greyscale : ndarray of uint8
            Grayscale image.
        compute_BC : bool, optional
            Whether to compute edge betweenness centrality. Default is `False`.
        """
        if self.vertices is None:
            self._remove_padding()
        self.edge_table = np.zeros((self.edges_labels.shape[0], 7), float) # edge_id, vertex1, vertex2, length, average_width, int, BC
        self.edge_table[:, :3] = self.edges_labels[:, :]
        self.edge_table[:, 3] = self.edge_lengths
        for idx, edge_lab in enumerate(self.edges_labels[:, 0]):
            edge_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_lab, :]
            pix_widths = self.distances[edge_coord[:, 0], edge_coord[:, 1]]
            v_id = self.edges_labels[self.edges_labels[:, 0] == edge_lab, 1:][0]
            v1_coord = self.vertex_table[self.vertex_table[:, 2] == v_id[0], :2][0]#
            v2_coord = self.vertex_table[self.vertex_table[:, 2] == v_id[1], :2][0]#
            v1_width, v2_width = self.distances[v1_coord[0], v1_coord[1]], self.distances[v2_coord[0], v2_coord[1]]

            if not np.isnan(v1_width):
                pix_widths = np.append(pix_widths, v1_width)
            if not np.isnan(v2_width):
                pix_widths = np.append(pix_widths, v2_width)
            if pix_widths.size > 0:
                self.edge_table[idx, 4] = pix_widths.mean()
            else:
                self.edge_table[idx, 4] = np.nan
            pix_ints = greyscale[edge_coord[:, 0], edge_coord[:, 1]]
            v1_int, v2_int = greyscale[v1_coord[0], v1_coord[1]], greyscale[v2_coord[0], v2_coord[1]]
            pix_ints = np.append(pix_ints, (v1_int, v2_int))
            self.edge_table[idx, 5] = pix_ints.mean()

        if compute_BC:
            G = nx.from_edgelist(self.edges_labels[:, 1:])
            e_BC = nx.edge_betweenness_centrality(G, seed=0)
            self.BC_net = np.zeros_like(self.distances)
            for v, k in e_BC.items(): # v=(81, 80)
                v_bool = np.logical_or(self.vertex_table[:, 2] == v[0], self.vertex_table[:, 2] == v[1])
                full_coord = self.vertex_table[v_bool, :2]
                edges_bool = np.logical_or(np.all(self.edges_labels[:, 1:] == v[::-1], axis=1),
                                           np.all(self.edges_labels[:, 1:] == v, axis=1))
                edge_labs = self.edges_labels[edges_bool, 0]
                if len(edge_labs) == 1:
                    edge_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_labs, :2]
                    full_coord = np.concatenate((full_coord, edge_coord))
                    self.BC_net[full_coord[:, 0], full_coord[:, 1]] = k
                    self.edge_table[self.edge_table[:, 0] == edge_labs, 6] = k
                elif len(edge_labs) > 1:
                    edge_coord0 = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_labs[0], :2]
                    for edge_i in range(len(edge_labs)): #  edge_i=1
                        edge_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_labs[edge_i], :2]
                        self.edge_table[self.edge_table[:, 0] == edge_labs[edge_i], 6] = k
                        full_coord = np.concatenate((full_coord, edge_coord))
                        self.BC_net[full_coord[:, 0], full_coord[:, 1]] = k
                        if edge_i > 0 and np.array_equal(edge_coord0, edge_coord):
                            logging.error(f"There still is two identical edges: {edge_labs} of len: {len(edge_coord)} linking vertices {v}")
                            break

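The per-edge width averaging at the end of the listing above (column 4 of `edge_table` in `make_edge_table`) can be sketched with a toy numpy example; the arrays below are hypothetical stand-ins for `self.distances` and one edge's rows of `self.edge_pix_coord`:

```python
import numpy as np

# Hypothetical distance transform (local half-widths) of a tiny network.
distances = np.array([
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 2.0, 1.5],
    [0.0, 0.0, 0.0, 0.0],
])
edge_coord = np.array([[1, 1], [1, 2], [1, 3]])  # (y, x) pixels of one edge

# Average width along the edge, as make_edge_table does for column 4.
pix_widths = distances[edge_coord[:, 0], edge_coord[:, 1]]
mean_width = pix_widths.mean()
print(mean_width)  # 1.5
```

In the real method the widths of the two end vertices are also appended to `pix_widths` (when they are not NaN) before averaging.
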
__init__(pad_skeleton, pad_distances, t=0)

Initialize the class with skeleton and distance arrays.

Parameters:

Name Type Description Default
pad_skeleton ndarray of uint8

Array representing the skeleton to pad.

required
pad_distances ndarray of float64

Array representing distances corresponding to the skeleton.

required

Attributes:

Name Type Description
remaining_vertices None

Remaining vertices. Initialized as None.

vertices None

Vertices. Initialized as None.

growing_vertices None

Growing vertices. Initialized as None.

im_shape tuple of ints

Shape of the skeleton array.

Source code in src/cellects/image_analysis/network_functions.py
def __init__(self, pad_skeleton: NDArray[np.uint8], pad_distances: NDArray[np.float64], t: int=0):
    """
    Initialize the class with skeleton and distance arrays.

    Parameters
    ----------
    pad_skeleton : ndarray of uint8
        Array representing the skeleton to pad.
    pad_distances : ndarray of float64
        Array representing distances corresponding to the skeleton.

    Attributes
    ----------
    remaining_vertices : None
        Remaining vertices. Initialized as `None`.
    vertices : None
        Vertices. Initialized as `None`.
    growing_vertices : None
        Growing vertices. Initialized as `None`.
    im_shape : tuple of ints
        Shape of the skeleton array.
    """
    self.pad_skeleton = pad_skeleton
    self.pad_distances = pad_distances
    self.t = t
    self.remaining_vertices = None
    self.vertices = None
    self.growing_vertices = None
    self.im_shape = pad_skeleton.shape

clear_areas_of_1_or_2_unidentified_pixels()

Removes 1- or 2-pixel non-identified areas from the skeleton.

This function checks whether small non-identified areas (1 or 2 pixels) can be removed without breaking the skeleton structure. It performs a series of operations to ensure only safe removals are made, logging errors if the final skeleton is not fully connected or if some unidentified pixels remain.

Source code in src/cellects/image_analysis/network_functions.py
def clear_areas_of_1_or_2_unidentified_pixels(self):
    """Removes 1 or 2 pixel size non-identified areas from the skeleton.

    This function checks whether small non-identified areas (1 or 2 pixels)
    can be removed without breaking the skeleton structure. It performs
    a series of operations to ensure only safe removals are made, logging
    errors if the final skeleton is not fully connected or if some unidentified pixels remain.
    """
    # Check whether the 1 or 2 pixel size non-identified areas can be removed without breaking the skel
    one_pix = np.nonzero(self.unidentified_stats[:, 4] <= 2)[0]
    cutting_removal = []
    for pix_i in one_pix: #pix_i=one_pix[0]
        skel_copy = self.pad_skeleton.copy()
        y1, y2, x1, x2 = self.unidentified_stats[pix_i, 1], self.unidentified_stats[pix_i, 1] + self.unidentified_stats[pix_i, 3], self.unidentified_stats[pix_i, 0], self.unidentified_stats[pix_i, 0] + self.unidentified_stats[pix_i, 2]
        skel_copy[y1:y2, x1:x2][self.unidentified_shapes[y1:y2, x1:x2] == pix_i] = 0
        nb1, sh1 = cv2.connectedComponents(skel_copy.astype(np.uint8), connectivity=8)
        if nb1 > 2:
            cutting_removal.append(pix_i)
        else:
            self.pad_skeleton[y1:y2, x1:x2][self.unidentified_shapes[y1:y2, x1:x2] == pix_i] = 0
    if len(cutting_removal) > 0:
        logging.error(f"t={self.t}, These pixels break the skeleton when removed: {cutting_removal}")
    if (self.identified > 0).sum() != self.pad_skeleton.sum():
        logging.error(f"t={self.t}, Proportion of identified pixels in the skeleton: {(self.identified > 0).sum() / self.pad_skeleton.sum()}")
    self.pad_distances *= self.pad_skeleton
    del self.identified
    del self.unidentified_stats
    del self.unidentified_shapes
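The connectivity check at the heart of this method can be sketched with a pure-numpy/stdlib stand-in for `cv2.connectedComponents` (whose count includes the background label, which is why the source tests `nb1 > 2`); the skeleton below is hypothetical:

```python
import numpy as np
from collections import deque

def count_components(skel):
    """Count 8-connected foreground components in a binary array
    (stand-in for cv2.connectedComponents, minus the background label)."""
    seen = np.zeros(skel.shape, dtype=bool)
    n = 0
    for y0, x0 in zip(*np.nonzero(skel)):
        if seen[y0, x0]:
            continue
        n += 1
        queue = deque([(y0, x0)])
        seen[y0, x0] = True
        while queue:  # breadth-first flood fill over 8-neighbors
            y, x = queue.popleft()
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < skel.shape[0] and 0 <= nx < skel.shape[1]
                            and skel[ny, nx] and not seen[ny, nx]):
                        seen[ny, nx] = True
                        queue.append((ny, nx))
    return n

# Hypothetical skeleton where pixel (1, 2) is a 1-pixel bridge.
skel = np.array([[1, 1, 0, 1, 1],
                 [0, 0, 1, 0, 0],
                 [0, 0, 0, 0, 0]], dtype=np.uint8)
trial = skel.copy()
trial[1, 2] = 0                       # tentatively remove the candidate pixel
safe = count_components(trial) <= 1   # False: removal splits the skeleton
```

Exactly as in the method, the pixel is only kept out of the skeleton when the trial removal leaves a single connected component; otherwise it is reported as a cutting pixel.
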

clear_edge_duplicates()

Remove duplicate edges by checking vertices and coordinates.

This method identifies and removes duplicate edges based on their vertex labels and pixel coordinates. It scans through the edge attributes, compares them, and removes duplicates if they are found.

Source code in src/cellects/image_analysis/network_functions.py
def clear_edge_duplicates(self):
    """
    Remove duplicate edges by checking vertices and coordinates.

    This method identifies and removes duplicate edges based on their vertex labels
    and pixel coordinates. It scans through the edge attributes, compares them,
    and removes duplicates if they are found.
    """
    edges_to_remove = []
    duplicates = find_duplicates_coord(np.vstack((self.edges_labels[:, 1:], self.edges_labels[:, :0:-1])))
    duplicates = np.logical_or(duplicates[:len(duplicates)//2], duplicates[len(duplicates)//2:])
    for v in self.edges_labels[duplicates, 1:]: #v = self.edges_labels[duplicates, 1:][4]
        edges_bool = np.logical_or(np.all(self.edges_labels[:, 1:] == v, axis=1), np.all(self.edges_labels[:, 1:] == v[::-1], axis=1))
        edge_labs = self.edges_labels[edges_bool, 0]
        for edge_i in range(0, len(edge_labs) - 1):  #  edge_i = 0
            edge_i_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_labs[edge_i], :2]
            for edge_j in range(edge_i + 1, len(edge_labs)):  #  edge_j = 1
                edge_j_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_labs[edge_j], :2]
                if np.array_equal(edge_i_coord, edge_j_coord):
                    edges_to_remove.append(edge_labs[edge_j])
    edges_to_remove = np.unique(edges_to_remove)
    for edge in edges_to_remove:
        edge_bool = self.edges_labels[:, 0] != edge
        self.edges_labels = self.edges_labels[edge_bool, :]
        self.edge_lengths = self.edge_lengths[edge_bool]
        self.edge_pix_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] != edge, :]
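The duplicate-pair detection can be illustrated with numpy alone: sorting each `(v1, v2)` pair makes forward and reversed orientations compare equal, which is what the stacked forward/reversed comparison in `clear_edge_duplicates` achieves. The edge table below is hypothetical:

```python
import numpy as np

# Hypothetical (edge_label, v1, v2) rows; edges 2 and 3 link the same vertices.
edges_labels = np.array([[1, 10, 11],
                         [2, 11, 12],
                         [3, 12, 11],
                         [4, 12, 13]], dtype=np.uint32)

# Sorting each vertex pair makes (11, 12) and (12, 11) identical rows.
pairs = np.sort(edges_labels[:, 1:], axis=1)
_, inverse, counts = np.unique(pairs, axis=0,
                               return_inverse=True, return_counts=True)
is_duplicated_pair = counts[inverse] > 1
print(edges_labels[is_duplicated_pair, 0])  # [2 3]
```

Note that sharing both endpoints is only a necessary condition: the real method then compares the pixel coordinates of the flagged edges and removes an edge only when its path is identical to another's (parallel edges between the same vertices are legitimate in a network with loops).
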

clear_vertices_connecting_2_edges()

Remove vertices connecting exactly two edges and update edge-related attributes.

This method identifies vertices that are connected to exactly 2 edges, renames edges, updates edge lengths and vertex coordinates accordingly. It also removes the corresponding vertices from non-tip vertices list.

Source code in src/cellects/image_analysis/network_functions.py
def clear_vertices_connecting_2_edges(self):
    """
    Remove vertices connecting exactly two edges and update edge-related attributes.

    This method identifies vertices that are connected to exactly 2 edges,
    renames edges, updates edge lengths and vertex coordinates accordingly.
    It also removes the corresponding vertices from non-tip vertices list.
    """
    v_labels, v_counts = np.unique(self.edges_labels[:, 1:], return_counts=True)
    vertices2 = v_labels[v_counts == 2]
    for vertex2 in vertices2:  # vertex2 = vertices2[0]
        edge_indices = np.nonzero(self.edges_labels[:, 1:] == vertex2)[0]
        edge_names = [self.edges_labels[edge_indices[0], 0], self.edges_labels[edge_indices[1], 0]]
        v_names = np.concatenate((self.edges_labels[edge_indices[0], 1:], self.edges_labels[edge_indices[1], 1:]))
        v_names = v_names[v_names != vertex2]
        if len(v_names) > 0: # Otherwise it's a vertex between a normal edge and a loop
            kept_edge = int(self.edge_lengths[edge_indices[1]] >= self.edge_lengths[edge_indices[0]])
            # Rename the removed edge by the kept edge name in pix_coord:
            self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_names[1 - kept_edge], 2] = edge_names[kept_edge]
            # Add the removed edge length to the kept edge length (minus 1, so the shared vertex is counted only once)
            self.edge_lengths[self.edges_labels[:, 0] == edge_names[kept_edge]] += self.edge_lengths[self.edges_labels[:, 0] == edge_names[1 - kept_edge]] - 1
            # Remove the corresponding edge length from the list
            self.edge_lengths = self.edge_lengths[self.edges_labels[:, 0] != edge_names[1 - kept_edge]]
            # Rename the vertex of the kept edge in edges_labels
            self.edges_labels[self.edges_labels[:, 0] == edge_names[kept_edge], 1:] = v_names[1 - kept_edge], v_names[kept_edge]
            # Remove the removed edge from the edges_labels array
            self.edges_labels = self.edges_labels[self.edges_labels[:, 0] != edge_names[1 - kept_edge], :]
            # vY, vX = np.nonzero(self.numbered_vertices == vertex2)
            # v_idx = np.nonzero(np.all(self.non_tip_vertices == [vY[0], vX[0]], axis=1))
            vY, vX = self.vertex_index_map[vertex2]
            v_idx = np.nonzero(np.all(self.non_tip_vertices == [vY, vX], axis=1))
            self.non_tip_vertices = np.delete(self.non_tip_vertices, v_idx, axis=0)
    # Sometimes, clearing vertices connecting 2 edges can create edge duplicates, so:
    self.clear_edge_duplicates()
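The degree-2 contraction rule can be sketched on a hypothetical three-edge chain; note the `- 1` term, which keeps the shared vertex pixel from being counted twice in the merged length:

```python
import numpy as np

# Hypothetical edge table: columns are (edge_label, v1, v2).
edges = np.array([[1, 1, 2],
                  [2, 2, 3],
                  [3, 3, 4]])
lengths = np.array([5.0, 4.0, 7.0])

vertex = 2  # a vertex touched by exactly two edges (edges 1 and 2)
idx = np.nonzero(np.any(edges[:, 1:] == vertex, axis=1))[0]

# Keep the longer edge and fold the shorter one into it.
keep, drop = (idx[0], idx[1]) if lengths[idx[0]] >= lengths[idx[1]] else (idx[1], idx[0])
merged_length = lengths[keep] + lengths[drop] - 1   # shared vertex counted once
endpoints = np.setdiff1d(edges[idx, 1:].ravel(), [vertex])
print(merged_length, endpoints)  # 8.0 [1 3]
```

The method performs the same merge for every degree-2 vertex, then relabels the dropped edge's pixels, deletes the vertex from `non_tip_vertices`, and finally re-runs `clear_edge_duplicates` because a merge can itself create a duplicate pair.
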

get_tipped_edges()

Extract skeleton edges connecting branching points and tips.

Makes sure that there is only one connected component constituting the skeleton of the network and identifies all edges that are connected to a tip.

Attributes:

Name Type Description
pad_skeleton ndarray of bool, modified

Boolean mask representing the pruned skeleton after isolating the largest connected component.

vertices_branching_tips ndarray of int, shape (N, 2)

Coordinates of branching points that connect to tips in the skeleton structure.

edge_lengths ndarray of float, shape (M,)

Lengths of edges connecting non-tip vertices to identified tip locations.

edge_pix_coord list of array of int

Pixel coordinates for each edge path between connected skeleton elements.

Source code in src/cellects/image_analysis/network_functions.py
def get_tipped_edges(self):
    """
    Extract skeleton edges connecting branching points and tips.

    Makes sure that there is only one connected component constituting the skeleton of the network and
    identifies all edges that are connected to a tip.

    Attributes
    ----------
    pad_skeleton : ndarray of bool, modified
        Boolean mask representing the pruned skeleton after isolating the largest connected component.
    vertices_branching_tips : ndarray of int, shape (N, 2)
        Coordinates of branching points that connect to tips in the skeleton structure.
    edge_lengths : ndarray of float, shape (M,)
        Lengths of edges connecting non-tip vertices to identified tip locations.
    edge_pix_coord : list of array of int
        Pixel coordinates for each edge path between connected skeleton elements.

    """
    self.pad_skeleton = keep_one_connected_component(self.pad_skeleton)
    self.vertices_branching_tips, self.edge_lengths, self.edge_pix_coord = _find_closest_vertices(self.pad_skeleton,
                                                                                    self.non_tip_vertices,
                                                                                    self.tips_coord[:, :2])

get_vertices_and_tips_coord()

Process skeleton data to extract non-tip vertices and tip coordinates.

This method processes the skeleton stored in self.pad_skeleton by first extracting all vertices and tips. It then separates these into branch points (non-tip vertices) and specific tip coordinates using internal processing.

Attributes:

Name Type Description
self.non_tip_vertices array-like

Coordinates of non-tip (branch) vertices.

self.tips_coord array-like

Coordinates of identified tips in the skeleton.

Source code in src/cellects/image_analysis/network_functions.py
def get_vertices_and_tips_coord(self):
    """Process skeleton data to extract non-tip vertices and tip coordinates.

    This method processes the skeleton stored in `self.pad_skeleton` by first
    extracting all vertices and tips. It then separates these into branch points
    (non-tip vertices) and specific tip coordinates using internal processing.

    Attributes
    ----------
    self.non_tip_vertices : array-like
        Coordinates of non-tip (branch) vertices.
    self.tips_coord : array-like
        Coordinates of identified tips in the skeleton.
    """
    pad_vertices, pad_tips = get_vertices_and_tips_from_skeleton(self.pad_skeleton)
    self.non_tip_vertices, self.tips_coord = get_branches_and_tips_coord(pad_vertices, pad_tips)
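A common neighbor-count heuristic, which `get_vertices_and_tips_from_skeleton` presumably refines, separates tips from branching vertices: a skeleton pixel with one 8-neighbor is a tip, one with three or more is a branch point. A toy sketch on a hypothetical Y-shaped skeleton:

```python
import numpy as np

# Hypothetical Y-shaped skeleton.
skel = np.array([[1, 0, 0, 0, 1],
                 [0, 1, 0, 1, 0],
                 [0, 0, 1, 0, 0],
                 [0, 0, 1, 0, 0]], dtype=np.uint8)

# Count each pixel's 8-neighbors with shifted slices of a zero-padded copy.
p = np.pad(skel.astype(int), 1)
neigh = (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:]
         + p[1:-1, :-2] + p[1:-1, 2:]
         + p[2:, :-2] + p[2:, 1:-1] + p[2:, 2:])

tips = np.argwhere((skel == 1) & (neigh == 1))            # endpoints
branch_points = np.argwhere((skel == 1) & (neigh >= 3))   # junctions
print(tips.tolist(), branch_points.tolist())
# [[0, 0], [0, 4], [3, 2]] [[2, 2]]
```

This mirrors the split performed here: `branch_points` feeds `non_tip_vertices` and `tips` feeds `tips_coord` (the real functions also handle clusters of touching vertices, which a bare neighbor count cannot).
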

label_edges_connected_with_vertex_clusters()

Identify edges connected to touching vertices by processing vertex clusters.

This function processes the skeleton to identify edges connecting vertices that are part of touching clusters. It creates a cropped version of the skeleton by removing already detected edges and their tips, then iterates through vertex clusters to explore and identify nearby edges.

Source code in src/cellects/image_analysis/network_functions.py
def label_edges_connected_with_vertex_clusters(self):
    """
    Identify edges connected to touching vertices by processing vertex clusters.

    This function processes the skeleton to identify edges connecting vertices
    that are part of touching clusters. It creates a cropped version of the skeleton
    by removing already detected edges and their tips, then iterates through vertex
    clusters to explore and identify nearby edges.
    """
    # I.1. Identify edges connected to touching vertices:
    # First, create another version of these arrays, where we remove every already detected edge and their tips
    cropped_skeleton = self.pad_skeleton.copy()
    cropped_skeleton[self.edge_pix_coord[:, 0], self.edge_pix_coord[:, 1]] = 0
    cropped_skeleton[self.tips_coord[:, 0], self.tips_coord[:, 1]] = 0

    # non_tip_vertices does not need to be updated yet, because it only contains verified branching vertices
    cropped_non_tip_vertices = self.non_tip_vertices.copy()

    self.new_level_vertices = None
    # The problem with vertex_to_vertex_connexion is that since they are not separated by zeros,
    # they always attract each other instead of exploring other paths.
    # To fix this, we loop over each vertex group to
    # 1. Add one edge per inter-vertex connexion inside the group
    # 2. Remove all except one, and loop as many times as necessary.
    # Inside that second loop, we explore and identify every edge nearby.
    # Find every vertex_to_vertex_connexion
    v_cluster_nb, self.v_cluster_lab, self.v_cluster_stats, vgc = cv2.connectedComponentsWithStats(
        (self.numbered_vertices > 0).astype(np.uint8), connectivity=8)
    if v_cluster_nb > 0:
        max_v_nb = np.max(self.v_cluster_stats[1:, 4])
        cropped_skeleton_list = []
        starting_vertices_list = []
        for v_nb in range(2, max_v_nb + 1):
            labels = np.nonzero(self.v_cluster_stats[:, 4] == v_nb)[0]
            coord_list = []
            for lab in labels:  # lab=labels[0]
                coord_list.append(np.nonzero(self.v_cluster_lab == lab))
            for iter in range(v_nb):
                for lab_ in range(labels.shape[0]): # lab=labels[0]
                    cs = cropped_skeleton.copy()
                    sv = []
                    v_c = coord_list[lab_]
                    # Save the current coordinate in the starting vertices array of this iteration
                    sv.append([v_c[0][iter], v_c[1][iter]])
                    # Remove one vertex coordinate to keep it from cs
                    v_y, v_x = np.delete(v_c[0], iter), np.delete(v_c[1], iter)
                    cs[v_y, v_x] = 0
                    cropped_skeleton_list.append(cs)
                    starting_vertices_list.append(np.array(sv))
        for cropped_skeleton, starting_vertices in zip(cropped_skeleton_list, starting_vertices_list):
            _, _ = self._identify_edges_connecting_a_vertex_list(cropped_skeleton, cropped_non_tip_vertices, starting_vertices)

label_edges_connecting_vertex_clusters()

Label edges connecting vertex clusters.

This method identifies the connections between connected vertices within vertex clusters and labels these edges. It uses the previously found connected vertices, creates an image of the connections, and then identifies and labels the edges between these touching vertices.
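Selecting the clusters (components of more than one vertex) amounts to filtering labeled components by size. A numpy-only sketch of the idea, where `np.bincount` plays the role of cv2's stats column 4:

```python
import numpy as np

labels = np.array([
    [0, 1, 0],
    [2, 2, 0],
    [0, 2, 3],
])
sizes = np.bincount(labels.ravel())                  # size of each component (index = label)
cluster_labels = np.nonzero(sizes > 1)[0]            # components with more than one pixel
cluster_labels = cluster_labels[cluster_labels > 0]  # drop background label 0

cluster_im = np.isin(labels, cluster_labels).astype(np.uint8)
```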

Source code in src/cellects/image_analysis/network_functions.py
def label_edges_connecting_vertex_clusters(self):
    """
    Label edges connecting vertex clusters.

    This method identifies the connections between connected vertices within
    vertex clusters and labels these edges. It uses the previously found connected
    vertices, creates an image of the connections, and then identifies
    and labels the edges between these touching vertices.
    """
    # I.2. Identify the connexions between connected vertices:
    all_connected_vertices = np.nonzero(self.v_cluster_stats[:, 4] > 1)[0][1:]
    all_con_v_im = np.zeros_like(self.pad_skeleton)
    for v_group in all_connected_vertices:
        all_con_v_im[self.v_cluster_lab == v_group] = 1
    cropped_skeleton = all_con_v_im
    self.vertex_clusters_coord = np.transpose(np.array(np.nonzero(cropped_skeleton)))
    _, _ = self._identify_edges_connecting_a_vertex_list(cropped_skeleton, self.vertex_clusters_coord, self.vertex_clusters_coord)
    # self.edges_labels
    del self.v_cluster_stats
    del self.v_cluster_lab

label_edges_from_known_vertices_iteratively()

Label edges iteratively from known vertices.

This method labels edges in an iterative process starting from known vertices. It handles the removal of detected edges and updates the skeleton accordingly, to avoid detecting edges twice.
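The control flow described above is a fixed-point loop: keep reprocessing until one pass removes no more vertices. A generic, simplified sketch of that pattern (the toy `one_pass` is hypothetical, standing in for one round of edge identification):

```python
def iterate_until_stable(items, process):
    """Repeatedly process `items` until one pass removes nothing."""
    previous = len(items) + 1
    while previous > len(items):
        previous = len(items)
        items = process(items)
    return items

# Toy pass: drop every item smaller than its left neighbor in the current list.
def one_pass(xs):
    return [x for i, x in enumerate(xs) if i == 0 or x >= xs[i - 1]]

result = iterate_until_stable([3, 1, 4, 1, 5], one_pass)
```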

Source code in src/cellects/image_analysis/network_functions.py
def label_edges_from_known_vertices_iteratively(self):
    """
    Label edges iteratively from known vertices.

    This method labels edges in an iterative process starting from
    known vertices. It handles the removal of detected edges and
    updates the skeleton accordingly, to avoid detecting edges twice.
    """
    # II/ Identify all remaining edges
    if self.new_level_vertices is not None:
        starting_vertices_coord = np.vstack((self.new_level_vertices[:, :2], self.vertices_branching_tips))
        starting_vertices_coord = np.unique(starting_vertices_coord, axis=0)
    else:
        # We start from the vertices connecting tips
        starting_vertices_coord = self.vertices_branching_tips.copy()
    # Remove the detected edges from cropped_skeleton:
    cropped_skeleton = self.pad_skeleton.copy()
    cropped_skeleton[self.edge_pix_coord[:, 0], self.edge_pix_coord[:, 1]] = 0
    cropped_skeleton[self.tips_coord[:, 0], self.tips_coord[:, 1]] = 0
    cropped_skeleton[self.vertex_clusters_coord[:, 0], self.vertex_clusters_coord[:, 1]] = 0

    # Reinitialize cropped_non_tip_vertices to browse all vertices except tips and groups
    cropped_non_tip_vertices = self.non_tip_vertices.copy()
    cropped_non_tip_vertices = remove_coordinates(cropped_non_tip_vertices, self.vertex_clusters_coord)
    del self.vertex_clusters_coord
    remaining_v = cropped_non_tip_vertices.shape[0] + 1
    while remaining_v > cropped_non_tip_vertices.shape[0]:
        remaining_v = cropped_non_tip_vertices.shape[0]
        cropped_skeleton, cropped_non_tip_vertices = self._identify_edges_connecting_a_vertex_list(cropped_skeleton, cropped_non_tip_vertices, starting_vertices_coord)
        if self.new_level_vertices is not None:
            starting_vertices_coord = np.unique(self.new_level_vertices[:, :2], axis=0)

label_edges_looping_on_1_vertex()

Identify and handle edges that form loops around a single vertex. This method processes the skeleton image to find looping edges and updates the edge data structure accordingly.
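When a loop touches more than two vertices, the method picks its two endpoints via the most distant pair of vertex-connected pixels (`get_min_or_max_euclidean_pair`). A brute-force numpy sketch of that helper's "max" case; the real implementation may differ:

```python
import numpy as np

def max_euclidean_pair(ys, xs):
    """Return the two points with the largest pairwise Euclidean distance (O(n^2))."""
    pts = np.stack((ys, xs), axis=1).astype(np.float64)
    diff = pts[:, None, :] - pts[None, :, :]
    d2 = (diff ** 2).sum(axis=2)            # squared distance matrix
    i, j = np.unravel_index(np.argmax(d2), d2.shape)
    return pts[i], pts[j]

p1, p2 = max_euclidean_pair(np.array([0, 3, 0]), np.array([0, 4, 1]))
```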

Source code in src/cellects/image_analysis/network_functions.py
def label_edges_looping_on_1_vertex(self):
    """
    Identify and handle edges that form loops around a single vertex.
    This method processes the skeleton image to find looping edges and updates
    the edge data structure accordingly.
    """
    self.identified = np.zeros_like(self.numbered_vertices)
    self.identified[self.edge_pix_coord[:, 0], self.edge_pix_coord[:, 1]] = 1
    self.identified[self.non_tip_vertices[:, 0], self.non_tip_vertices[:, 1]] = 1
    self.identified[self.tips_coord[:, 0], self.tips_coord[:, 1]] = 1
    unidentified = (1 - self.identified) * self.pad_skeleton

    # Find out the remaining non-identified pixels
    nb, self.unidentified_shapes, self.unidentified_stats, ce = cv2.connectedComponentsWithStats(unidentified.astype(np.uint8))
    # Handle the cases where edges are loops over only one vertex
    looping_edges = np.nonzero(self.unidentified_stats[:, 4] > 2)[0][1:]
    for loop_i in looping_edges: # loop_i = looping_edges[0] loop_i=11 #  zoom_on_nonzero(unique_vertices_im, return_coord=False)
        edge_i = (self.unidentified_shapes == loop_i).astype(np.uint8)
        dil_edge_i = cv2.dilate(edge_i, square_33)
        unique_vertices_im = self.numbered_vertices.copy()
        unique_vertices_im[self.tips_coord[:, 0], self.tips_coord[:, 1]] = 0
        unique_vertices_im = dil_edge_i * unique_vertices_im
        unique_vertices = np.unique(unique_vertices_im)
        unique_vertices = unique_vertices[unique_vertices > 0]
        v_nb = len(unique_vertices)
        new_edge_lengths = edge_i.sum()
        new_edge_pix_coord = np.transpose(np.vstack((np.nonzero(edge_i))))
        new_edge_pix_coord = np.hstack((new_edge_pix_coord, np.repeat(1, new_edge_pix_coord.shape[0])[:, None]))
        if v_nb == 1:
            start, end = unique_vertices[0], unique_vertices[0]
            self._update_edge_data(start, end, new_edge_lengths, new_edge_pix_coord)
        elif v_nb == 2:
            # The edge loops around a group of connected vertices
            start, end = unique_vertices[0], unique_vertices[1]
            self._update_edge_data(start, end, new_edge_lengths, new_edge_pix_coord)
            # conn_v_nb, conn_v = cv2.connectedComponents((unique_vertices_im > 0).astype(np.uint8))
            # if len(unique_vertices) == 2 and conn_v_nb == 2:
        elif v_nb > 2: # The question is: How to choose two vertices so that they link all missing pixels?
            # 1. Find every edge pixel connected to these vertices
            vertex_connected_pixels = np.nonzero(cv2.dilate((unique_vertices_im > 0).astype(np.uint8), square_33) * edge_i)
            # 2. Find the most distant pair of these
            pix1, pix2 = get_min_or_max_euclidean_pair(vertex_connected_pixels, "max")
            # 3. The two best vertices are the two nearest to these two most distant edge pixels
            dist_to_pix1 = np.zeros(v_nb, np.float64)
            dist_to_pix2 = np.zeros(v_nb, np.float64)
            for _i, v_i in enumerate(unique_vertices):
                v_coord = self.vertex_index_map[v_i]
                dist_to_pix1[_i] = eudist(pix1, v_coord)
                dist_to_pix2[_i] = eudist(pix2, v_coord)
            start, end = unique_vertices[np.argmin(dist_to_pix1)], unique_vertices[np.argmin(dist_to_pix2)]
            self._update_edge_data(start, end, new_edge_lengths, new_edge_pix_coord)
        else:
            logging.error(f"t={self.t}, One long edge is not identified: i={loop_i} of length={edge_i.sum()} close to {len(unique_vertices)} vertices.")
    self.identified[self.edge_pix_coord[:, 0], self.edge_pix_coord[:, 1]] = 1

label_tipped_edges_and_their_vertices()

Label edges connecting tip vertices to branching vertices and assign unique labels to all relevant vertices.

Processes vertex coordinates by stacking tips, vertices branching from tips, and remaining non-tip vertices. Assigns unique sequential identifiers to these vertices in a new array. Constructs an array of edge-label information, where each row contains the edge label (starting at 1), corresponding tip label, and connected vertex label.

Attributes:

tip_number : int
    The number of tip coordinates available in tips_coord.
ordered_v_coord : ndarray of float
    Stack of unique vertex coordinates ordered by: tips first, vertices branching tips second, non-tip vertices third.
numbered_vertices : ndarray of uint32
    2D array where each coordinate position is labeled with a sequential integer (starting at 1) based on the order in ordered_v_coord.
edges_labels : ndarray of uint32
    Array of shape (n_edges, 3). Each row contains the edge label (sequential from 1 to n_edges), the label of the tip vertex for that edge, and the label of the vertex branching the tip.
vertices_branching_tips : ndarray of float
    Unique coordinates of vertices directly connected to tips after removing duplicates.
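The labeling scheme boils down to writing sequential ids into a 2-D array and keeping a label-to-coordinate map, mirroring `numbered_vertices` and `vertex_index_map`. A minimal sketch with made-up coordinates:

```python
import numpy as np

coords = np.array([[0, 1], [2, 2], [4, 0]])  # (y, x) vertex coordinates, already ordered
numbered = np.zeros((5, 5), dtype=np.uint32)
numbered[coords[:, 0], coords[:, 1]] = np.arange(1, coords.shape[0] + 1)

# Reverse map: label -> coordinate, analogous to self.vertex_index_map.
index_map = {i + 1: (int(y), int(x)) for i, (y, x) in enumerate(coords)}
```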

Source code in src/cellects/image_analysis/network_functions.py
def label_tipped_edges_and_their_vertices(self):
    """Label edges connecting tip vertices to branching vertices and assign unique labels to all relevant vertices.

    Processes vertex coordinates by stacking tips, vertices branching from tips, and remaining non-tip vertices.
    Assigns unique sequential identifiers to these vertices in a new array. Constructs an array of edge-label information,
    where each row contains the edge label (starting at 1), corresponding tip label, and connected vertex label.

    Attributes
    ----------
    tip_number : int
        The number of tip coordinates available in `tips_coord`.

    ordered_v_coord : ndarray of float
        Stack of unique vertex coordinates ordered by: tips first, vertices branching tips second, non-tip vertices third.

    numbered_vertices : ndarray of uint32
        2D array where each coordinate position is labeled with a sequential integer (starting at 1) based on the order in `ordered_v_coord`.

    edges_labels : ndarray of uint32
        Array of shape (n_edges, 3). Each row contains:
        - Edge label (sequential from 1 to n_edges)
        - Label of the tip vertex for that edge.
        - Label of the vertex branching the tip.

    vertices_branching_tips : ndarray of float
        Unique coordinates of vertices directly connected to tips after removing duplicates.
    """
    self.tip_number = self.tips_coord.shape[0]

    # Stack vertex coordinates in that order: 1. Tips, 2. Vertices branching tips, 3. All remaining vertices
    ordered_v_coord = np.vstack((self.tips_coord[:, :2], self.vertices_branching_tips[:, :2], self.non_tip_vertices))
    ordered_v_coord = np.unique(ordered_v_coord, axis=0)

    # Create arrays to store edges and vertices labels
    self.numbered_vertices = np.zeros(self.im_shape, dtype=np.uint32)
    self.numbered_vertices[ordered_v_coord[:, 0], ordered_v_coord[:, 1]] = np.arange(1, ordered_v_coord.shape[0] + 1)
    self.vertices = None
    self.vertex_index_map = {}
    for idx, (y, x) in enumerate(ordered_v_coord):
        self.vertex_index_map[idx + 1] = tuple((np.uint32(y), np.uint32(x)))

    # Name edges from 1 to the number of edges connecting tips and set the vertices labels from all tips to their connected vertices:
    self.edges_labels = np.zeros((self.tip_number, 3), dtype=np.uint32)
    # edge label:
    self.edges_labels[:, 0] = np.arange(self.tip_number) + 1
    # tip label:
    self.edges_labels[:, 1] = self.numbered_vertices[self.tips_coord[:, 0], self.tips_coord[:, 1]]
    # vertex branching tip label:
    self.edges_labels[:, 2] = self.numbered_vertices[self.vertices_branching_tips[:, 0], self.vertices_branching_tips[:, 1]]

    # Remove duplicates in vertices_branching_tips
    self.vertices_branching_tips = np.unique(self.vertices_branching_tips[:, :2], axis=0)

make_edge_table(greyscale, compute_BC=False)

Generate edge table with length and average intensity information.

This method processes the vertex coordinates, calculates lengths between vertices for each edge, and computes average width and intensity along the edges. When compute_BC is True, it also computes edge betweenness centrality for each edge.

Parameters:

greyscale : ndarray of uint8
    Grayscale image.
compute_BC : bool, optional
    Whether to compute edge betweenness centrality. Defaults to False.
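The betweenness-centrality step only needs the (vertex1, vertex2) pairs from the edge table. A toy sketch using networkx, as the method does: on a path graph the middle edge lies on more shortest paths, so it should score highest.

```python
import networkx as nx

# A path 1-2-3-4: the middle edge (2, 3) lies on more shortest paths.
G = nx.from_edgelist([(1, 2), (2, 3), (3, 4)])
bc = nx.edge_betweenness_centrality(G, seed=0)
middle = bc[(2, 3)]
end = bc[(1, 2)]
```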
Source code in src/cellects/image_analysis/network_functions.py
def make_edge_table(self, greyscale: NDArray[np.uint8], compute_BC: bool=False):
    """
    Generate edge table with length and average intensity information.

    This method processes the vertex coordinates, calculates lengths
    between vertices for each edge, and computes average width and intensity
    along the edges. When `compute_BC` is True, it also computes edge
    betweenness centrality for each edge.

    Parameters
    ----------
    greyscale : ndarray of uint8
        Grayscale image.
    compute_BC : bool, optional
        Whether to compute edge betweenness centrality. Defaults to False.
    """
    if self.vertices is None:
        self._remove_padding()
    self.edge_table = np.zeros((self.edges_labels.shape[0], 7), float) # edge_id, vertex1, vertex2, length, average_width, int, BC
    self.edge_table[:, :3] = self.edges_labels[:, :]
    self.edge_table[:, 3] = self.edge_lengths
    for idx, edge_lab in enumerate(self.edges_labels[:, 0]):
        edge_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_lab, :]
        pix_widths = self.distances[edge_coord[:, 0], edge_coord[:, 1]]
        v_id = self.edges_labels[self.edges_labels[:, 0] == edge_lab, 1:][0]
        v1_coord = self.vertex_table[self.vertex_table[:, 2] == v_id[0], :2][0]#
        v2_coord = self.vertex_table[self.vertex_table[:, 2] == v_id[1], :2][0]#
        v1_width, v2_width = self.distances[v1_coord[0], v1_coord[1]], self.distances[v2_coord[0], v2_coord[1]]

        if not np.isnan(v1_width):
            pix_widths = np.append(pix_widths, v1_width)
        if not np.isnan(v2_width):
            pix_widths = np.append(pix_widths, v2_width)
        if pix_widths.size > 0:
            self.edge_table[idx, 4] = pix_widths.mean()
        else:
            self.edge_table[idx, 4] = np.nan
        pix_ints = greyscale[edge_coord[:, 0], edge_coord[:, 1]]
        v1_int, v2_int = greyscale[v1_coord[0], v1_coord[1]], greyscale[v2_coord[0], v2_coord[1]]
        pix_ints = np.append(pix_ints, (v1_int, v2_int))
        self.edge_table[idx, 5] = pix_ints.mean()

    if compute_BC:
        G = nx.from_edgelist(self.edges_labels[:, 1:])
        e_BC = nx.edge_betweenness_centrality(G, seed=0)
        self.BC_net = np.zeros_like(self.distances)
        for v, k in e_BC.items(): # v=(81, 80)
            v_bool = np.logical_or(self.vertex_table[:, 2] == v[0], self.vertex_table[:, 2] == v[1])
            full_coord = self.vertex_table[v_bool, :2]
            edges_bool = np.logical_or(np.all(self.edges_labels[:, 1:] == v[::-1], axis=1),
                                       np.all(self.edges_labels[:, 1:] == v, axis=1))
            edge_labs = self.edges_labels[edges_bool, 0]
            if len(edge_labs) == 1:
                edge_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_labs, :2]
                full_coord = np.concatenate((full_coord, edge_coord))
                self.BC_net[full_coord[:, 0], full_coord[:, 1]] = k
                self.edge_table[self.edge_table[:, 0] == edge_labs, 6] = k
            elif len(edge_labs) > 1:
                edge_coord0 = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_labs[0], :2]
                for edge_i in range(len(edge_labs)): #  edge_i=1
                    edge_coord = self.edge_pix_coord[self.edge_pix_coord[:, 2] == edge_labs[edge_i], :2]
                    self.edge_table[self.edge_table[:, 0] == edge_labs[edge_i], 6] = k
                    full_coord = np.concatenate((full_coord, edge_coord))
                    self.BC_net[full_coord[:, 0], full_coord[:, 1]] = k
                    if edge_i > 0 and np.array_equal(edge_coord0, edge_coord):
                        logging.error(f"There are still two identical edges: {edge_labs} of len: {len(edge_coord)} linking vertices {v}")
                        break

make_vertex_table(origin_contours=None, growing_areas=None)

Generate a table for the vertices.

This method constructs and returns a 2D NumPy array holding information about all vertices. Each row corresponds to one vertex identified by its coordinates in self.tips_coord or self.non_tip_vertices. The array includes additional information about each vertex: whether it is a food vertex, whether it lies in a growing area, and whether it belongs to a cluster of connected vertices.

Parameters:

origin_contours : ndarray of uint8, optional
    Binary map to identify food vertices. Default is None.
growing_areas : ndarray, optional
    Binary map to identify growing regions. Default is None.
Notes
The method updates the instance attribute `self.vertex_table` with
the generated vertex information.
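Flagging food vertices is essentially an `np.isin` lookup: find which table labels appear under the origin contour, then set their flag column. A simplified sketch with a hypothetical mini table:

```python
import numpy as np

# Column 2 holds vertex labels; column 4 will flag food vertices.
vertex_table = np.zeros((4, 6), dtype=np.int64)
vertex_table[:, 2] = [1, 2, 3, 4]

food_vertices = np.array([2, 4])  # labels found under origin_contours
vertex_table[np.isin(vertex_table[:, 2], food_vertices), 4] = 1
```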
Source code in src/cellects/image_analysis/network_functions.py
def make_vertex_table(self, origin_contours: NDArray[np.uint8]=None, growing_areas: NDArray=None):
    """
    Generate a table for the vertices.

    This method constructs and returns a 2D NumPy array holding information
    about all vertices. Each row corresponds to one vertex identified either
    by its coordinates in `self.tips_coord` or `self.non_tip_vertices`. The
    array includes additional information about each vertex, including whether
    they are food vertices, growing areas, and connected components.

    Parameters
    ----------
    origin_contours : ndarray of uint8, optional
        Binary map to identify food vertices. Default is `None`.
    growing_areas : ndarray, optional
        Binary map to identify growing regions. Default is `None`.

    Notes
    -----
        The method updates the instance attribute `self.vertex_table` with
        the generated vertex information.
    """
    if self.vertices is None:
        self._remove_padding()
    self.vertex_table = np.zeros((self.tips_coord.shape[0] + self.non_tip_vertices.shape[0], 6), dtype=self.vertices.dtype)
    self.vertex_table[:self.tips_coord.shape[0], :2] = self.tips_coord
    self.vertex_table[self.tips_coord.shape[0]:, :2] = self.non_tip_vertices
    self.vertex_table[:self.tips_coord.shape[0], 2] = self.vertices[self.tips_coord[:, 0], self.tips_coord[:, 1]]
    self.vertex_table[self.tips_coord.shape[0]:, 2] = self.vertices[self.non_tip_vertices[:, 0], self.non_tip_vertices[:, 1]]
    self.vertex_table[:self.tips_coord.shape[0], 3] = 1
    if origin_contours is not None:
        food_vertices = self.vertices[origin_contours > 0]
        food_vertices = food_vertices[food_vertices > 0]
        self.vertex_table[np.isin(self.vertex_table[:, 2], food_vertices), 4] = 1

    if growing_areas is not None and growing_areas.shape[1] > 0:
        # growing = np.unique(self.vertices * growing_areas)[1:]
        growing = np.unique(self.vertices[growing_areas[0], growing_areas[1]])
        growing = growing[growing > 0]
        if len(growing) > 0:
            growing = np.isin(self.vertex_table[:, 2], growing)
            self.vertex_table[growing, 4] = 2

    nb, sh, stats, cent = cv2.connectedComponentsWithStats((self.vertices > 0).astype(np.uint8))
    for i, v_i in enumerate(np.nonzero(stats[:, 4] > 1)[0][1:]):
        v_labs = self.vertices[sh == v_i]
        for v_lab in v_labs: # v_lab = v_labs[0]
            self.vertex_table[self.vertex_table[:, 2] == v_lab, 5] = 1

remove_tipped_edge_smaller_than_branch_width()

Remove very short edges from the skeleton.

This method focuses on edges connecting tips. When too short, they are considered noise and removed from the skeleton and distance matrices. An edge is considered too short when its length is smaller than the width of the nearest network branch (information stored in pad_distances). This method also updates internal data structures (skeleton, edge coordinates, vertex/tip positions) accordingly through pixel-wise analysis and connectivity checks.
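The removal criterion can be sketched in isolation: a tipped edge is dropped when its length does not exceed the widest distance value in the 3x3 neighborhood of its connecting vertex. The function below is a hypothetical standalone version of that check, not the class method:

```python
import numpy as np

def drop_tip_edge(distances, vertex_yx, edge_length):
    """True when a tipped edge is no longer than the local branch width."""
    y, x = vertex_yx
    local_width = np.nanmax(distances[y - 1:y + 2, x - 1:x + 2])
    return bool(local_width >= edge_length)

widths = np.zeros((5, 5))
widths[2, 2] = 4.0  # the branch is 4 pixels wide at the connecting vertex

short_stub = drop_tip_edge(widths, (2, 2), 3)  # shorter than the width: noise
long_edge = drop_tip_edge(widths, (2, 2), 5)   # longer: kept
```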

Source code in src/cellects/image_analysis/network_functions.py
def remove_tipped_edge_smaller_than_branch_width(self):
    """Remove very short edges from the skeleton.

    This method focuses on edges connecting tips. When too short, they are considered noise and
    removed from the skeleton and distance matrices. An edge is considered too short when its length is
    smaller than the width of the nearest network branch (information stored in pad_distances).
    This method also updates internal data structures (skeleton, edge coordinates, vertex/tip positions)
    accordingly through pixel-wise analysis and connectivity checks.
    """
    # Identify edges that are smaller than the width of the branch it is attached to
    tipped_edges_to_remove = np.zeros(self.edge_lengths.shape[0], dtype=bool)
    # connecting_vertices_to_remove = np.zeros(self.vertices_branching_tips.shape[0], dtype=bool)
    branches_to_remove = np.zeros(self.non_tip_vertices.shape[0], dtype=bool)
    new_edge_pix_coord = []
    remaining_tipped_edges_nb = 0
    for i in range(len(self.edge_lengths)): # i = 3142 #1096 # 974 # 222
        Y, X = self.vertices_branching_tips[i, 0], self.vertices_branching_tips[i, 1]
        edge_bool = self.edge_pix_coord[:, 2] == i + 1
        eY, eX = self.edge_pix_coord[edge_bool, 0], self.edge_pix_coord[edge_bool, 1]
        if np.nanmax(self.pad_distances[(Y - 1): (Y + 2), (X - 1): (X + 2)]) >= self.edge_lengths[i]:
            tipped_edges_to_remove[i] = True
            # Remove the edge
            self.pad_skeleton[eY, eX] = 0
            # Remove the tip
            self.pad_skeleton[self.tips_coord[i, 0], self.tips_coord[i, 1]] = 0

            # Remove the coordinates corresponding to that edge
            self.edge_pix_coord = np.delete(self.edge_pix_coord, edge_bool, 0)

            # Check whether the connecting vertex remains a vertex or not
            pad_sub_skeleton = np.pad(self.pad_skeleton[(Y - 2): (Y + 3), (X - 2): (X + 3)], [(1,), (1,)],
                                      mode='constant')
            sub_vertices, sub_tips = get_vertices_and_tips_from_skeleton(pad_sub_skeleton)
            # If the vertex does not connect at least 3 edges anymore, remove its vertex label
            if sub_vertices[3, 3] == 0:
                vertex_to_remove = np.nonzero(np.logical_and(self.non_tip_vertices[:, 0] == Y, self.non_tip_vertices[:, 1] == X))[0]
                branches_to_remove[vertex_to_remove] = True
            # If that pixel became a tip connected to another vertex remove it from the skeleton
            if sub_tips[3, 3]:
                if sub_vertices[2:5, 2:5].sum() > 1:
                    self.pad_skeleton[Y, X] = 0
                    self.edge_pix_coord = np.delete(self.edge_pix_coord, np.all(self.edge_pix_coord[:, :2] == [Y, X], axis=1), 0)
                    vertex_to_remove = np.nonzero(np.logical_and(self.non_tip_vertices[:, 0] == Y, self.non_tip_vertices[:, 1] == X))[0]
                    branches_to_remove[vertex_to_remove] = True
        else:
            remaining_tipped_edges_nb += 1
            new_edge_pix_coord.append(np.stack((eY, eX, np.repeat(remaining_tipped_edges_nb, len(eY))), axis=1))

    # Check that any extra connected components are 1 pixel in size; if so,
    # they were neighbors of removed tips and not necessary for the skeleton
    nb, sh = cv2.connectedComponents(self.pad_skeleton)
    if nb > 2:
        logging.error("Removing small tipped edges split the skeleton")
        # for i in range(2, nb):
        #     excedent = sh == i
        #     if (excedent).sum() == 1:
        #         self.pad_skeleton[excedent] = 0

    # Remove in distances the pixels removed in skeleton:
    self.pad_distances *= self.pad_skeleton

    # update edge_pix_coord
    if len(new_edge_pix_coord) > 0:
        self.edge_pix_coord = np.vstack(new_edge_pix_coord)

    # # Remove tips connected to very small edges
    # self.tips_coord = np.delete(self.tips_coord, tipped_edges_to_remove, 0)
    # # Add corresponding edge names
    # self.tips_coord = np.hstack((self.tips_coord, np.arange(1, len(self.tips_coord) + 1)[:, None]))

    # # Within all branching (non-tip) vertices, keep those that did not lose their vertex status because of the edge removal
    # self.non_tip_vertices = np.delete(self.non_tip_vertices, branches_to_remove, 0)

    # # Get the branching vertices that kept their tipped edge
    # self.vertices_branching_tips = np.delete(self.vertices_branching_tips, tipped_edges_to_remove, 0)

    # Within all branching (non-tip) vertices, keep those that do not connect a tipped edge.
    # v_branching_tips_in_branching_v = find_common_coord(self.non_tip_vertices, self.vertices_branching_tips[:, :2])
    # self.remaining_vertices = np.delete(self.non_tip_vertices, v_branching_tips_in_branching_v, 0)
    # ordered_v_coord = np.vstack((self.tips_coord[:, :2], self.vertices_branching_tips[:, :2], self.remaining_vertices))

    # tips = self.tips_coord
    # branching_any_edge = self.non_tip_vertices
    # branching_typped_edges = self.vertices_branching_tips
    # branching_no_typped_edges = self.remaining_vertices

    self.get_vertices_and_tips_coord()
    self.get_tipped_edges()

run_edge_identification()

Run the edge identification process.

This method orchestrates a series of steps to identify and label edges within the graph structure. Each step handles a specific aspect of edge identification, ultimately leading to a clearer and more refined edge network.

Steps involved:

1. Get vertices and tips coordinates.
2. Identify tipped edges.
3. Remove tipped edges smaller than branch width.
4. Label tipped edges and their vertices.
5. Label edges connected with vertex clusters.
6. Label edges connecting vertex clusters.
7. Label edges from known vertices iteratively.
8. Label edges looping on 1 vertex.
9. Clear areas with 1 or 2 unidentified pixels.
10. Clear edge duplicates.
11. Clear vertices connecting 2 edges.

Source code in src/cellects/image_analysis/network_functions.py
def run_edge_identification(self):
    """
    Run the edge identification process.

    This method orchestrates a series of steps to identify and label edges
    within the graph structure. Each step handles a specific aspect of edge
    identification, ultimately leading to a clearer and more refined edge network.

    Steps involved:
    1. Get vertices and tips coordinates.
    2. Identify tipped edges.
    3. Remove tipped edges smaller than branch width.
    4. Label tipped edges and their vertices.
    5. Label edges connected with vertex clusters.
    6. Label edges connecting vertex clusters.
    7. Label edges from known vertices iteratively.
    8. Label edges looping on 1 vertex.
    9. Clear areas with 1 or 2 unidentified pixels.
    10. Clear edge duplicates.
    11. Clear vertices connecting 2 edges.
    """
    self.get_vertices_and_tips_coord()
    self.get_tipped_edges()
    self.remove_tipped_edge_smaller_than_branch_width()
    self.label_tipped_edges_and_their_vertices()
    self.check_vertex_existence()
    self.label_edges_connected_with_vertex_clusters()
    self.label_edges_connecting_vertex_clusters()
    self.label_edges_from_known_vertices_iteratively()
    self.label_edges_looping_on_1_vertex()
    self.clear_areas_of_1_or_2_unidentified_pixels()
    self.clear_edge_duplicates()
    self.clear_vertices_connecting_2_edges()

NetworkDetection

NetworkDetection

Class for detecting vessels in images using Frangi and Sato filters with various parameter sets. It applies different thresholding methods, calculates quality metrics, and selects the best detection method.
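One of the thresholding methods mentioned above is Otsu's: pick the cut that maximizes between-class variance of the intensity histogram. The module itself uses skimage's threshold_otsu; the minimal numpy version below is only a sketch of that idea for an 8-bit image.

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Return the 8-bit threshold maximizing between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    w0 = np.cumsum(p)                    # weight of the "low" class per cut
    mu = np.cumsum(p * np.arange(256))   # cumulative mean
    mu_t = mu[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    var_between = np.zeros(256)
    var_between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return int(np.argmax(var_between))

# A clearly bimodal toy image separates at the low mode.
img = np.array([10] * 50 + [200] * 50, dtype=np.uint8).reshape(10, 10)
t = otsu_threshold(img)
binary = img > t
```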

Source code in src/cellects/image_analysis/network_functions.py
class NetworkDetection:
    """
    NetworkDetection

    Class for detecting vessels in images using Frangi and Sato filters with various parameter sets.
    It applies different thresholding methods, calculates quality metrics, and selects the best detection method.
    """
    def __init__(self, greyscale_image: NDArray[np.uint8], possibly_filled_pixels: NDArray[np.uint8]=None, add_rolling_window: bool=False, origin_to_add: NDArray[np.uint8]=None, edge_max_width: int=5, best_result: dict=None):
        """
        Initialize the object with given parameters.

        Parameters
        ----------
        greyscale_image : NDArray[np.uint8]
            The input greyscale image.
        possibly_filled_pixels : NDArray[np.uint8], optional
            Image containing possibly filled pixels. Defaults to None.
        add_rolling_window : bool, optional
            Flag to add rolling window. Defaults to False.
        origin_to_add : NDArray[np.uint8], optional
            Origin to add. Defaults to None.
        edge_max_width : int, optional
            Maximal width of network edges. Defaults to 5.
        best_result : dict, optional
            Best result dictionary. Defaults to None.
        """
        self.greyscale_image = greyscale_image
        if possibly_filled_pixels is None:
            self.possibly_filled_pixels = np.ones(self.greyscale_image.shape, dtype=np.uint8)
        else:
            self.possibly_filled_pixels = possibly_filled_pixels
        self.edge_max_width = edge_max_width
        k_size = edge_max_width // 2
        k_size = k_size - k_size % 2 + 1
        self.kernel = create_ellipse(k_size, k_size).astype(np.uint8)
        self.best_result = best_result
        self.add_rolling_window = add_rolling_window
        self.origin_to_add = origin_to_add
        self.frangi_beta = 1.
        self.frangi_gamma = 1.
        self.black_ridges = True

    def apply_frangi_variations(self) -> list:
        """
        Applies various Frangi filter variations with different sigma values and thresholding methods.

        This method applies the Frangi vesselness filter with multiple sets of sigma values
        to detect vessels at different scales. It applies both Otsu thresholding and rolling window
        segmentation to the filtered results and calculates binary quality indices.

        Returns
        -------
        results : list of dict
            A list containing dictionaries with the method name, binary result, quality index,
            filtered image, filter type, rolling window flag, and sigma values used.
        """
        results = []

        # Parameter variations for Frangi filter
        frangi_sigmas = {
            's_fine_vessels': [0.75],
            'fine_vessels': [0.5, 1.0],  # Very fine capillaries, thin fibers
            'small_vessels': [1.0, 2.0],  # Small vessels, fine structures
            'multi_scale_medium': [1.0, 2.0, 3.0],  # Standard multi-scale
            'ultra_fine': [0.3, 0.5, 0.8],  # Ultra-fine structures
            'comprehensive': [0.5, 1.0, 2.0, 4.0],  # Multi-scale
            'retinal_vessels': [1.0, 2.0, 4.0, 8.0],  # Optimized for retinal imaging
            'microscopy': [0.5, 1.0, 1.5, 2.5],  # Microscopy applications
            'broad_spectrum': [0.5, 1.5, 3.0, 6.0, 10.0]
        }

        for i, (key, sigmas) in enumerate(frangi_sigmas.items()):
            # Apply Frangi filter
            frangi_result = frangi(self.greyscale_image, sigmas=sigmas, beta=self.frangi_beta, gamma=self.frangi_gamma, black_ridges=self.black_ridges)

            # Apply both thresholding methods
            # Method 1: Otsu thresholding
            thresh_otsu = threshold_otsu(frangi_result)
            binary_otsu = frangi_result > thresh_otsu
            quality_otsu = binary_quality_index(self.possibly_filled_pixels * binary_otsu)

            # Store results
            results.append({
                'method': f'f_{sigmas}_thresh',
                'binary': binary_otsu,
                'quality': quality_otsu,
                'filtered': frangi_result,
                'filter': f'Frangi',
                'rolling_window': False,
                'sigmas': sigmas
            })
            # Method 2: Rolling window thresholding
            if self.add_rolling_window:
                binary_rolling = rolling_window_segmentation(frangi_result, self.possibly_filled_pixels, patch_size=(10, 10))
                quality_rolling = binary_quality_index(binary_rolling)
                results.append({
                    'method': f'f_{sigmas}_roll',
                    'binary': binary_rolling,
                    'quality': quality_rolling,
                    'filtered': frangi_result,
                    'filter': f'Frangi',
                    'rolling_window': True,
                    'sigmas': sigmas
                })

        return results


    def apply_sato_variations(self) -> list:
        """
        Apply various Sato filter variations to an image and store the results.

        This function applies different parameter sets for the Sato vesselness
        filter to an image, applies two thresholding methods (Otsu and rolling window),
        and stores the results. The function supports optional rolling window
        segmentation based on a configuration flag.

        Returns
        -------
        list of dict
            A list containing dictionaries with the results for each filter variation.
            Each dictionary includes method name, binary image, quality index,
            filtered result, filter type, rolling window flag, and sigma values.
        """
        results = []

        # Parameter variations for Sato filter
        sato_sigmas = {
            'super_small_tubes': [0.01, 0.05, 0.1, 0.15],  #
            'small_tubes': [0.1, 0.2, 0.4, 0.8],  #
            's_thick_ridges': [0.25, 0.75],  # Thick ridges/tubes
            'small_multi_scale': [0.1, 0.2, 0.4, 0.8, 1.6],  #
            'fine_ridges': [0.8, 1.5],  # Fine ridge detection
            'medium_ridges': [1.5, 3.0],  # Medium ridge structures
            'multi_scale_fine': [0.8, 1.5, 2.5],  # Multi-scale fine detection
            'multi_scale_standard': [1.0, 2.5, 5.0],  # Standard multi-scale
            'edge_enhanced': [0.5, 1.0, 2.0],  # Edge-enhanced detection
            'noise_robust': [1.5, 2.5, 4.0],  # Robust to noise
            'fingerprint': [1.0, 1.5, 2.0, 3.0],  # Fingerprint ridge detection
            'geological': [2.0, 5.0, 10.0, 15.0]  # Geological structures
        }

        for i, (key, sigmas) in enumerate(sato_sigmas.items()):
            # Apply Sato filter
            sato_result = sato(self.greyscale_image, sigmas=sigmas, black_ridges=self.black_ridges, mode='reflect')

            # Apply both thresholding methods
            # Method 1: Otsu thresholding
            thresh_otsu = threshold_otsu(sato_result)
            binary_otsu = sato_result > thresh_otsu
            quality_otsu = binary_quality_index(self.possibly_filled_pixels * binary_otsu)


            # Store results
            results.append({
                'method': f's_{sigmas}_thresh',
                'binary': binary_otsu,
                'quality': quality_otsu,
                'filtered': sato_result,
                'filter': f'Sato',
                'rolling_window': False,
                'sigmas': sigmas
            })

            # Method 2: Rolling window thresholding
            if self.add_rolling_window:
                binary_rolling = rolling_window_segmentation(sato_result, self.possibly_filled_pixels, patch_size=(10, 10))
                quality_rolling = binary_quality_index(binary_rolling)

                results.append({
                    'method': f's_{sigmas}_roll',
                    'binary': binary_rolling,
                    'quality': quality_rolling,
                    'filtered': sato_result,
                    'filter': f'Sato',
                    'rolling_window': True,
                    'sigmas': sigmas
                })

        return results

    def get_best_network_detection_method(self):
        """
        Get the best network detection method based on quality metrics.

        This function applies Frangi and Sato variations, combines their results,
        calculates quality metrics for each result, and selects the best method.

        Attributes
        ----------
        all_results : list of dicts
            Combined results from Frangi and Sato variations.
        quality_metrics : ndarray of float64
            Quality metrics for each detection result.
        best_idx : int
            Index of the best detection method based on quality metrics.
        best_result : dict
            The best detection result from all possible methods.
        incomplete_network : ndarray of bool
            Binary representation of the best detection result.

        Examples
        --------
        >>> possibly_filled_pixels = np.zeros((9, 9), dtype=np.uint8)
        >>> possibly_filled_pixels[3:6, 3:6] = 1
        >>> possibly_filled_pixels[1:6, 3] = 1
        >>> possibly_filled_pixels[6:-1, 5] = 1
        >>> possibly_filled_pixels[4, 1:-1] = 1
        >>> greyscale_image = possibly_filled_pixels.copy()
        >>> greyscale_image[greyscale_image > 0] = np.random.randint(170, 255, possibly_filled_pixels.sum())
        >>> greyscale_image[greyscale_image == 0] = np.random.randint(0, 120, possibly_filled_pixels.size - possibly_filled_pixels.sum())
        >>> add_rolling_window=False
        >>> origin_to_add = np.zeros((9, 9), dtype=np.uint8)
        >>> origin_to_add[3:6, 3:6] = 1
        >>> NetDet = NetworkDetection(greyscale_image, possibly_filled_pixels, add_rolling_window, origin_to_add)
        >>> NetDet.get_best_network_detection_method()
        >>> print(NetDet.best_result['method'])
        >>> print(NetDet.best_result['binary'])
        >>> print(NetDet.best_result['quality'])
        >>> print(NetDet.best_result['filtered'])
        >>> print(NetDet.best_result['filter'])
        >>> print(NetDet.best_result['rolling_window'])
        >>> print(NetDet.best_result['sigmas'])
        """
        frangi_res = self.apply_frangi_variations()
        sato_res = self.apply_sato_variations()
        self.all_results = frangi_res + sato_res
        self.quality_metrics = np.array([result['quality'] for result in self.all_results])
        self.best_idx = np.argmax(self.quality_metrics)
        self.best_result = self.all_results[self.best_idx]
        self.incomplete_network = self.best_result['binary'] * self.possibly_filled_pixels
        self.incomplete_network = cv2.morphologyEx(self.incomplete_network, cv2.MORPH_CLOSE, kernel=self.kernel)


    def detect_network(self):
        """
        Process and detect network features in the greyscale image.

        This method applies a Frangi or Sato filter based on the best result and
        performs segmentation using either rolling window or Otsu's thresholding.
        The final network detection result is stored in `self.incomplete_network`.
        """
        if self.best_result['filter'] == 'Frangi':
            filtered_result = frangi(self.greyscale_image, sigmas=self.best_result['sigmas'], beta=self.frangi_beta, gamma=self.frangi_gamma, black_ridges=self.black_ridges)
        else:
            filtered_result = sato(self.greyscale_image, sigmas=self.best_result['sigmas'], black_ridges=self.black_ridges, mode='reflect')

        if self.best_result['rolling_window']:
            binary_image = rolling_window_segmentation(filtered_result, self.possibly_filled_pixels, patch_size=(10, 10))
        else:
            thresh_otsu = threshold_otsu(filtered_result)
            binary_image = filtered_result > thresh_otsu
        self.incomplete_network = binary_image * self.possibly_filled_pixels
        self.incomplete_network = cv2.morphologyEx(self.incomplete_network.astype(np.uint8), cv2.MORPH_CLOSE, kernel=self.kernel)

    def change_greyscale(self, img: NDArray[np.uint8], first_dict: dict):
        """
        Change the image to greyscale using color space combinations.

        This function converts an input image to greyscale by generating
        and applying a combination of color spaces specified in the dictionary.
        The resulting greyscale image is stored as an attribute of the instance.

        Parameters
        ----------
        img : ndarray of uint8
            The input image to be converted to greyscale.
        first_dict : dict
            Dictionary whose keys name the color spaces combined to produce the greyscale image.
        """
        self.greyscale_image, g2, all_c_spaces, first_pc_vector  = generate_color_space_combination(img, list(first_dict.keys()), first_dict)

    def detect_pseudopods(self, lighter_background: bool, pseudopod_min_size: int=50, only_one_connected_component: bool=True):
        """
        Detect pseudopods in a binary image.

        Identify and process regions that resemble pseudopods based on width, size,
        and connectivity criteria. This function is used to detect and label areas
        that are indicative of pseudopod-like structures within a binary image.

        Parameters
        ----------
        lighter_background : bool
            Boolean flag to indicate if the background should be considered lighter.
        pseudopod_min_size : int, optional
            Minimum size for pseudopods to be considered valid. Default is 50.
        only_one_connected_component : bool, optional
            Flag to ensure only one connected component is kept. Default is True.

        Returns
        -------
        None

        Notes
        -----
        This function modifies internal attributes of the object, specifically setting `self.pseudopods` to an array indicating pseudopod regions.

        Examples
        --------
        >>> NetDet.detect_pseudopods(lighter_background=True, pseudopod_min_size=50)
        >>> NetDet.pseudopods
        array([[0, 1, ..., 0],
               [0, 0, ..., 0],
               ...,
               [0, 1, ..., 0]], dtype=uint8)

        """
        if self.possibly_filled_pixels.all():
            dist_trans = self.possibly_filled_pixels
        else:
            closed_im = close_holes(self.possibly_filled_pixels)
            dist_trans = distance_transform_edt(closed_im)
            dist_trans = dist_trans.max() - dist_trans
        # Add dilatation of bracket of distances from medial_axis to the multiplication
        if lighter_background:
            grey = self.greyscale_image.max() - self.greyscale_image
        else:
            grey = self.greyscale_image
        if self.origin_to_add is not None:
            dist_trans_ori = distance_transform_edt(1 - self.origin_to_add)
            scored_im = dist_trans * dist_trans_ori * grey
        else:
            scored_im = (dist_trans**2) * grey
        scored_im = bracket_to_uint8_image_contrast(scored_im)
        thresh = threshold_otsu(scored_im)
        thresh = find_threshold_given_mask(scored_im, self.possibly_filled_pixels, min_threshold=thresh)
        high_int_in_periphery = (scored_im > thresh).astype(np.uint8) * self.possibly_filled_pixels

        _, pseudopod_widths = morphology.medial_axis(high_int_in_periphery, return_distance=True, rng=0)
        bin_im = pseudopod_widths >= self.edge_max_width
        dil_bin_im = cv2.dilate(bin_im.astype(np.uint8), kernel=create_ellipse(7, 7).astype(np.uint8), iterations=1)
        bin_im = high_int_in_periphery * dil_bin_im
        nb, shapes, stats, centro = cv2.connectedComponentsWithStats(bin_im)
        true_pseudopods = np.nonzero(stats[:, 4] > pseudopod_min_size)[0][1:]
        true_pseudopods = np.isin(shapes, true_pseudopods)

        # Make sure that the tubes connecting two pseudopods belong to pseudopods if removing pseudopods cuts the network
        complete_network = np.logical_or(true_pseudopods, self.incomplete_network).astype(np.uint8)
        complete_network = cv2.morphologyEx(complete_network.astype(np.uint8), cv2.MORPH_CLOSE, kernel=self.kernel)
        if only_one_connected_component:
            complete_network = keep_one_connected_component(complete_network)
            without_pseudopods = complete_network.copy()
            true_pseudopods = cv2.dilate(true_pseudopods.astype(np.uint8), kernel=self.kernel, iterations=2)
            without_pseudopods[true_pseudopods > 0] = 0
            only_connected_network = keep_one_connected_component(without_pseudopods)
            self.pseudopods = (1 - only_connected_network) * complete_network
            # Merge the connected network with pseudopods to get the complete network
            self.complete_network = np.logical_or(only_connected_network, self.pseudopods).astype(np.uint8)
        else:
            self.pseudopods = true_pseudopods.astype(np.uint8)
            self.complete_network = complete_network
        self.incomplete_network *= (1 - self.pseudopods)

__init__(greyscale_image, possibly_filled_pixels=None, add_rolling_window=False, origin_to_add=None, edge_max_width=5, best_result=None)

Initialize the object with given parameters.

Parameters:

Name                    Type            Description                               Default
greyscale_image         NDArray[uint8]  The input greyscale image.                required
possibly_filled_pixels  NDArray[uint8]  Image containing possibly filled pixels.  None
add_rolling_window      bool            Flag to add rolling window.               False
origin_to_add           NDArray[uint8]  Origin to add.                            None
edge_max_width          int             Maximal width of network edges.           5
best_result             dict            Best result dictionary.                   None
Source code in src/cellects/image_analysis/network_functions.py
def __init__(self, greyscale_image: NDArray[np.uint8], possibly_filled_pixels: NDArray[np.uint8]=None, add_rolling_window: bool=False, origin_to_add: NDArray[np.uint8]=None, edge_max_width: int=5, best_result: dict=None):
    """
    Initialize the object with given parameters.

    Parameters
    ----------
    greyscale_image : NDArray[np.uint8]
        The input greyscale image.
    possibly_filled_pixels : NDArray[np.uint8], optional
        Image containing possibly filled pixels. Defaults to None.
    add_rolling_window : bool, optional
        Flag to add rolling window. Defaults to False.
    origin_to_add : NDArray[np.uint8], optional
        Origin to add. Defaults to None.
    edge_max_width : int, optional
        Maximal width of network edges. Defaults to 5.
    best_result : dict, optional
        Best result dictionary. Defaults to None.
    """
    self.greyscale_image = greyscale_image
    if possibly_filled_pixels is None:
        self.possibly_filled_pixels = np.ones(self.greyscale_image.shape, dtype=np.uint8)
    else:
        self.possibly_filled_pixels = possibly_filled_pixels
    self.edge_max_width = edge_max_width
    k_size = edge_max_width // 2
    k_size = k_size - k_size % 2 + 1
    self.kernel = create_ellipse(k_size, k_size).astype(np.uint8)
    self.best_result = best_result
    self.add_rolling_window = add_rolling_window
    self.origin_to_add = origin_to_add
    self.frangi_beta = 1.
    self.frangi_gamma = 1.
    self.black_ridges = True
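The two `k_size` lines above map `edge_max_width` to a morphology kernel size: half the maximal edge width, forced to the nearest odd value (OpenCV structuring elements are conventionally odd-sized). A standalone sketch of that mapping:

```python
def odd_half_kernel_size(edge_max_width: int) -> int:
    """Half the maximal edge width, forced up to the nearest odd value (at least 1)."""
    k = edge_max_width // 2
    # Even k becomes k + 1; odd k is returned unchanged.
    return k - k % 2 + 1

# edge_max_width = 5 (the default) yields a 3x3 elliptical kernel
print(odd_half_kernel_size(5))  # 3
```

So the closing kernel scales with the expected edge width while always staying odd and non-zero.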

apply_frangi_variations()

Applies various Frangi filter variations with different sigma values and thresholding methods.

This method applies the Frangi vesselness filter with multiple sets of sigma values to detect vessels at different scales. It applies both Otsu thresholding and rolling window segmentation to the filtered results and calculates binary quality indices.

Returns:

Name     Type          Description
results  list of dict  A list of dictionaries with the method name, binary result, quality index, filtered image, filter type, rolling window flag, and sigma values used.

Source code in src/cellects/image_analysis/network_functions.py
def apply_frangi_variations(self) -> list:
    """
    Applies various Frangi filter variations with different sigma values and thresholding methods.

    This method applies the Frangi vesselness filter with multiple sets of sigma values
    to detect vessels at different scales. It applies both Otsu thresholding and rolling window
    segmentation to the filtered results and calculates binary quality indices.

    Returns
    -------
    results : list of dict
        A list containing dictionaries with the method name, binary result, quality index,
        filtered image, filter type, rolling window flag, and sigma values used.
    """
    results = []

    # Parameter variations for Frangi filter
    frangi_sigmas = {
        's_fine_vessels': [0.75],
        'fine_vessels': [0.5, 1.0],  # Very fine capillaries, thin fibers
        'small_vessels': [1.0, 2.0],  # Small vessels, fine structures
        'multi_scale_medium': [1.0, 2.0, 3.0],  # Standard multi-scale
        'ultra_fine': [0.3, 0.5, 0.8],  # Ultra-fine structures
        'comprehensive': [0.5, 1.0, 2.0, 4.0],  # Multi-scale
        'retinal_vessels': [1.0, 2.0, 4.0, 8.0],  # Optimized for retinal imaging
        'microscopy': [0.5, 1.0, 1.5, 2.5],  # Microscopy applications
        'broad_spectrum': [0.5, 1.5, 3.0, 6.0, 10.0]
    }

    for i, (key, sigmas) in enumerate(frangi_sigmas.items()):
        # Apply Frangi filter
        frangi_result = frangi(self.greyscale_image, sigmas=sigmas, beta=self.frangi_beta, gamma=self.frangi_gamma, black_ridges=self.black_ridges)

        # Apply both thresholding methods
        # Method 1: Otsu thresholding
        thresh_otsu = threshold_otsu(frangi_result)
        binary_otsu = frangi_result > thresh_otsu
        quality_otsu = binary_quality_index(self.possibly_filled_pixels * binary_otsu)

        # Store results
        results.append({
            'method': f'f_{sigmas}_thresh',
            'binary': binary_otsu,
            'quality': quality_otsu,
            'filtered': frangi_result,
            'filter': f'Frangi',
            'rolling_window': False,
            'sigmas': sigmas
        })
        # Method 2: Rolling window thresholding
        if self.add_rolling_window:
            binary_rolling = rolling_window_segmentation(frangi_result, self.possibly_filled_pixels, patch_size=(10, 10))
            quality_rolling = binary_quality_index(binary_rolling)
            results.append({
                'method': f'f_{sigmas}_roll',
                'binary': binary_rolling,
                'quality': quality_rolling,
                'filtered': frangi_result,
                'filter': f'Frangi',
                'rolling_window': True,
                'sigmas': sigmas
            })

    return results

apply_sato_variations()

Apply various Sato filter variations to an image and store the results.

This function applies different parameter sets for the Sato vesselness filter to an image, applies two thresholding methods (Otsu and rolling window), and stores the results. The function supports optional rolling window segmentation based on a configuration flag.

Returns:

Type          Description
list of dict  A list of dictionaries, one per filter variation: method name, binary image, quality index, filtered result, filter type, rolling window flag, and sigma values.

Source code in src/cellects/image_analysis/network_functions.py
def apply_sato_variations(self) -> list:
    """
    Apply various Sato filter variations to an image and store the results.

    This function applies different parameter sets for the Sato vesselness
    filter to an image, applies two thresholding methods (Otsu and rolling window),
    and stores the results. The function supports optional rolling window
    segmentation based on a configuration flag.

    Returns
    -------
    list of dict
        A list containing dictionaries with the results for each filter variation.
        Each dictionary includes method name, binary image, quality index,
        filtered result, filter type, rolling window flag, and sigma values.
    """
    results = []

    # Parameter variations for Sato filter
    sato_sigmas = {
        'super_small_tubes': [0.01, 0.05, 0.1, 0.15],  #
        'small_tubes': [0.1, 0.2, 0.4, 0.8],  #
        's_thick_ridges': [0.25, 0.75],  # Thick ridges/tubes
        'small_multi_scale': [0.1, 0.2, 0.4, 0.8, 1.6],  #
        'fine_ridges': [0.8, 1.5],  # Fine ridge detection
        'medium_ridges': [1.5, 3.0],  # Medium ridge structures
        'multi_scale_fine': [0.8, 1.5, 2.5],  # Multi-scale fine detection
        'multi_scale_standard': [1.0, 2.5, 5.0],  # Standard multi-scale
        'edge_enhanced': [0.5, 1.0, 2.0],  # Edge-enhanced detection
        'noise_robust': [1.5, 2.5, 4.0],  # Robust to noise
        'fingerprint': [1.0, 1.5, 2.0, 3.0],  # Fingerprint ridge detection
        'geological': [2.0, 5.0, 10.0, 15.0]  # Geological structures
    }

    for i, (key, sigmas) in enumerate(sato_sigmas.items()):
        # Apply Sato filter
        sato_result = sato(self.greyscale_image, sigmas=sigmas, black_ridges=self.black_ridges, mode='reflect')

        # Apply both thresholding methods
        # Method 1: Otsu thresholding
        thresh_otsu = threshold_otsu(sato_result)
        binary_otsu = sato_result > thresh_otsu
        quality_otsu = binary_quality_index(self.possibly_filled_pixels * binary_otsu)


        # Store results
        results.append({
            'method': f's_{sigmas}_thresh',
            'binary': binary_otsu,
            'quality': quality_otsu,
            'filtered': sato_result,
            'filter': f'Sato',
            'rolling_window': False,
            'sigmas': sigmas
        })

        # Method 2: Rolling window thresholding
        if self.add_rolling_window:
            binary_rolling = rolling_window_segmentation(sato_result, self.possibly_filled_pixels, patch_size=(10, 10))
            quality_rolling = binary_quality_index(binary_rolling)

            results.append({
                'method': f's_{sigmas}_roll',
                'binary': binary_rolling,
                'quality': quality_rolling,
                'filtered': sato_result,
                'filter': f'Sato',
                'rolling_window': True,
                'sigmas': sigmas
            })

    return results
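`get_best_network_detection_method` (below) concatenates the Frangi and Sato result lists and keeps the entry whose quality index is highest. The selection step reduces to an argmax over the `'quality'` fields; a minimal sketch with made-up quality values:

```python
import numpy as np

# Hypothetical entries, shaped like the dictionaries built by
# apply_frangi_variations / apply_sato_variations (values are invented)
all_results = [
    {'method': 'f_[0.5, 1.0]_thresh', 'quality': 0.62},
    {'method': 's_[0.8, 1.5]_thresh', 'quality': 0.71},
    {'method': 's_[1.5, 3.0]_roll',   'quality': 0.55},
]
quality_metrics = np.array([r['quality'] for r in all_results])
best_idx = int(np.argmax(quality_metrics))
best_result = all_results[best_idx]
print(best_result['method'])  # s_[0.8, 1.5]_thresh
```

Storing `best_result` (including its sigmas and thresholding flag) is what later lets `detect_network` re-run only the winning filter on new frames.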

change_greyscale(img, first_dict)

Change the image to greyscale using color space combinations.

This function converts an input image to greyscale by generating and applying a combination of color spaces specified in the dictionary. The resulting greyscale image is stored as an attribute of the instance.

Parameters:

Name        Type              Description                                                                Default
img         ndarray of uint8  The input image to be converted to greyscale.                              required
first_dict  dict              Dictionary whose keys name the color spaces combined into the greyscale.  required
Source code in src/cellects/image_analysis/network_functions.py
def change_greyscale(self, img: NDArray[np.uint8], first_dict: dict):
    """
    Change the image to greyscale using color space combinations.

    This function converts an input image to greyscale by generating
    and applying a combination of color spaces specified in the dictionary.
    The resulting greyscale image is stored as an attribute of the instance.

    Parameters
    ----------
    img : ndarray of uint8
        The input image to be converted to greyscale.
    first_dict : dict
        Dictionary whose keys name the color spaces combined to produce the greyscale image.
    """
    self.greyscale_image, g2, all_c_spaces, first_pc_vector  = generate_color_space_combination(img, list(first_dict.keys()), first_dict)

detect_network()

Process and detect network features in the greyscale image.

This method applies a Frangi or Sato filter based on the best result and performs segmentation using either rolling window or Otsu's thresholding. The final network detection result is stored in self.incomplete_network.

Source code in src/cellects/image_analysis/network_functions.py
def detect_network(self):
    """
    Process and detect network features in the greyscale image.

    This method applies a Frangi or Sato filter based on the best result and
    performs segmentation using either rolling window or Otsu's thresholding.
    The final network detection result is stored in `self.incomplete_network`.
    """
    if self.best_result['filter'] == 'Frangi':
        filtered_result = frangi(self.greyscale_image, sigmas=self.best_result['sigmas'], beta=self.frangi_beta, gamma=self.frangi_gamma, black_ridges=self.black_ridges)
    else:
        filtered_result = sato(self.greyscale_image, sigmas=self.best_result['sigmas'], black_ridges=self.black_ridges, mode='reflect')

    if self.best_result['rolling_window']:
        binary_image = rolling_window_segmentation(filtered_result, self.possibly_filled_pixels, patch_size=(10, 10))
    else:
        thresh_otsu = threshold_otsu(filtered_result)
        binary_image = filtered_result > thresh_otsu
    self.incomplete_network = binary_image * self.possibly_filled_pixels
    self.incomplete_network = cv2.morphologyEx(self.incomplete_network.astype(np.uint8), cv2.MORPH_CLOSE, kernel=self.kernel)

detect_pseudopods(lighter_background, pseudopod_min_size=50, only_one_connected_component=True)

Detect pseudopods in a binary image.

Identify and process regions that resemble pseudopods based on width, size, and connectivity criteria. This function is used to detect and label areas that are indicative of pseudopod-like structures within a binary image.

Parameters:

Name                          Type  Description                                                               Default
lighter_background            bool  Boolean flag to indicate if the background should be considered lighter.  required
pseudopod_min_size            int   Minimum size for pseudopods to be considered valid.                       50
only_one_connected_component  bool  Flag to ensure only one connected component is kept.                      True

Returns:

Type Description
None
Notes

This function modifies internal attributes of the object, specifically setting self.pseudopods to an array indicating pseudopod regions.

Examples:

>>> NetDet.detect_pseudopods(lighter_background=True, pseudopod_min_size=50)
>>> NetDet.pseudopods
array([[0, 1, ..., 0],
       [0, 0, ..., 0],
       ...,
       [0, 1, ..., 0]], dtype=uint8)
Source code in src/cellects/image_analysis/network_functions.py
def detect_pseudopods(self, lighter_background: bool, pseudopod_min_size: int=50, only_one_connected_component: bool=True):
    """
    Detect pseudopods in a binary image.

    Identify and process regions that resemble pseudopods based on width, size,
    and connectivity criteria. This function is used to detect and label areas
    that are indicative of pseudopod-like structures within a binary image.

    Parameters
    ----------
    lighter_background : bool
        Boolean flag to indicate if the background should be considered lighter.
    pseudopod_min_size : int, optional
        Minimum size for pseudopods to be considered valid. Default is 50.
    only_one_connected_component : bool, optional
        Flag to ensure only one connected component is kept. Default is True.

    Returns
    -------
    None

    Notes
    -----
    This function modifies internal attributes of the object, specifically setting `self.pseudopods` to an array indicating pseudopod regions.

    Examples
    --------
    >>> NetDet.detect_pseudopods(lighter_background=True, pseudopod_min_size=50)
    >>> NetDet.pseudopods
    array([[0, 1, ..., 0],
           [0, 0, ..., 0],
           ...,
           [0, 1, ..., 0]], dtype=uint8)

    """
    if self.possibly_filled_pixels.all():
        dist_trans = self.possibly_filled_pixels
    else:
        closed_im = close_holes(self.possibly_filled_pixels)
        dist_trans = distance_transform_edt(closed_im)
        dist_trans = dist_trans.max() - dist_trans
    # Add dilatation of bracket of distances from medial_axis to the multiplication
    if lighter_background:
        grey = self.greyscale_image.max() - self.greyscale_image
    else:
        grey = self.greyscale_image
    if self.origin_to_add is not None:
        dist_trans_ori = distance_transform_edt(1 - self.origin_to_add)
        scored_im = dist_trans * dist_trans_ori * grey
    else:
        scored_im = (dist_trans**2) * grey
    scored_im = bracket_to_uint8_image_contrast(scored_im)
    thresh = threshold_otsu(scored_im)
    thresh = find_threshold_given_mask(scored_im, self.possibly_filled_pixels, min_threshold=thresh)
    high_int_in_periphery = (scored_im > thresh).astype(np.uint8) * self.possibly_filled_pixels

    _, pseudopod_widths = morphology.medial_axis(high_int_in_periphery, return_distance=True, rng=0)
    bin_im = pseudopod_widths >= self.edge_max_width
    dil_bin_im = cv2.dilate(bin_im.astype(np.uint8), kernel=create_ellipse(7, 7).astype(np.uint8), iterations=1)
    bin_im = high_int_in_periphery * dil_bin_im
    nb, shapes, stats, centro = cv2.connectedComponentsWithStats(bin_im)
    true_pseudopods = np.nonzero(stats[:, 4] > pseudopod_min_size)[0][1:]
    true_pseudopods = np.isin(shapes, true_pseudopods)

    # Make sure that the tubes connecting two pseudopods belong to pseudopods if removing pseudopods cuts the network
    complete_network = np.logical_or(true_pseudopods, self.incomplete_network).astype(np.uint8)
    complete_network = cv2.morphologyEx(complete_network.astype(np.uint8), cv2.MORPH_CLOSE, kernel=self.kernel)
    if only_one_connected_component:
        complete_network = keep_one_connected_component(complete_network)
        without_pseudopods = complete_network.copy()
        true_pseudopods = cv2.dilate(true_pseudopods.astype(np.uint8), kernel=self.kernel, iterations=2)
        without_pseudopods[true_pseudopods > 0] = 0
        only_connected_network = keep_one_connected_component(without_pseudopods)
        self.pseudopods = (1 - only_connected_network) * complete_network
        # Merge the connected network with pseudopods to get the complete network
        self.complete_network = np.logical_or(only_connected_network, self.pseudopods).astype(np.uint8)
    else:
        self.pseudopods = true_pseudopods.astype(np.uint8)
        self.complete_network = complete_network
    self.incomplete_network *= (1 - self.pseudopods)
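`detect_pseudopods` inverts a Euclidean distance transform (`dist_trans.max() - dist_trans`) so that pixels near the shape boundary score highest before being weighted by greyscale intensity. A brute-force numpy stand-in for scipy's `distance_transform_edt`, shown only to illustrate that inversion (far too slow for real image sizes):

```python
import numpy as np

def brute_force_edt(mask: np.ndarray) -> np.ndarray:
    """Euclidean distance from each foreground pixel to the nearest background pixel."""
    dist = np.zeros(mask.shape, dtype=float)
    bg = np.argwhere(mask == 0)
    for r, c in np.argwhere(mask > 0):
        dist[r, c] = np.sqrt(((bg - (r, c)) ** 2).sum(axis=1)).min()
    return dist

mask = np.zeros((7, 7), dtype=np.uint8)
mask[1:6, 1:6] = 1                    # a filled 5x5 blob
dist = brute_force_edt(mask)
periphery_score = dist.max() - dist   # high near the boundary, low in the core
```

After this inversion, multiplying by intensity (and optionally by the distance to the origin) makes bright peripheral regions, the pseudopod candidates, stand out from the network core.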

get_best_network_detection_method()

Get the best network detection method based on quality metrics.

This function applies Frangi and Sato variations, combines their results, calculates quality metrics for each result, and selects the best method.

Attributes:

Name                Type                Description
all_results         list of dicts       Combined results from Frangi and Sato variations.
quality_metrics     ndarray of float64  Quality metrics for each detection result.
best_idx            int                 Index of the best detection method based on quality metrics.
best_result         dict                The best detection result from all possible methods.
incomplete_network  ndarray of bool     Binary representation of the best detection result.

Examples:

>>> possibly_filled_pixels = np.zeros((9, 9), dtype=np.uint8)
>>> possibly_filled_pixels[3:6, 3:6] = 1
>>> possibly_filled_pixels[1:6, 3] = 1
>>> possibly_filled_pixels[6:-1, 5] = 1
>>> possibly_filled_pixels[4, 1:-1] = 1
>>> greyscale_image = possibly_filled_pixels.copy()
>>> greyscale_image[greyscale_image > 0] = np.random.randint(170, 255, possibly_filled_pixels.sum())
>>> greyscale_image[greyscale_image == 0] = np.random.randint(0, 120, possibly_filled_pixels.size - possibly_filled_pixels.sum())
>>> add_rolling_window=False
>>> origin_to_add = np.zeros((9, 9), dtype=np.uint8)
>>> origin_to_add[3:6, 3:6] = 1
>>> NetDet = NetworkDetection(greyscale_image, possibly_filled_pixels, add_rolling_window, origin_to_add)
>>> NetDet.get_best_network_detection_method()
>>> print(NetDet.best_result['method'])
>>> print(NetDet.best_result['binary'])
>>> print(NetDet.best_result['quality'])
>>> print(NetDet.best_result['filtered'])
>>> print(NetDet.best_result['filter'])
>>> print(NetDet.best_result['rolling_window'])
>>> print(NetDet.best_result['sigmas'])
Source code in src/cellects/image_analysis/network_functions.py
def get_best_network_detection_method(self):
    """
    Get the best network detection method based on quality metrics.

    This function applies Frangi and Sato variations, combines their results,
    calculates quality metrics for each result, and selects the best method.

    Attributes
    ----------
    all_results : list of dicts
        Combined results from Frangi and Sato variations.
    quality_metrics : ndarray of float64
        Quality metrics for each detection result.
    best_idx : int
        Index of the best detection method based on quality metrics.
    best_result : dict
        The best detection result from all possible methods.
    incomplete_network : ndarray of bool
        Binary representation of the best detection result.

    Examples
    ----------
    >>> possibly_filled_pixels = np.zeros((9, 9), dtype=np.uint8)
    >>> possibly_filled_pixels[3:6, 3:6] = 1
    >>> possibly_filled_pixels[1:6, 3] = 1
    >>> possibly_filled_pixels[6:-1, 5] = 1
    >>> possibly_filled_pixels[4, 1:-1] = 1
    >>> greyscale_image = possibly_filled_pixels.copy()
    >>> greyscale_image[greyscale_image > 0] = np.random.randint(170, 255, possibly_filled_pixels.sum())
    >>> greyscale_image[greyscale_image == 0] = np.random.randint(0, 120, possibly_filled_pixels.size - possibly_filled_pixels.sum())
    >>> add_rolling_window=False
    >>> origin_to_add = np.zeros((9, 9), dtype=np.uint8)
    >>> origin_to_add[3:6, 3:6] = 1
    >>> NetDet = NetworkDetection(greyscale_image, possibly_filled_pixels, add_rolling_window, origin_to_add)
    >>> NetDet.get_best_network_detection_method()
    >>> print(NetDet.best_result['method'])
    >>> print(NetDet.best_result['binary'])
    >>> print(NetDet.best_result['quality'])
    >>> print(NetDet.best_result['filtered'])
    >>> print(NetDet.best_result['filter'])
    >>> print(NetDet.best_result['rolling_window'])
    >>> print(NetDet.best_result['sigmas'])
    """
    frangi_res = self.apply_frangi_variations()
    sato_res = self.apply_sato_variations()
    self.all_results = frangi_res + sato_res
    self.quality_metrics = np.array([result['quality'] for result in self.all_results])
    self.best_idx = np.argmax(self.quality_metrics)
    self.best_result = self.all_results[self.best_idx]
    self.incomplete_network = self.best_result['binary'] * self.possibly_filled_pixels
    self.incomplete_network = cv2.morphologyEx(self.incomplete_network, cv2.MORPH_CLOSE, kernel=self.kernel)
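The selection step reduces to an `argmax` over the per-method quality scores. A minimal sketch with hypothetical result dicts (the method names and scores below are made up; only the `'quality'` key is assumed, as in the code above):

```python
import numpy as np

# Hypothetical detection results, one dict per filter/parameter combination,
# as apply_frangi_variations / apply_sato_variations would return them.
all_results = [
    {'method': 'frangi_otsu',  'quality': 0.42},
    {'method': 'sato_otsu',    'quality': 0.71},
    {'method': 'sato_rolling', 'quality': 0.55},
]

quality_metrics = np.array([r['quality'] for r in all_results])
best_idx = int(np.argmax(quality_metrics))  # first maximum wins on ties
best_result = all_results[best_idx]
```

On ties, `np.argmax` returns the first maximum, so Frangi variations (listed first) would win over Sato variations with equal quality.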

PickleRick

A class to handle safe file reading and writing operations using pickle.

This class ensures that files are not being accessed concurrently by creating a lock file (PickleRickX.pkl) to signal that the file is open. It includes methods to check for the lock file, write data safely, and read data safely.

Source code in src/cellects/utils/load_display_save.py
class PickleRick:
    """
    A class to handle safe file reading and writing operations using pickle.

    This class ensures that files are not being accessed concurrently by
    creating a lock file (PickleRickX.pkl) to signal that the file is open.
    It includes methods to check for the lock file, write data safely,
    and read data safely.
    """
    def __init__(self, pickle_rick_number=""):
        """
        Initialize a new instance of the class.

        This constructor sets up initial attributes for tracking Rick's state, including
        a boolean flag for waiting for Pickle Rick, a counter, the provided pickle Rick number,
        and the time when the first check was performed.

        Parameters
        ----------
        pickle_rick_number : str, optional
            The number associated with Pickle Rick. Defaults to an empty string.
        """
        self.wait_for_pickle_rick: bool = False
        self.counter = 0
        self.pickle_rick_number = pickle_rick_number
        self.first_check_time = default_timer()

    def _check_that_file_is_not_open(self):
        """
        Check if a specific pickle file exists and handle it accordingly.

        This function checks whether a file named `PickleRick{self.pickle_rick_number}.pkl`
        exists. If the file has not been modified for more than 2 seconds, it is removed.
        The function then updates an attribute to indicate whether the file exists.

        Parameters
        ----------
        self : PickleRickObject
            The instance of the class containing this method.

        Returns
        -------
        None
            This function does not return any value.
            It updates the `self.wait_for_pickle_rick` attribute.

        Notes
        -----
        This function removes the pickle file if it has not been modified for more than 2 seconds.
        The `self.wait_for_pickle_rick` attribute is updated based on the existence of the file.
        """
        if os.path.isfile(f"PickleRick{self.pickle_rick_number}.pkl"):
            if default_timer() - self.first_check_time > 2:
                os.remove(f"PickleRick{self.pickle_rick_number}.pkl")
            # logging.error((f"Cannot read/write, Trying again... tip: unlock by deleting the file named PickleRick{self.pickle_rick_number}.pkl"))
        self.wait_for_pickle_rick = os.path.isfile(f"PickleRick{self.pickle_rick_number}.pkl")

    def _write_pickle_rick(self):
        """
        Write pickle data to a file for Pickle Rick.

        Parameters
        ----------
        self : object
            The instance of the class that this method belongs to.
            This typically contains attributes and methods relevant to managing
            pickle operations for Pickle Rick.

        Raises
        ------
        Exception
            General exception raised if there is any issue with writing the file.
            The error details are logged.

        Notes
        -----
        This function creates a file named `PickleRick{self.pickle_rick_number}.pkl`
        with a dictionary indicating readiness for Pickle Rick.

        Examples
        --------
        >>> obj = PickleRick()  # Assuming `YourClassInstance` is the class containing this method
        >>> obj.pickle_rick_number = 1  # Set an example value for the attribute
        >>> obj._write_pickle_rick()     # Call the method to create and write to file
        """
        try:
            with open(f"PickleRick{self.pickle_rick_number}.pkl", 'wb') as file_to_write:
                pickle.dump({'wait_for_pickle_rick': True}, file_to_write)
        except Exception as exc:
            logging.error(f"Don't know how but Pickle Rick failed... Error is: {exc}")

    def _delete_pickle_rick(self):
        """

        Delete a specific Pickle Rick file.

        Deletes the pickle file associated with the current instance's
        `pickle_rick_number`.

        Raises
        ------
        FileNotFoundError
            If the file with name `PickleRick{self.pickle_rick_number}.pkl` does not exist.
        """
        if os.path.isfile(f"PickleRick{self.pickle_rick_number}.pkl"):
            os.remove(f"PickleRick{self.pickle_rick_number}.pkl")

    def write_file(self, file_content, file_name):
        """
        Write content to a file with error handling and retry logic.

        This function attempts to write the provided content into a file.
        If it fails, it retries up to 100 times with some additional checks
        and delays. Note that the content is serialized using pickle.

        Parameters
        ----------
        file_content : Any
            The data to be written into the file. This will be pickled.
        file_name : str
            The name of the file where data should be written.

        Returns
        -------
        None

        Raises
        ------
        Exception
            If the file cannot be written after 100 attempts, an error is logged.

        Notes
        -----
        This function uses pickle to serialize the data, which can introduce security risks
        if untrusted content is being written. It performs some internal state checks,
        such as verifying that the target file isn't open and whether it should delete
        some internal state, represented by `_delete_pickle_rick`.

        The retry mechanism sleeps for 2 seconds between attempts while the lock
        file is present; a randomized backoff exists in the source but is commented out.

        Examples
        --------
        >>> result = PickleRick().write_file({'key': 'value'}, 'test.pkl')
        Success to write file
        """
        self.counter += 1
        if self.counter < 100:
            if self.counter > 95:
                self._delete_pickle_rick()
            # time.sleep(np.random.choice(np.arange(1, os.cpu_count(), 0.5)))
            self._check_that_file_is_not_open()
            if self.wait_for_pickle_rick:
                time.sleep(2)
                self.write_file(file_content, file_name)
            else:
                self._write_pickle_rick()
                try:
                    with open(file_name, 'wb') as file_to_write:
                        pickle.dump(file_content, file_to_write, protocol=0)
                    self._delete_pickle_rick()
                    logging.info(f"Success to write file")
                except Exception as exc:
                    logging.error(f"The Pickle error on the file {file_name} is: {exc}")
                    self._delete_pickle_rick()
                    self.write_file(file_content, file_name)
        else:
            logging.error(f"Failed to write {file_name}")

    def read_file(self, file_name):
        """
        Reads the contents of a file using pickle and returns it.

        Parameters
        ----------
        file_name : str
            The name of the file to be read.

        Returns
        -------
        Union[Any, None]
            The content of the file if successfully read; otherwise, `None`.

        Raises
        ------
        Exception
            If there is an error reading the file.

        Notes
        -----
        This function attempts to read a file multiple times if it fails.
        If the number of attempts exceeds 1000, it logs an error and returns `None`.

        Examples
        --------
        >>> PickleRick().read_file("example.pkl")
        """
        self.counter += 1
        if self.counter < 1000:
            if self.counter > 950:
                self._delete_pickle_rick()
            self._check_that_file_is_not_open()
            if self.wait_for_pickle_rick:
                time.sleep(2)
                return self.read_file(file_name)
            else:
                self._write_pickle_rick()
                try:
                    with open(file_name, 'rb') as fileopen:
                        file_content = pickle.load(fileopen)
                except Exception as exc:
                    logging.error(f"The Pickle error on the file {file_name} is: {exc}")
                    file_content = None
                self._delete_pickle_rick()
                if file_content is None:
                    return self.read_file(file_name)
                else:
                    logging.info(f"Success to read file")
                return file_content
        else:
            logging.error(f"Failed to read {file_name}")
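The lock-file protocol can be shown end to end without the retry logic: a sidecar file named `PickleRick<N>.pkl` marks the target as in use; writers create it before writing and delete it afterwards. A minimal, self-contained sketch of that round trip (file names follow the class's convention; the 2-second timeout and recursion are omitted):

```python
import os
import pickle
import tempfile

workdir = tempfile.mkdtemp()
lock_path = os.path.join(workdir, "PickleRick1.pkl")
data_path = os.path.join(workdir, "data.pkl")

# Acquire: create the lock file, signalling that data.pkl is being written.
with open(lock_path, 'wb') as f:
    pickle.dump({'wait_for_pickle_rick': True}, f)

# Write the payload while the lock exists (protocol=0, as in write_file).
with open(data_path, 'wb') as f:
    pickle.dump({'key': 'value'}, f, protocol=0)

# Release: remove the lock file so other readers/writers may proceed.
os.remove(lock_path)

with open(data_path, 'rb') as f:
    restored = pickle.load(f)
```

Note that this scheme is advisory only: nothing prevents a process that ignores the lock file from touching `data.pkl`, which is why the class also removes stale locks older than 2 seconds.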

__init__(pickle_rick_number='')

Initialize a new instance of the class.

This constructor sets up initial attributes for tracking Rick's state, including a boolean flag for waiting for Pickle Rick, a counter, the provided pickle Rick number, and the time when the first check was performed.

Parameters:

Name Type Description Default
pickle_rick_number str

The number associated with Pickle Rick. Defaults to an empty string.

''
Source code in src/cellects/utils/load_display_save.py
def __init__(self, pickle_rick_number=""):
    """
    Initialize a new instance of the class.

    This constructor sets up initial attributes for tracking Rick's state, including
    a boolean flag for waiting for Pickle Rick, a counter, the provided pickle Rick number,
    and the time when the first check was performed.

    Parameters
    ----------
    pickle_rick_number : str, optional
        The number associated with Pickle Rick. Defaults to an empty string.
    """
    self.wait_for_pickle_rick: bool = False
    self.counter = 0
    self.pickle_rick_number = pickle_rick_number
    self.first_check_time = default_timer()

read_file(file_name)

Reads the contents of a file using pickle and returns it.

Parameters:

Name Type Description Default
file_name str

The name of the file to be read.

required

Returns:

Type Description
Union[Any, None]

The content of the file if successfully read; otherwise, None.

Raises:

Type Description
Exception

If there is an error reading the file.

Notes

This function attempts to read a file multiple times if it fails. If the number of attempts exceeds 1000, it logs an error and returns None.

Examples:

>>> PickleRick().read_file("example.pkl")
Source code in src/cellects/utils/load_display_save.py
def read_file(self, file_name):
    """
    Reads the contents of a file using pickle and returns it.

    Parameters
    ----------
    file_name : str
        The name of the file to be read.

    Returns
    -------
    Union[Any, None]
        The content of the file if successfully read; otherwise, `None`.

    Raises
    ------
    Exception
        If there is an error reading the file.

    Notes
    -----
    This function attempts to read a file multiple times if it fails.
    If the number of attempts exceeds 1000, it logs an error and returns `None`.

    Examples
    --------
    >>> PickleRick().read_file("example.pkl")
    """
    self.counter += 1
    if self.counter < 1000:
        if self.counter > 950:
            self._delete_pickle_rick()
        self._check_that_file_is_not_open()
        if self.wait_for_pickle_rick:
            time.sleep(2)
            return self.read_file(file_name)
        else:
            self._write_pickle_rick()
            try:
                with open(file_name, 'rb') as fileopen:
                    file_content = pickle.load(fileopen)
            except Exception as exc:
                logging.error(f"The Pickle error on the file {file_name} is: {exc}")
                file_content = None
            self._delete_pickle_rick()
            if file_content is None:
                return self.read_file(file_name)
            else:
                logging.info(f"Success to read file")
            return file_content
    else:
        logging.error(f"Failed to read {file_name}")

write_file(file_content, file_name)

Write content to a file with error handling and retry logic.

This function attempts to write the provided content into a file. If it fails, it retries up to 100 times with some additional checks and delays. Note that the content is serialized using pickle.

Parameters:

Name Type Description Default
file_content Any

The data to be written into the file. This will be pickled.

required
file_name str

The name of the file where data should be written.

required

Returns:

Type Description
None

Raises:

Type Description
Exception

If the file cannot be written after 100 attempts, an error is logged.

Notes

This function uses pickle to serialize the data, which can introduce security risks if untrusted content is being written. It performs some internal state checks, such as verifying that the target file isn't open and whether it should delete some internal state, represented by _delete_pickle_rick.

The retry mechanism sleeps for 2 seconds between attempts while the lock file is present; a randomized backoff exists in the source but is commented out.

Examples:

>>> result = PickleRick().write_file({'key': 'value'}, 'test.pkl')
Success to write file
Source code in src/cellects/utils/load_display_save.py
def write_file(self, file_content, file_name):
    """
    Write content to a file with error handling and retry logic.

    This function attempts to write the provided content into a file.
    If it fails, it retries up to 100 times with some additional checks
    and delays. Note that the content is serialized using pickle.

    Parameters
    ----------
    file_content : Any
        The data to be written into the file. This will be pickled.
    file_name : str
        The name of the file where data should be written.

    Returns
    -------
    None

    Raises
    ------
    Exception
        If the file cannot be written after 100 attempts, an error is logged.

    Notes
    -----
    This function uses pickle to serialize the data, which can introduce security risks
    if untrusted content is being written. It performs some internal state checks,
    such as verifying that the target file isn't open and whether it should delete
    some internal state, represented by `_delete_pickle_rick`.

    The retry mechanism sleeps for 2 seconds between attempts while the lock
    file is present; a randomized backoff exists in the source but is commented out.

    Examples
    --------
    >>> result = PickleRick().write_file({'key': 'value'}, 'test.pkl')
    Success to write file
    """
    self.counter += 1
    if self.counter < 100:
        if self.counter > 95:
            self._delete_pickle_rick()
        # time.sleep(np.random.choice(np.arange(1, os.cpu_count(), 0.5)))
        self._check_that_file_is_not_open()
        if self.wait_for_pickle_rick:
            time.sleep(2)
            self.write_file(file_content, file_name)
        else:
            self._write_pickle_rick()
            try:
                with open(file_name, 'wb') as file_to_write:
                    pickle.dump(file_content, file_to_write, protocol=0)
                self._delete_pickle_rick()
                logging.info(f"Success to write file")
            except Exception as exc:
                logging.error(f"The Pickle error on the file {file_name} is: {exc}")
                self._delete_pickle_rick()
                self.write_file(file_content, file_name)
    else:
        logging.error(f"Failed to write {file_name}")

ad_pad(arr)

Pad the input array with a single layer of zeros around its edges.

Parameters:

Name Type Description Default
arr ndarray

The input array to pad. Must be at least 2-dimensional.

required

Returns:

Name Type Description
padded_arr ndarray

The output array with a single 0-padded layer around its edges.

Notes

This function uses NumPy's pad with mode='constant' to add a single layer of zeros around the edges of the input array.

Examples:

>>> arr = np.array([[1, 2], [3, 4]])
>>> ad_pad(arr)
array([[0, 0, 0, 0],
       [0, 1, 2, 0],
       [0, 3, 4, 0],
       [0, 0, 0, 0]])
Source code in src/cellects/image_analysis/network_functions.py
def ad_pad(arr: NDArray) -> NDArray:
    """
    Pad the input array with a single layer of zeros around its edges.

    Parameters
    ----------
    arr : ndarray
        The input array to pad. Must be at least 2-dimensional.

    Returns
    -------
    padded_arr : ndarray
        The output array with a single 0-padded layer around its edges.

    Notes
    -----
    This function uses NumPy's `pad` with mode='constant' to add a single layer
    of zeros around the edges of the input array.

    Examples
    --------
    >>> arr = np.array([[1, 2], [3, 4]])
    >>> ad_pad(arr)
    array([[0, 0, 0, 0],
           [0, 1, 2, 0],
           [0, 3, 4, 0],
           [0, 0, 0, 0]])
    """
    return np.pad(arr, [(1, ), (1, )], mode='constant')

add_padding(array_list)

Add padding to each 2D array in a list.

Parameters:

Name Type Description Default
array_list list of ndarrays

List of 2D NumPy arrays to be processed.

required

Returns:

Name Type Description
out list of ndarrays

List of 2D NumPy arrays, each padded with a single layer of zeros.

Examples:

>>> array_list = [np.array([[1, 2], [3, 4]])]
>>> padded_list = add_padding(array_list)
>>> print(padded_list[0])
[[0 0 0 0]
 [0 1 2 0]
 [0 3 4 0]
 [0 0 0 0]]
Source code in src/cellects/image_analysis/network_functions.py
def add_padding(array_list: list) -> list:
    """
    Add padding to each 2D array in a list.

    Parameters
    ----------
    array_list : list of ndarrays
        List of 2D NumPy arrays to be processed.

    Returns
    -------
    out : list of ndarrays
        List of 2D NumPy arrays, each padded with a single layer of zeros.

    Examples
    --------
    >>> array_list = [np.array([[1, 2], [3, 4]])]
    >>> padded_list = add_padding(array_list)
    >>> print(padded_list[0])
    [[0 0 0 0]
     [0 1 2 0]
     [0 3 4 0]
     [0 0 0 0]]
    """
    new_array_list = []
    for arr in array_list:
        new_array_list.append(ad_pad(arr))
    return new_array_list
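The `[(1, ), (1, )]` pad width used by `ad_pad` is NumPy shorthand: a one-element tuple per axis broadcasts to "pad 1 before and 1 after" on that axis, which for 2D arrays is equivalent to the scalar form `pad_width=1`. A quick sketch of the equivalence:

```python
import numpy as np

arr = np.array([[1, 2], [3, 4]])

# Both calls add one zero on every side of both axes.
padded = np.pad(arr, [(1,), (1,)], mode='constant')
same = np.pad(arr, 1, mode='constant')
```

Either form grows a (2, 2) array to (4, 4), with the original values preserved in the interior.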

bracket_to_uint8_image_contrast(image)

Stretch an image's contrast to the full [0, 255] range and convert it to uint8.

This function normalizes an input image by scaling the minimum and maximum values of the image to the range [0, 255] and then converts it to uint8 data type.

Parameters:

Name Type Description Default
image ndarray

Input image as a numpy array with floating-point values.

required

Returns:

Type Description
ndarray of uint8

Output image converted to uint8 type after normalization.

Examples:

>>> image = np.random.randint(0, 255, (10, 10), dtype=np.uint8)
>>> res = bracket_to_uint8_image_contrast(image)
>>> print(res)
>>> image = np.zeros((10, 10), dtype=np.uint8)
>>> res = bracket_to_uint8_image_contrast(image)
>>> print(res)
Source code in src/cellects/utils/formulas.py
@njit()
def bracket_to_uint8_image_contrast(image: NDArray):
    """
    Stretch an image's contrast to the full [0, 255] range and convert it to uint8.

    This function normalizes an input image by scaling the minimum and maximum
    values of the image to the range [0, 255] and then converts it to uint8
    data type.

    Parameters
    ----------
    image : ndarray
        Input image as a numpy array with floating-point values.

    Returns
    -------
    ndarray of uint8
        Output image converted to uint8 type after normalization.

    Examples
    --------
    >>> image = np.random.randint(0, 255, (10, 10), dtype=np.uint8)
    >>> res = bracket_to_uint8_image_contrast(image)
    >>> print(res)

    >>> image = np.zeros((10, 10), dtype=np.uint8)
    >>> res = bracket_to_uint8_image_contrast(image)
    >>> print(res)
    """
    image -= image.min()
    if image.max() == 0:
        return np.zeros_like(image, dtype=np.uint8)
    else:
        return to_uint8(255 * (image / np.max(image)))
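The normalization above is a standard min-max contrast stretch: subtract the minimum, then scale the maximum to 255, with a guard for flat images. A plain-NumPy sketch of the same logic (without the `@njit` decorator or the `to_uint8` helper; unlike the original, which modifies its input in place via `image -= image.min()`, this version copies):

```python
import numpy as np

def uint8_contrast(image: np.ndarray) -> np.ndarray:
    # Shift the minimum to 0 (astype copies, so the caller's array is untouched).
    image = image.astype(np.float64) - image.min()
    if image.max() == 0:
        # Flat image: avoid division by zero, return all zeros.
        return np.zeros_like(image, dtype=np.uint8)
    # Scale the maximum to 255 and truncate to uint8.
    return (255 * image / image.max()).astype(np.uint8)
```

For example, `[[10, 20], [30, 40]]` maps to `[[0, 85], [170, 255]]`.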

create_empty_videos(image_list, cr, lose_accuracy_to_save_memory, already_greyscale, csc_dict)

Create empty video arrays based on input parameters.

Parameters:

Name Type Description Default
image_list list

List of images.

required
cr list

Crop region defined as [row_start, row_end, col_start, col_end] (the first pair bounds the rows, the second the columns, matching the cr[1] - cr[0] / cr[3] - cr[2] indexing in the function body).

required
lose_accuracy_to_save_memory bool

Boolean flag to determine if memory should be saved by using uint8 data type.

required
already_greyscale bool

Boolean flag indicating if the images are already in greyscale format.

required
csc_dict dict

Dictionary containing color space conversion settings, including 'logical' key.

required

Returns:

Type Description
tuple

A tuple containing three elements:

- visu: NumPy array with shape (len(image_list), cr[1] - cr[0], cr[3] - cr[2], 3) and dtype uint8 for RGB images; None when already_greyscale is True.
- converted_video: NumPy array with shape (len(image_list), cr[1] - cr[0], cr[3] - cr[2]) and dtype uint8 or float according to lose_accuracy_to_save_memory.
- converted_video2: NumPy array with the same shape and dtype as converted_video; None unless a second color space conversion is configured in csc_dict.

Notes

Performance considerations:

- If lose_accuracy_to_save_memory is True, the function uses np.uint8 for memory efficiency.
- If already_greyscale is False, additional arrays are created to store RGB data.

Source code in src/cellects/utils/load_display_save.py
def create_empty_videos(image_list: list, cr: list, lose_accuracy_to_save_memory: bool,
                        already_greyscale: bool, csc_dict: dict):
    """

    Create empty video arrays based on input parameters.

    Parameters
    ----------
    image_list : list
        List of images.
    cr : list
        Crop region defined as [row_start, row_end, col_start, col_end]
        (the first pair bounds the rows, the second the columns).
    lose_accuracy_to_save_memory : bool
        Boolean flag to determine if memory should be saved by using uint8 data type.
    already_greyscale : bool
        Boolean flag indicating if the images are already in greyscale format.
    csc_dict : dict
        Dictionary containing color space conversion settings, including 'logical' key.

    Returns
    -------
    tuple
        A tuple containing three elements:
            - `visu`: NumPy array with shape (len(image_list), cr[1] - cr[0], cr[3] - cr[2], 3) and dtype uint8 for RGB images.
            - `converted_video`: NumPy array with shape (len(image_list), cr[1] - cr[0], cr[3] - cr[2]) and dtype uint8 or float according to `lose_accuracy_to_save_memory`.
            - `converted_video2`: NumPy array with the same shape as `converted_video` and dtype uint8 or float according to `lose_accuracy_to_save_memory`.

    Notes
    -----
    Performance considerations:
        - If `lose_accuracy_to_save_memory` is True, the function uses np.uint8 for memory efficiency.
        - If `already_greyscale` is False, additional arrays are created to store RGB data.
    """
    visu, converted_video, converted_video2 = None, None, None
    dims = len(image_list), cr[1] - cr[0], cr[3] - cr[2]
    if lose_accuracy_to_save_memory:
        converted_video = np.zeros(dims, dtype=np.uint8)
    else:
        converted_video = np.zeros(dims, dtype=float)
    if not already_greyscale:
        visu = np.zeros((dims[0], dims[1], dims[2], 3), dtype=np.uint8)
        if csc_dict['logical'] != 'None':
            if lose_accuracy_to_save_memory:
                converted_video2 = np.zeros(dims, dtype=np.uint8)
            else:
                converted_video2 = np.zeros(dims, dtype=float)
    return visu, converted_video, converted_video2
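The allocated shapes follow directly from the crop region. A small sketch for a hypothetical 5-frame sequence cropped to rows 10..110 and columns 20..120, reproducing the `dims` computation from the function body:

```python
import numpy as np

# Hypothetical inputs: 5 frames, cr = [row_start, row_end, col_start, col_end].
image_list = list(range(5))
cr = [10, 110, 20, 120]

# Same dims computation as in create_empty_videos.
dims = (len(image_list), cr[1] - cr[0], cr[3] - cr[2])
converted_video = np.zeros(dims, dtype=np.uint8)  # lose_accuracy_to_save_memory=True
visu = np.zeros(dims + (3,), dtype=np.uint8)      # RGB overlay when not already greyscale
```

This yields a (5, 100, 100) greyscale video and a (5, 100, 100, 3) visualization array.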

detect_first_move(size_dynamics, growth_threshold)

Detects the first move in a time series where the value exceeds the initial value by a given threshold.

Parameters:

Name Type Description Default
size_dynamics ndarray

The time series data of dynamics.

required
growth_threshold int or float

The threshold value for detecting the first move.

required

Returns:

Type Description
int or pandas.NA

The index of the first move where the condition is met. Returns pandas.NA if no such index exists.

Examples:

>>> size_dynamics = np.array([10, 12, 15, 18])
>>> growth_threshold = 5
>>> detect_first_move(size_dynamics, growth_threshold)
2
Source code in src/cellects/utils/formulas.py
def detect_first_move(size_dynamics: NDArray, growth_threshold)-> int:
    """
    Detects the first move in a time series where the value exceeds the initial value by a given threshold.

    Parameters
    ----------
    size_dynamics : numpy.ndarray
        The time series data of dynamics.
    growth_threshold: int or float
        The threshold value for detecting the first move.

    Returns
    -------
    int or pandas.NA
        The index of the first move where the condition is met.
        Returns `pandas.NA` if no such index exists.

    Examples
    --------
    >>> size_dynamics = np.array([10, 12, 15, 18])
    >>> growth_threshold = 5
    >>> detect_first_move(size_dynamics, growth_threshold)
    2
    """
    first_move = pd.NA
    thresh_reached = np.nonzero(size_dynamics >= (size_dynamics[0] + growth_threshold))[0]
    if len(thresh_reached) > 0:
        first_move = thresh_reached[0]
    return first_move
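The detection logic is a single threshold crossing: find the first index where the series exceeds its initial value by the threshold, falling back to `pandas.NA` when it never does. A standalone sketch of the same logic, including the no-crossing branch that the docstring example does not exercise:

```python
import numpy as np
import pandas as pd

def first_move(size_dynamics, growth_threshold):
    # Indices where the series reaches initial value + threshold.
    hits = np.nonzero(size_dynamics >= size_dynamics[0] + growth_threshold)[0]
    # First crossing, or pandas.NA when the threshold is never reached.
    return hits[0] if len(hits) > 0 else pd.NA
```

With `[10, 12, 15, 18]` and a threshold of 5, the first value reaching 15 sits at index 2; with `[10, 11, 12]` the result is `pandas.NA`.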

detect_network_dynamics(converted_video, binary, arena_label=1, starting_time=0, visu=None, origin=None, smooth_segmentation_over_time=True, edge_max_width=5, detect_pseudopods=True, save_coord_network=True, show_seg=False)

Detects and tracks dynamic features (e.g., pseudopods) in a biological network over time from video data.

Analyzes spatiotemporal dynamics of a network structure using binary masks and grayscale video data. Processes each frame to detect network components, optionally identifies pseudopods, applies temporal smoothing, and generates visualization overlays. Saves coordinate data for detected networks if enabled.

Parameters:

Name Type Description Default
converted_video NDArray

Input video data array with shape (time, y, x) representing grayscale intensities.

required
binary NDArray[uint8]

Binary mask array with shape (time, y, x) indicating filled regions in each frame.

required
arena_label int

Unique identifier for the current processing arena/session to name saved output files.

1
starting_time int

Zero-based index of the first frame to begin network detection and analysis from.

0
visu NDArray

Visualization video array with shape (time, y, x, 3) holding RGB channels for overlay rendering.

None
origin NDArray[uint8]

Binary mask defining a central region of interest to exclude from network detection.

None
smooth_segmentation_over_time (bool, optional(default=True))

Flag indicating whether to apply temporal smoothing using adjacent frame data.

True
edge_max_width int

Maximal width of network edges. Defaults to 5.

5
detect_pseudopods (bool, optional(default=True))

Determines if pseudopod regions should be detected and merged with the network.

True
save_coord_network (bool, optional(default=True))

Controls saving of detected network/pseudopod coordinates as NumPy arrays.

True
show_seg (bool, optional(default=False))

Enables real-time visualization display during processing.

False

Returns:

Type Description
NDArray[uint8]

3D array containing detected network structures with shape (time, y, x). Values:

- 0 for background,
- 1 for regular network components,
- 2 for pseudopod regions when detect_pseudopods is True.

Notes
  • Memory-intensive operations on large arrays may require system resources.
  • Temporal smoothing effectiveness depends on network dynamics consistency between frames.
  • Pseudopod detection requires sufficient contrast with the background in grayscale images.
Source code in src/cellects/image_analysis/network_functions.py
def detect_network_dynamics(converted_video: NDArray, binary: NDArray[np.uint8], arena_label: int=1,
                            starting_time: int=0, visu: NDArray=None, origin: NDArray[np.uint8]=None,
                            smooth_segmentation_over_time: bool = True, edge_max_width:int = 5,
                            detect_pseudopods: bool = True, save_coord_network: bool = True, show_seg: bool = False):
    """
    Detects and tracks dynamic features (e.g., pseudopods) in a biological network over time from video data.

    Analyzes spatiotemporal dynamics of a network structure using binary masks and grayscale video data. Processes each frame to detect network components, optionally identifies pseudopods, applies temporal smoothing, and generates visualization overlays. Saves coordinate data for detected networks if enabled.

    Parameters
    ----------
    converted_video : NDArray
        Input video data array with shape (t, y, x) representing grayscale intensities.
    binary : NDArray[np.uint8]
        Binary mask array with shape (t, y, x) indicating filled regions in each frame.
    arena_label : int
        Unique identifier for the current processing arena/session to name saved output files.
    starting_time : int
        Zero-based index of the first frame to begin network detection and analysis from.
    visu : NDArray
        Visualization video array (t, y, x, 3) with RGB channels for overlay rendering.
    origin : NDArray[np.uint8]
        Binary mask defining a central region of interest to exclude from network detection.
    smooth_segmentation_over_time : bool, optional (default=True)
        Flag indicating whether to apply temporal smoothing using adjacent frame data.
    edge_max_width : int, optional
        Maximal width of network edges. Defaults to 5.
    detect_pseudopods : bool, optional (default=True)
        Determines if pseudopod regions should be detected and merged with the network.
    save_coord_network : bool, optional (default=True)
        Controls saving of detected network/pseudopod coordinates as NumPy arrays.
    show_seg : bool, optional (default=False)
        Enables real-time visualization display during processing.

    Returns
    -------
    network_coord : NDArray
        Compact unsigned-integer array of (t, y, x) coordinates of the detected network pixels.
    pseudopod_coord : NDArray or None
        Coordinates of pseudopod pixels when `detect_pseudopods` is True, otherwise None.

    Notes
    -----
    - Memory-intensive operations on large arrays may require system resources.
    - Temporal smoothing effectiveness depends on network dynamics consistency between frames.
    - Pseudopod detection requires sufficient contrast with the background in grayscale images.
    """
    logging.info(f"Arena n°{arena_label}. Starting network detection.")
    # converted_video = self.converted_video; binary=self.binary; arena_label=1; starting_time=0; visu=self.visu; origin=None; smooth_segmentation_over_time=True; detect_pseudopods=True;save_coord_network=True; show_seg=False
    dims = binary.shape
    pseudopod_min_size = 50
    if detect_pseudopods:
        pseudopod_vid = np.zeros_like(binary, dtype=bool)
    potential_network = np.zeros_like(binary, dtype=bool)
    network_dynamics = np.zeros_like(binary, dtype=np.uint8)
    do_convert = True
    if visu is None:
        do_convert = False
        visu = np.stack((converted_video, converted_video, converted_video), axis=3)
        greyscale = converted_video[-1, ...]
    else:
        greyscale = visu[-1, ...].mean(axis=-1)
    NetDet = NetworkDetection(greyscale, possibly_filled_pixels=binary[-1, ...],
                              origin_to_add=origin)
    NetDet.get_best_network_detection_method()
    if do_convert:
        NetDet.greyscale_image = converted_video[-1, ...]
    lighter_background = NetDet.greyscale_image[binary[-1, ...] > 0].mean() < NetDet.greyscale_image[
        binary[-1, ...] == 0].mean()

    for t in np.arange(starting_time, dims[0]):  # 20):#
        if do_convert:
            greyscale = visu[t, ...].mean(axis=-1)
        else:
            greyscale = converted_video[t, ...]
        NetDet_fast = NetworkDetection(greyscale, possibly_filled_pixels=binary[t, ...], origin_to_add=origin,
                                       edge_max_width=edge_max_width, best_result=NetDet.best_result)
        NetDet_fast.detect_network()
        NetDet_fast.greyscale_image = converted_video[t, ...]
        if detect_pseudopods:
            NetDet_fast.detect_pseudopods(lighter_background, pseudopod_min_size=pseudopod_min_size)
            pseudopod_vid[t, ...] = NetDet_fast.pseudopods
        potential_network[t, ...] = NetDet_fast.complete_network
    del NetDet_fast
    del NetDet
    if dims[0] == 1:
        network_dynamics = potential_network
    else:
        for t in np.arange(starting_time, dims[0]):  # 20):#
            if smooth_segmentation_over_time:
                if 2 <= t <= (dims[0] - 2):
                    computed_network = potential_network[(t - 2):(t + 3), :, :].sum(axis=0)
                    computed_network[computed_network == 1] = 0
                    computed_network[computed_network > 1] = 1
                else:
                    if t < 2:
                        computed_network = potential_network[:2, :, :].sum(axis=0)
                    else:
                        computed_network = potential_network[-2:, :, :].sum(axis=0)
                    computed_network[computed_network > 0] = 1
            else:
                computed_network = potential_network[t, :, :].copy()

            if origin is not None:
                computed_network = computed_network * (1 - origin)
                origin_contours = get_contours(origin)
                complete_network = np.logical_or(origin_contours, computed_network).astype(np.uint8)
            else:
                complete_network = computed_network
            complete_network = keep_one_connected_component(complete_network)

            if detect_pseudopods:
                # Make sure that removing pseudopods do not cut the network:
                without_pseudopods = complete_network * (1 - pseudopod_vid[t])
                only_connected_network = keep_one_connected_component(without_pseudopods)
                # # Option A: To add these cutting regions to the pseudopods do:
                pseudopods = (1 - only_connected_network) * complete_network
                pseudopod_vid[t] = pseudopods
            network_dynamics[t] = complete_network

            imtoshow = visu[t, ...]
            eroded_binary = cv2.erode(network_dynamics[t, ...], cross_33)
            net_coord = np.nonzero(network_dynamics[t, ...] - eroded_binary)
            imtoshow[net_coord[0], net_coord[1], :] = (34, 34, 158)
            if show_seg:
                cv2.imshow("", cv2.resize(imtoshow, (1000, 1000)))
                cv2.waitKey(1)
            else:
                visu[t, ...] = imtoshow
            if show_seg:
                cv2.destroyAllWindows()

    network_coord = smallest_memory_array(np.nonzero(network_dynamics), "uint")
    pseudopod_coord = None
    if detect_pseudopods:
        network_dynamics[pseudopod_vid > 0] = 2
        pseudopod_coord = smallest_memory_array(np.nonzero(pseudopod_vid), "uint")
        if save_coord_network:
            write_h5(f"coord_pseudopods{arena_label}_t{dims[0]}_y{dims[1]}_x{dims[2]}.h5", pseudopod_coord)
    if save_coord_network:
        write_h5(f"coord_network{arena_label}_t{dims[0]}_y{dims[1]}_x{dims[2]}.h5", network_coord)
    return network_coord, pseudopod_coord
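
The temporal smoothing step above keeps a pixel only when it is detected in at least two of the five frames centred on `t` (the `(t - 2):(t + 3)` window). That rule in isolation, as a minimal NumPy sketch independent of the rest of the pipeline:

```python
import numpy as np

# Toy stack of 5 binary frames: a pixel survives smoothing
# only if it is set in at least 2 of the 5 frames.
potential = np.zeros((5, 3, 3), dtype=bool)
potential[1:3, 1, 1] = True   # pixel (1, 1) detected in 2 frames
potential[4, 0, 0] = True     # pixel (0, 0) detected in 1 frame only

window_sum = potential.sum(axis=0)            # per-pixel detection count
smoothed = (window_sum >= 2).astype(np.uint8)

print(smoothed[1, 1])  # 1: kept (seen twice)
print(smoothed[0, 0])  # 0: discarded (seen once)
```

This is why smoothing effectiveness depends on network dynamics being consistent between frames: a structure present in a single frame is treated as noise.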

display_boxes(binary_image, box_diameter, show=True)

Display grid lines on a binary image at specified box diameter intervals.

This function displays the given binary image with vertical and horizontal grid lines drawn at regular intervals defined by box_diameter. The function returns the total number of grid lines drawn.

Parameters:

Name Type Description Default
binary_image ndarray

Binary image on which to draw the grid lines.

required
box_diameter int

Diameter of each box in pixels.

required
show bool

If True, display the figure with plt.show(). Defaults to True.

True

Returns:

Name Type Description
line_nb int

Number of grid lines drawn, both vertical and horizontal.

Examples:

>>> import numpy as np
>>> binary_image = np.random.randint(0, 2, (100, 100), dtype=np.uint8)
>>> display_boxes(binary_image, box_diameter=25)
Source code in src/cellects/utils/load_display_save.py
def display_boxes(binary_image: NDArray, box_diameter: int, show: bool = True):
    """
    Display grid lines on a binary image at specified box diameter intervals.

    This function displays the given binary image with vertical and horizontal
    grid lines drawn at regular intervals defined by `box_diameter`. The function
    returns the total number of grid lines drawn.

    Parameters
    ----------
    binary_image : ndarray
        Binary image on which to draw the grid lines.
    box_diameter : int
        Diameter of each box in pixels.
    show : bool, optional
        If True, display the figure with ``plt.show()``. Default is True.

    Returns
    -------
    line_nb : int
        Number of grid lines drawn, both vertical and horizontal.

    Examples
    --------
    >>> import numpy as np
    >>> binary_image = np.random.randint(0, 2, (100, 100), dtype=np.uint8)
    >>> display_boxes(binary_image, box_diameter=25)
    """
    plt.imshow(binary_image, cmap='gray', extent=(0, binary_image.shape[1], 0, binary_image.shape[0]))
    height, width = binary_image.shape
    line_nb = 0
    for x in range(0, width + 1, box_diameter):
        line_nb += 1
        plt.axvline(x=x, color='white', linewidth=1)
    for y in range(0, height + 1, box_diameter):
        line_nb += 1
        plt.axhline(y=y, color='white', linewidth=1)

    if show:
        plt.show()

    return line_nb
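
The returned `line_nb` is just the number of `range` steps in each direction; for the 100×100 example with `box_diameter=25`, that is five vertical plus five horizontal lines. The count can be checked without any plotting:

```python
# Grid-line count without matplotlib: one line per range step,
# including both borders because the range upper bound is width + 1.
height, width, box_diameter = 100, 100, 25
vertical = len(range(0, width + 1, box_diameter))     # x = 0, 25, 50, 75, 100
horizontal = len(range(0, height + 1, box_diameter))  # y = 0, 25, 50, 75, 100
line_nb = vertical + horizontal
print(line_nb)  # 10
```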

display_network_methods(network_detection, save_path=None)

Display segmentation results from a network detection object.

Extended Description

Plots the binary segmentation results for various methods stored in network_detection.all_results. Highlights the best result based on quality metrics and allows for saving the figure to a file.

Parameters:

Name Type Description Default
network_detection object

An object containing segmentation results and quality metrics.

required
save_path str

Path to save the figure. If None, the plot is displayed.

None
Source code in src/cellects/utils/load_display_save.py
def display_network_methods(network_detection: object, save_path: str=None):
    """

    Display segmentation results from a network detection object.

    Extended Description
    --------------------

    Plots the binary segmentation results for various methods stored in ``network_detection.all_results``.
    Highlights the best result based on quality metrics and allows for saving the figure to a file.

    Parameters
    ----------
    network_detection : object
        An object containing segmentation results and quality metrics.
    save_path : str, optional
        Path to save the figure. If ``None``, the plot is displayed.

    """
    row_nb = 6
    fig, axes = plt.subplots(int(np.ceil(len(network_detection.all_results) / row_nb)), row_nb, figsize=(100, 100))
    fig.suptitle(f'Segmentation Comparison: Frangi + Sato Variations', fontsize=16)

    # Plot all results
    for idx, result in enumerate(network_detection.all_results):
        row = idx // row_nb
        col = idx % row_nb

        ax = axes[row, col]

        # Display binary segmentation result
        ax.imshow(result['binary'], cmap='gray')

        # Create title with filter info and quality score
        title = f"{result['method']}: {str(np.round(network_detection.quality_metrics[idx], 0))}"

        # Highlight the best result
        if idx == network_detection.best_idx:
            ax.set_title(title, fontsize=8, color='red', fontweight='bold')
            ax.add_patch(plt.Rectangle((0, 0), result['binary'].shape[1] - 1,
                                       result['binary'].shape[0] - 1,
                                       fill=False, edgecolor='red', linewidth=3))
        else:
            ax.set_title(title, fontsize=8)

        ax.axis('off')
    plt.tight_layout()

    if save_path is not None:
        plt.savefig(save_path, bbox_inches='tight', pad_inches=0., transparent=True, dpi=500)
        plt.close()
    else:
        plt.show()

eudist(v1, v2)

Calculate the Euclidean distance between two points in n-dimensional space.

Parameters:

Name Type Description Default
v1 iterable of float

The coordinates of the first point.

required
v2 iterable of float

The coordinates of the second point.

required

Returns:

Type Description
float

The Euclidean distance between v1 and v2.

Raises:

Type Description
ValueError

If v1 and v2 do not have the same length.

Notes

The Euclidean distance is calculated using the standard formula: √((x2 − x1)^2 + (y2 − y1)^2 + ...).

Examples:

>>> v1 = [1.0, 2.0]
>>> v2 = [4.0, 6.0]
>>> eudist(v1, v2)
5.0
>>> v1 = [1.0, 2.0, 3.0]
>>> v2 = [4.0, 6.0, 8.0]
>>> eudist(v1, v2)
7.0710678118654755
Source code in src/cellects/utils/formulas.py
def eudist(v1, v2) -> float:
    """
    Calculate the Euclidean distance between two points in n-dimensional space.

    Parameters
    ----------
    v1 : iterable of float
        The coordinates of the first point.
    v2 : iterable of float
        The coordinates of the second point.

    Returns
    -------
    float
        The Euclidean distance between `v1` and `v2`.

    Raises
    ------
    ValueError
        If `v1` and `v2` do not have the same length.

    Notes
    -----
    The Euclidean distance is calculated using the standard formula:
    √((x2 − x1)^2 + (y2 − y1)^2 + ...).

    Examples
    --------
    >>> v1 = [1.0, 2.0]
    >>> v2 = [4.0, 6.0]
    >>> eudist(v1, v2)
    5.0

    >>> v1 = [1.0, 2.0, 3.0]
    >>> v2 = [4.0, 6.0, 8.0]
    >>> eudist(v1, v2)
    7.0710678118654755
    """
    dist = [(a - b)**2 for a, b in zip(v1, v2)]
    dist = np.sqrt(np.sum(dist))
    return dist
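
For array-like inputs, the same distance can also be computed directly as an L2 norm; this is an equivalent sketch, not the library's implementation:

```python
import numpy as np

def eudist_np(v1, v2) -> float:
    # Equivalent to eudist: L2 norm of the coordinate difference.
    return float(np.linalg.norm(np.asarray(v1, dtype=float) - np.asarray(v2, dtype=float)))

print(eudist_np([1.0, 2.0], [4.0, 6.0]))  # 5.0
```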

extract_graph_dynamics(converted_video, coord_network, arena_label, starting_time=0, origin=None, coord_pseudopods=None)

Extracts dynamic graph data from video frames based on network dynamics.

This function processes time-series binary network structures to extract evolving vertices and edges over time. It computes spatial relationships between networks and an origin point through image processing steps including contour detection, padding for alignment, skeleton extraction, and morphological analysis. Vertex and edge attributes like position, connectivity, width, intensity, and betweenness are compiled into tables saved as CSV files.

Parameters:

Name Type Description Default
converted_video NDArray

3D video data array (t, y, x) containing pixel intensities used for calculating edge intensity attributes during table generation.

required
coord_network NDArray

Coordinate array with rows (t, y, x) listing network pixels over time, as saved by detect_network_dynamics.

required
arena_label int

Unique identifier to prefix output filenames corresponding to specific experimental arenas.

required
starting_time (int, optional(default=0))

Time index within coord_network to begin processing from (exclusive of origin initialization).

0
origin (NDArray[uint8], optional(default=None))

Binary mask identifying the region of interest's central origin for spatial reference during network comparison.

None
coord_pseudopods (NDArray, optional(default=None))

Coordinate array with rows (t, y, x) of pseudopod pixels, used to flag growing areas in the vertex table.

None

Returns:

Type Description
None
Saves two CSV files in working directory:
1. `vertex_table{arena_label}_t{T}_y{Y}_x{X}.csv` - Vertex table with time, coordinates, and connectivity information
2. `edge_table{arena_label}_t{T}_y{Y}_x{X}.csv` - Edge table containing attributes like length, width, intensity, and betweenness
Notes

Output CSVs use NumPy arrays converted to pandas DataFrames with columns: - Vertex table includes timestamps (t), coordinates (y,x), and connectivity flags. - Edge table contains betweenness centrality calculated during skeleton processing. Origin contours are spatially aligned through padding operations to maintain coordinate consistency across time points.

Source code in src/cellects/image_analysis/network_functions.py
def extract_graph_dynamics(converted_video: NDArray, coord_network: NDArray, arena_label: int,
                            starting_time: int=0, origin: NDArray[np.uint8]=None, coord_pseudopods: NDArray=None):
    """
    Extracts dynamic graph data from video frames based on network dynamics.

    This function processes time-series binary network structures to extract evolving vertices and edges over time. It computes spatial relationships between networks and an origin point through image processing steps including contour detection, padding for alignment, skeleton extraction, and morphological analysis. Vertex and edge attributes like position, connectivity, width, intensity, and betweenness are compiled into tables saved as CSV files.

    Parameters
    ----------
    converted_video : NDArray
        3D video data array (t, y, x) containing pixel intensities used for calculating edge intensity attributes during table generation.
    coord_network : NDArray
        Coordinate array with rows (t, y, x) listing network pixels over time, as saved by `detect_network_dynamics`.
    arena_label : int
        Unique identifier to prefix output filenames corresponding to specific experimental arenas.
    starting_time : int, optional (default=0)
        Time index within `coord_network` to begin processing from (exclusive of origin initialization).
    origin : NDArray[np.uint8], optional (default=None)
        Binary mask identifying the region of interest's central origin for spatial reference during network comparison.
    coord_pseudopods : NDArray, optional (default=None)
        Coordinate array with rows (t, y, x) of pseudopod pixels, used to flag growing areas in the vertex table.

    Returns
    -------
    None
        Saves two CSV files in the working directory:
        1. `vertex_table{arena_label}_t{T}_y{Y}_x{X}.csv` - vertex table with time, coordinates, and connectivity information.
        2. `edge_table{arena_label}_t{T}_y{Y}_x{X}.csv` - edge table containing attributes like length, width, intensity, and betweenness.

    Notes
    -----
    Output CSVs use NumPy arrays converted to pandas DataFrames with columns:
    - Vertex table includes timestamps (t), coordinates (y,x), and connectivity flags.
    - Edge table contains betweenness centrality calculated during skeleton processing.
    Origin contours are spatially aligned through padding operations to maintain coordinate consistency across time points.
    """
    logging.info(f"Arena n°{arena_label}. Starting graph extraction.")
    # converted_video = self.converted_video; coord_network=self.coord_network; arena_label=1; starting_time=0; origin=self.origin
    dims = converted_video.shape[:3]
    if origin is not None:
        _, _, _, origin_centroid = cv2.connectedComponentsWithStats(origin)
        origin_centroid = np.round((origin_centroid[1, 1], origin_centroid[1, 0])).astype(np.int64)
        pad_origin_centroid = origin_centroid + 1
        origin_contours = get_contours(origin)
        pad_origin = add_padding([origin])[0]
    else:
        pad_origin_centroid = None
        pad_origin = None
        origin_contours = None
    vertex_table = None
    for t in np.arange(starting_time, dims[0]): # t=320   Y, X = 729, 554
        computed_network = np.zeros((dims[1], dims[2]), dtype=np.uint8)
        net_t = coord_network[1:, coord_network[0, :] == t]
        computed_network[net_t[0], net_t[1]] = 1
        if origin is not None:
            computed_network = computed_network * (1 - origin)
            computed_network = np.logical_or(origin_contours, computed_network).astype(np.uint8)
        else:
            computed_network = computed_network.astype(np.uint8)
        if computed_network.any():
            computed_network = keep_one_connected_component(computed_network)
            pad_network = add_padding([computed_network])[0]
            pad_skeleton, pad_distances, pad_origin_contours = get_skeleton_and_widths(pad_network, pad_origin,
                                                                                           pad_origin_centroid)
            edge_id = EdgeIdentification(pad_skeleton, pad_distances, t)
            edge_id.run_edge_identification()
            if pad_origin_contours is not None:
                origin_contours = remove_padding([pad_origin_contours])[0]
            growing_areas = None
            if coord_pseudopods is not None:
                growing_areas = coord_pseudopods[1:, coord_pseudopods[0, :] == t]
            edge_id.make_vertex_table(origin_contours, growing_areas)
            edge_id.make_edge_table(converted_video[t, ...])
            edge_id.vertex_table = np.hstack((np.repeat(t, edge_id.vertex_table.shape[0])[:, None], edge_id.vertex_table))
            edge_id.edge_table = np.hstack((np.repeat(t, edge_id.edge_table.shape[0])[:, None], edge_id.edge_table))
            if vertex_table is None:
                vertex_table = edge_id.vertex_table.copy()
                edge_table = edge_id.edge_table.copy()
            else:
                vertex_table = np.vstack((vertex_table, edge_id.vertex_table))
                edge_table = np.vstack((edge_table, edge_id.edge_table))

    vertex_table = pd.DataFrame(vertex_table, columns=["t", "y", "x", "vertex_id", "is_tip", "origin",
                                                       "vertex_connected"])
    edge_table = pd.DataFrame(edge_table,
                              columns=["t", "edge_id", "vertex1", "vertex2", "length", "average_width", "intensity",
                                       "betweenness_centrality"])
    vertex_table.to_csv(
        f"vertex_table{arena_label}_t{dims[0]}_y{dims[1]}_x{dims[2]}.csv")
    edge_table.to_csv(
        f"edge_table{arena_label}_t{dims[0]}_y{dims[1]}_x{dims[2]}.csv")
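
Inside the loop, `coord_network` is indexed as a coordinate array whose first row holds the frame index; rebuilding one frame's binary mask from it looks like this in isolation (toy coordinates, not real data):

```python
import numpy as np

# coord_network rows: t, y, x -- three pixels, two of them at t == 1.
coord_network = np.array([[0, 1, 1],
                          [2, 0, 4],
                          [3, 5, 2]])
t, dims = 1, (2, 6, 6)  # (frames, height, width)

computed_network = np.zeros(dims[1:], dtype=np.uint8)
net_t = coord_network[1:, coord_network[0, :] == t]  # y/x columns where t matches
computed_network[net_t[0], net_t[1]] = 1

print(computed_network.sum())  # 2 pixels set: (0, 5) and (4, 2)
```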

extract_time(pathway='', image_list=None, raw_images=False)

Extract timestamps from a list of images.

This function extracts the DateTimeOriginal or datetime values from the EXIF data of a list of image files, and computes the total time in seconds.

Parameters:

Name Type Description Default
pathway str

Path to the directory containing the images. Default is an empty string.

''
image_list list of str

List of image file names.

None
raw_images bool

If True, use the exifread library. Otherwise, use the exif library. Default is False.

False

Returns:

Name Type Description
time ndarray of int64

Array containing the total time in seconds for each image.

Examples:

>>> pathway = Path(__file__).resolve().parents[0] / "data" / "single_experiment"
>>> image_list = ['image1.tif', 'image2.tif']
>>> time = extract_time(pathway, image_list)
>>> print(time)
array([0, 0])
Source code in src/cellects/utils/load_display_save.py
def extract_time(pathway="", image_list: list=None, raw_images:bool=False):
    """
    Extract timestamps from a list of images.

    This function extracts the DateTimeOriginal or datetime values from
    the EXIF data of a list of image files, and computes the total time in seconds.

    Parameters
    ----------
    pathway : str, optional
        Path to the directory containing the images. Default is an empty string.
    image_list : list of str
        List of image file names.
    raw_images : bool, optional
        If True, use the exifread library. Otherwise, use the exif library.
        Default is False.

    Returns
    -------
    time : ndarray of int64
        Array containing the total time in seconds for each image.

    Examples
    --------
    >>> pathway = Path(__file__).resolve().parents[0] / "data" / "single_experiment"
    >>> image_list = ['image1.tif', 'image2.tif']
    >>> time = extract_time(pathway, image_list)
    >>> print(time)
    array([0, 0])

    """
    if isinstance(pathway, str):
        pathway = Path(pathway)
    os.chdir(pathway)
    if image_list is None:
        image_list = list_image_dir(pathway)
    nb = len(image_list)
    timings = np.zeros((nb, 6), dtype=np.int64)
    if raw_images:
        for i in np.arange(nb):
            with open(image_list[i], 'rb') as image_file:
                my_image = exifread.process_file(image_file, details=False, stop_tag='DateTimeOriginal')
                datetime = my_image["EXIF DateTimeOriginal"]
            datetime = datetime.values[:10] + ':' + datetime.values[11:]
            timings[i, :] = datetime.split(':')
    else:
        for i in np.arange(nb):
            with open(image_list[i], 'rb') as image_file:
                my_image = Image(image_file)
                if my_image.has_exif:
                    datetime = my_image.datetime
                    datetime = datetime[:10] + ':' + datetime[11:]
                    timings[i, :] = datetime.split(':')

    if np.all(timings[:, 0] == timings[0, 0]):
        if np.all(timings[:, 1] == timings[0, 1]):
            if np.all(timings[:, 2] == timings[0, 2]):
                time = timings[:, 3] * 3600 + timings[:, 4] * 60 + timings[:, 5]
            else:
                time = timings[:, 2] * 86400 + timings[:, 3] * 3600 + timings[:, 4] * 60 + timings[:, 5]
        else:
            days_per_month = (31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31)
            for j in np.arange(nb):
                month_number = timings[j, 1]
                # Days elapsed before the current month (leap years ignored)
                timings[j, 1] = np.sum(days_per_month[:month_number - 1])
            time = (timings[:, 1] + timings[:, 2]) * 86400 + timings[:, 3] * 3600 + timings[:, 4] * 60 + timings[:, 5]
        #time = int(time)
    else:
        time = np.repeat(0, nb)#arange(1, nb * 60, 60)#"Do not experiment the 31th of december!!!"
    if time.sum() == 0:
        time = np.repeat(0, nb)#arange(1, nb * 60, 60)
    return time
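
The six columns filled per image are year, month, day, hour, minute and second, split from an `YYYY:MM:DD HH:MM:SS` EXIF string after the space is replaced by a colon. The parsing step alone can be sketched with the standard library (the helper name is illustrative, not part of the module):

```python
from datetime import datetime

def exif_to_fields(stamp: str):
    # 'YYYY:MM:DD HH:MM:SS' -> (year, month, day, hour, minute, second)
    dt = datetime.strptime(stamp, "%Y:%m:%d %H:%M:%S")
    return dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second

print(exif_to_fields("2023:05:17 14:03:09"))  # (2023, 5, 17, 14, 3, 9)
```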

find_common_coord(array1, array2)

Find common coordinates between two arrays.

This function compares the given 2D array1 and array2 to determine if there are any common coordinates.

Parameters:

Name Type Description Default
array1 ndarray of int

A 2D numpy ndarray.

required
array2 ndarray of int

Another 2D numpy ndarray.

required

Returns:

Name Type Description
out ndarray of bool

A boolean numpy ndarray where True indicates common coordinates.

Examples:

>>> array1 = np.array([[1, 2], [3, 4]])
>>> array2 = np.array([[5, 6], [1, 2]])
>>> result = find_common_coord(array1, array2)
>>> print(result)
array([ True, False])
Source code in src/cellects/utils/formulas.py
def find_common_coord(array1: NDArray[int], array2: NDArray[int]) -> NDArray[bool]:
    """Find common coordinates between two arrays.

    This function compares the given 2D `array1` and `array2`
    to determine if there are any common coordinates.

    Parameters
    ----------
    array1 : ndarray of int
        A 2D numpy ndarray.
    array2 : ndarray of int
        Another 2D numpy ndarray.

    Returns
    -------
    out : ndarray of bool
        A boolean numpy ndarray where True indicates common
        coordinates.

    Examples
    --------
    >>> array1 = np.array([[1, 2], [3, 4]])
    >>> array2 = np.array([[5, 6], [1, 2]])
    >>> result = find_common_coord(array1, array2)
    >>> print(result)
    array([ True, False])"""
    return (array1[:, None, :] == array2[None, :, :]).all(-1).any(-1)
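
The broadcast builds an (N, M) comparison table before reducing it, so memory grows with N × M. A set-of-rows check gives the same answer and scales linearly; this alternative is a sketch, not the library code:

```python
import numpy as np

array1 = np.array([[1, 2], [3, 4]])
array2 = np.array([[5, 6], [1, 2]])

# Broadcasted version, as in find_common_coord.
broadcast = (array1[:, None, :] == array2[None, :, :]).all(-1).any(-1)

# Equivalent set-of-rows version.
rows2 = {tuple(r) for r in array2.tolist()}
set_based = np.array([tuple(r) in rows2 for r in array1.tolist()])

print(broadcast)                             # [ True False]
print(np.array_equal(broadcast, set_based))  # True
```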

find_duplicates_coord(array1)

Find duplicate rows in a 2D array and return a boolean mask marking them.

Given a 2D NumPy array, this function identifies rows that are duplicated (i.e., appear more than once) and returns a boolean array indicating their positions.

Parameters:

Name Type Description Default
array1 ndarray of int

Input 2D array of shape (n_rows, n_columns) from which to find duplicate rows.

required

Returns:

Name Type Description
duplicates ndarray of bool

Boolean array of shape (n_rows,), where True indicates that the corresponding row in array1 is a duplicate.

Examples:

>>> import numpy as np
>>> array1 = np.array([[1, 2], [3, 4], [1, 2], [5, 6]])
>>> find_duplicates_coord(array1)
array([ True, False,  True, False])
Source code in src/cellects/utils/formulas.py
def find_duplicates_coord(array1: NDArray[int]) -> NDArray[bool]:
    """
    Find duplicate rows in a 2D array and return a boolean mask marking them.

    Given a 2D NumPy array, this function identifies rows that are duplicated (i.e., appear more than once) and returns a boolean array indicating their positions.

    Parameters
    ----------
    array1 : ndarray of int
        Input 2D array of shape (n_rows, n_columns) from which to find duplicate rows.

    Returns
    -------
    duplicates : ndarray of bool
        Boolean array of shape (n_rows,), where `True` indicates that the corresponding row in `array1` is a duplicate.

    Examples
    --------
    >>> import numpy as np
    >>> array1 = np.array([[1, 2], [3, 4], [1, 2], [5, 6]])
    >>> find_duplicates_coord(array1)
    array([ True, False,  True, False])"""
    unique_rows, inverse_indices = np.unique(array1, axis=0, return_inverse=True)
    counts = np.bincount(inverse_indices)
    # A row is duplicate if its count > 1
    return counts[inverse_indices] > 1

get_branches_and_tips_coord(pad_vertices, pad_tips)

Extracts the coordinates of branches and tips from vertices and tips binary images.

This function calculates branch coordinates by subtracting tips from vertices, then returns the non-zero indices of branches and tips separately.

Parameters:

Name Type Description Default
pad_vertices ndarray

Padded binary image marking skeleton vertex pixels (junctions and tips).

required
pad_tips ndarray

Padded binary image marking skeleton tip pixels.

required

Returns:

Name Type Description
branch_v_coord ndarray

Coordinates of branches derived from subtracting tips from vertices.

tips_coord ndarray

Coordinates of the tips.

Examples:

>>> branch_v, tip_c = get_branches_and_tips_coord(pad_vertices, pad_tips)
>>> branch_v
>>> tip_c
Source code in src/cellects/image_analysis/network_functions.py
def get_branches_and_tips_coord(pad_vertices: NDArray[np.uint8], pad_tips: NDArray[np.uint8]) -> Tuple[NDArray, NDArray]:
    """
    Extracts the coordinates of branches and tips from vertices and tips binary images.

    This function calculates branch coordinates by subtracting
    tips from vertices, then returns the non-zero indices of branches and tips separately.

    Parameters
    ----------
    pad_vertices : ndarray
        Padded binary image marking skeleton vertex pixels (junctions and tips).
    pad_tips : ndarray
        Padded binary image marking skeleton tip pixels.

    Returns
    -------
    branch_v_coord : ndarray
        Coordinates of branches derived from subtracting tips from vertices.
    tips_coord : ndarray
        Coordinates of the tips.

    Examples
    --------
    >>> branch_v, tip_c = get_branches_and_tips_coord(pad_vertices, pad_tips)
    >>> branch_v
    >>> tip_c
    """
    pad_branches = pad_vertices - pad_tips
    branch_v_coord = np.transpose(np.array(np.nonzero(pad_branches)))
    tips_coord = np.transpose(np.array(np.nonzero(pad_tips)))
    return branch_v_coord, tips_coord
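
Since tips are a subset of the vertex mask, the subtraction leaves only branch junctions. A toy run with hypothetical masks (not real skeleton output):

```python
import numpy as np

pad_vertices = np.zeros((4, 4), dtype=np.uint8)
pad_vertices[1, 1] = 1  # branch junction
pad_vertices[2, 3] = 1  # tip (also present in the vertex mask)
pad_tips = np.zeros_like(pad_vertices)
pad_tips[2, 3] = 1

pad_branches = pad_vertices - pad_tips
branch_v_coord = np.transpose(np.nonzero(pad_branches))
tips_coord = np.transpose(np.nonzero(pad_tips))

print(branch_v_coord)  # [[1 1]]
print(tips_coord)      # [[2 3]]
```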

get_contour_width_from_im_shape(im_shape)

Calculate the contour width based on image shape.

Parameters:

Name Type Description Default
im_shape tuple of int, two items

The dimensions of the image.

required

Returns:

Type Description
int

The calculated contour width.

Source code in src/cellects/utils/formulas.py
def get_contour_width_from_im_shape(im_shape: Tuple) -> int:
    """
    Calculate the contour width based on image shape.

    Parameters
    ----------
    im_shape : tuple of int, two items
        The dimensions of the image.

    Returns
    -------
    int
        The calculated contour width.
    """
    return np.max((np.round(np.log10(np.max(im_shape)) - 2).astype(int), 2))
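
The width is the base-10 order of magnitude of the largest image dimension minus two, floored at 2. Re-deriving it for a couple of shapes:

```python
import numpy as np

def contour_width(im_shape):
    # max(round(log10(largest dimension) - 2), 2), as in the source above
    return int(np.max((np.round(np.log10(np.max(im_shape)) - 2).astype(int), 2)))

print(contour_width((1000, 1000)))    # 2  (log10(1000) - 2 = 1, floored at 2)
print(contour_width((100000, 2000)))  # 3  (log10(100000) - 2 = 3)
```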

get_h5_keys(file_name)

Retrieve all keys from a given HDF5 file.

Parameters:

Name Type Description Default
file_name str

The path to the HDF5 file from which keys are to be retrieved.

required

Returns:

Type Description
list of str

A list containing all the keys present in the specified HDF5 file.

Raises:

Type Description
FileNotFoundError

If the specified HDF5 file does not exist.

Source code in src/cellects/utils/load_display_save.py
def get_h5_keys(file_name):
    """
    Retrieve all keys from a given HDF5 file.

    Parameters
    ----------
    file_name : str
        The path to the HDF5 file from which keys are to be retrieved.

    Returns
    -------
    list of str
        A list containing all the keys present in the specified HDF5 file.

    Raises
    ------
    FileNotFoundError
        If the specified HDF5 file does not exist.
    """
    try:
        with h5py.File(file_name, 'r') as h5f:
            all_keys = list(h5f.keys())
            return all_keys
    except FileNotFoundError:
        raise FileNotFoundError(f"The file '{file_name}' does not exist.")

get_inertia_axes(mo)

Calculate the inertia axes of a moment object.

This function computes the barycenters, central moments, and the lengths of the major and minor axes, as well as their orientation.

Parameters:

Name Type Description Default
mo dict

Dictionary containing moments, which should include keys: 'm00', 'm10', 'm01', 'm20', 'm02', and 'm11'.

required

Returns:

Type Description
tuple

A tuple containing: - cx : float The x-coordinate of the barycenter. - cy : float The y-coordinate of the barycenter. - major_axis_len : float The length of the major axis. - minor_axis_len : float The length of the minor axis. - axes_orientation : float The orientation of the axes in radians.

Notes

This function uses Numba's @njit decorator for performance. The moments in the input dictionary should be computed from the same image region.

Examples:

>>> mo = {'m00': 1.0, 'm10': 2.0, 'm01': 3.0, 'm20': 4.0, 'm02': 10.0, 'm11': 5.0}
>>> cx, cy, major_len, minor_len, orientation = get_inertia_axes(mo)
Source code in src/cellects/utils/formulas.py
@njit()
def get_inertia_axes(mo: dict) -> Tuple[float, float, float, float, float]:
    """
    Calculate the inertia axes of a moment object.

    This function computes the barycenters, central moments,
    and the lengths of the major and minor axes, as well as
    their orientation.

    Parameters
    ----------
    mo : dict
        Dictionary containing moments, which should include keys:
        'm00', 'm10', 'm01', 'm20', 'm02', and 'm11'.

    Returns
    -------
    tuple
        A tuple containing:
            - cx : float
                The x-coordinate of the barycenter.
            - cy : float
                The y-coordinate of the barycenter.
            - major_axis_len : float
                The length of the major axis.
            - minor_axis_len : float
                The length of the minor axis.
            - axes_orientation : float
                The orientation of the axes in radians.

    Notes
    -----
    This function uses Numba's @njit decorator for performance.
    The moments in the input dictionary should be computed from
    the same image region.

    Examples
    --------
    >>> mo = {'m00': 1.0, 'm10': 2.0, 'm01': 3.0, 'm20': 4.0, 'm02': 10.0, 'm11': 5.0}
    >>> cx, cy, major_len, minor_len, orientation = get_inertia_axes(mo)

    """
    #L. Rocha, L. Velho and P.C.P. Calvalho (2002)
    #http://sibgrapi.sid.inpe.br/col/sid.inpe.br/banon/2002/10.23.11.34/doc/35.pdf
    # http://raphael.candelier.fr/?blog=Image%20Moments

    # Calculate barycenters
    cx = mo["m10"] / mo["m00"]
    cy = mo["m01"] / mo["m00"]
    # Calculate central moments
    c20 = (mo["m20"] / mo["m00"]) - np.square(cx)
    c02 = (mo["m02"] / mo["m00"]) - np.square(cy)
    c11 = (mo["m11"] / mo["m00"]) - (cx * cy)
    # Calculate major and minor axis lengths
    major_axis_len = np.sqrt(6 * (c20 + c02 + np.sqrt(np.square(2 * c11) + np.square(c20 - c02))))
    minor_axis_len = np.sqrt(6 * (c20 + c02 - np.sqrt(np.square(2 * c11) + np.square(c20 - c02))))
    if (c20 - c02) != 0:
        axes_orientation = (0.5 * np.arctan((2 * c11) / (c20 - c02))) + ((c20 < c02) * (np.pi /2))
    else:
        axes_orientation = 0.
    return cx, cy, major_axis_len, minor_axis_len, axes_orientation
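The formulas above can be checked on a synthetic shape. Assuming raw moments computed directly from pixel coordinates (`raw_moments` below is a hypothetical stand-in for an OpenCV-style moments dictionary, not part of this module), a 7×3 upright rectangle yields a major axis close to its width:

```python
import numpy as np

# Hypothetical helper: raw image moments from pixel coordinates.
def raw_moments(img):
    ys, xs = np.nonzero(img)
    return {"m00": float(xs.size),
            "m10": float(xs.sum()), "m01": float(ys.sum()),
            "m20": float((xs ** 2).sum()), "m02": float((ys ** 2).sum()),
            "m11": float((xs * ys).sum())}

img = np.zeros((11, 11), dtype=np.uint8)
img[4:7, 2:9] = 1                       # 7 wide, 3 tall rectangle
mo = raw_moments(img)
cx, cy = mo["m10"] / mo["m00"], mo["m01"] / mo["m00"]
c20 = mo["m20"] / mo["m00"] - cx ** 2   # variance along x
c02 = mo["m02"] / mo["m00"] - cy ** 2   # variance along y
c11 = mo["m11"] / mo["m00"] - cx * cy   # covariance (0 for an upright box)
major = np.sqrt(6 * (c20 + c02 + np.sqrt((2 * c11) ** 2 + (c20 - c02) ** 2)))
# major axis ~ 6.93 px, close to the rectangle's width of 7
```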

get_inner_vertices(pad_skeleton, potential_tips, cnv4, cnv8)

Get inner vertices from skeleton image.

This function identifies and returns the inner vertices of a skeletonized image. It processes potential tips to determine which pixels should be considered as vertices based on their neighbor count and connectivity.

Parameters:

Name Type Description Default
pad_skeleton ndarray of uint8

The padded skeleton image.

required
potential_tips ndarray of uint8

Potential tip points in the skeleton.

required
cnv4 object

Object for handling 4-connections.

required
cnv8 object

Object for handling 8-connections.

required

Returns:

Name Type Description
out tuple of ndarray of uint8, ndarray of uint8

A tuple containing the final vertices matrix and the updated potential tips.

Examples:

>>> pad_vertices, potential_tips = get_inner_vertices(pad_skeleton, potential_tips)
>>> print(pad_vertices)
Source code in src/cellects/image_analysis/network_functions.py
def get_inner_vertices(pad_skeleton: NDArray[np.uint8], potential_tips: NDArray[np.uint8], cnv4: object, cnv8: object) -> Tuple[NDArray[np.uint8], NDArray[np.uint8]]: # potential_tips=pad_tips
    """
    Get inner vertices from skeleton image.

    This function identifies and returns the inner vertices of a skeletonized image.
    It processes potential tips to determine which pixels should be considered as
    vertices based on their neighbor count and connectivity.

    Parameters
    ----------
    pad_skeleton : ndarray of uint8
        The padded skeleton image.
    potential_tips : ndarray of uint8
        Potential tip points in the skeleton.
    cnv4 : object
        Object for handling 4-connections.
    cnv8 : object
        Object for handling 8-connections.

    Returns
    -------
    out : tuple of ndarray of uint8, ndarray of uint8
        A tuple containing the final vertices matrix and the updated potential tips.

    Examples
    --------
    >>> pad_vertices, potential_tips = get_inner_vertices(pad_skeleton, potential_tips)
    >>> print(pad_vertices)
    """

    # Initiate the vertices final matrix as a copy of the potential_tips
    pad_vertices = potential_tips.copy()
    for neighbor_nb in [8, 7, 6, 5, 4]:
        # All pixels having neighbor_nb neighbor are potential vertices
        potential_vertices = np.zeros(potential_tips.shape, dtype=np.uint8)

        potential_vertices[cnv8.equal_neighbor_nb == neighbor_nb] = 1
        # remove the false intersections that are a neighbor of a previously detected intersection
        # Dilate vertices to make sure that no neighbors of the current potential vertices are already vertices.
        dilated_previous_intersections = cv2.dilate(pad_vertices, cross_33, iterations=1)
        potential_vertices *= (1 - dilated_previous_intersections)
        pad_vertices[np.nonzero(potential_vertices)] = 1

    # Having 3 neighbors is ambiguous
    with_3_neighbors = cnv8.equal_neighbor_nb == 3
    if np.any(with_3_neighbors):
        # We compare 8-connections with 4-connections
        # We loop over all 3 connected
        coord_3 = np.nonzero(with_3_neighbors)
        for y3, x3 in zip(coord_3[0], coord_3[1]): # y3, x3 = 3,7
            # If, in the neighborhood of the 3, there is at least a 2 (in 8) that is 0 (in 4), and not a termination: the 3 is a node
            has_2_8neigh = cnv8.equal_neighbor_nb[(y3 - 1):(y3 + 2), (x3 - 1):(x3 + 2)] > 0  # 1
            has_2_8neigh_without_focal = has_2_8neigh.copy()
            has_2_8neigh_without_focal[1, 1] = 0
            node_but_not_term = pad_vertices[(y3 - 1):(y3 + 2), (x3 - 1):(x3 + 2)] * (1 - potential_tips[(y3 - 1):(y3 + 2), (x3 - 1):(x3 + 2)])
            all_are_node_but_not_term = np.array_equal(has_2_8neigh_without_focal, node_but_not_term)
            if np.any(has_2_8neigh * (1 - all_are_node_but_not_term)):
                # At least 3 of the 8neigh are not connected:
                has_2_8neigh_without_focal = np.pad(has_2_8neigh_without_focal, [(1,), (1,)], mode='constant')
                cnv_8con = CompareNeighborsWithValue(has_2_8neigh_without_focal, 4)
                cnv_8con.is_equal(1, and_itself=True)
                disconnected_nb = has_2_8neigh_without_focal.sum() - (cnv_8con.equal_neighbor_nb > 0).sum()
                if disconnected_nb > 2:
                    pad_vertices[y3, x3] = 1
    # Now there may be too many vertices:
    # - Those that are 4-connected:
    nb, sh, st, ce = cv2.connectedComponentsWithStats(pad_vertices, connectivity=4)
    problematic_vertices = np.nonzero(st[:, 4] > 1)[0][1:]
    for prob_v in problematic_vertices:
        vertices_group = sh == prob_v
        # If there is a tip in the group, do
        if np.any(potential_tips[vertices_group]):
            # Change the most connected one from tip to vertex
            curr_neighbor_nb = cnv8.equal_neighbor_nb * vertices_group
            wrong_tip = np.nonzero(curr_neighbor_nb == curr_neighbor_nb.max())
            potential_tips[wrong_tip] = 0
        else:
            #  otherwise do:
            # Find the most 8-connected one, if its 4-connected neighbors have no more 8-connexions than 4-connexions + 1, they can be removed
            # Otherwise,
            # Find the most 4-connected one, and remove its 4 connected neighbors having only 1 or other 8-connexion

            c = zoom_on_nonzero(vertices_group)
            # 1. Find the most 8-connected one:
            sub_v_grp = vertices_group[c[0]:c[1], c[2]:c[3]]
            c8 = cnv8.equal_neighbor_nb[c[0]:c[1], c[2]:c[3]]
            vertices_group_8 = c8 * sub_v_grp
            max_8_con = vertices_group_8.max()
            most_8_con = np.nonzero(vertices_group_8 == max_8_con)
            # c4[(most_8_con[0][0] - 1):(most_8_con[0][0] + 2), (most_8_con[1][0] - 1):(most_8_con[1][0] + 2)]
            if len(most_8_con[0]) == 1:
                skel_copy = pad_skeleton[c[0]:c[1], c[2]:c[3]].copy()
                skel_copy[most_8_con] = 0
                sub_cnv8 = CompareNeighborsWithValue(skel_copy, 8)
                sub_cnv8.is_equal(1, and_itself=False)
                sub_cnv4 = CompareNeighborsWithValue(skel_copy, 4)
                sub_cnv4.is_equal(1, and_itself=False)
                v_to_remove = sub_v_grp * (sub_cnv8.equal_neighbor_nb <= sub_cnv4.equal_neighbor_nb + 1)
            else:
                c4 = cnv4.equal_neighbor_nb[c[0]:c[1], c[2]:c[3]]
                # 1. # Find the most 4-connected one:
                vertices_group_4 = c4 * sub_v_grp
                max_con = vertices_group_4.max()
                most_con = np.nonzero(vertices_group_4 == max_con)
                if len(most_con[0]) < sub_v_grp.sum():
                    # 2. Check its 4-connected neighbors and remove those having only 1 other 8-connexion
                    skel_copy = pad_skeleton[c[0]:c[1], c[2]:c[3]].copy()
                    skel_copy[most_con] = 0
                    skel_copy[most_con[0] - 1, most_con[1]] = 0
                    skel_copy[most_con[0] + 1, most_con[1]] = 0
                    skel_copy[most_con[0], most_con[1] - 1] = 0
                    skel_copy[most_con[0], most_con[1] + 1] = 0
                    sub_cnv8 = CompareNeighborsWithValue(skel_copy, 8)
                    sub_cnv8.is_equal(1, and_itself=False)
                    # There are:
                    v_to_remove = ((vertices_group_4 > 0) * sub_cnv8.equal_neighbor_nb) == 1
                else:
                    v_to_remove = np.zeros(sub_v_grp.shape, dtype=bool)
            pad_vertices[c[0]:c[1], c[2]:c[3]][v_to_remove] = 0

    # Other vertices to remove:
    # - Those that are forming a cross with 0 at the center while the skeleton contains 1
    cnv4_false = CompareNeighborsWithValue(pad_vertices, 4)
    cnv4_false.is_equal(1, and_itself=False)
    cross_vertices = cnv4_false.equal_neighbor_nb == 4
    wrong_cross_vertices = cross_vertices * pad_skeleton
    if wrong_cross_vertices.any():
        pad_vertices[np.nonzero(wrong_cross_vertices)] = 1
        cross_fix = cv2.dilate(wrong_cross_vertices, kernel=cross_33, iterations=1)
        # Remove the 4-connected vertices that have no more than 4 8-connected neighbors
        # i.e. the three on the side of the surrounded 0 and only one on edge on the other side
        cross_fix = ((cnv8.equal_neighbor_nb * cross_fix) == 4) * (1 - wrong_cross_vertices)
        pad_vertices *= (1 - cross_fix)
    return pad_vertices, potential_tips
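Why the 4- vs 8-connectivity cross-checks above are needed can be seen with a naive neighbor count: on a simple T-shaped skeleton, thresholding the 8-connected count flags four pixels around the single true junction. A dependency-free sketch (`neighbor_count8` is a hypothetical helper, not this module's `CompareNeighborsWithValue`):

```python
import numpy as np

# Count 8-connected skeleton neighbors with padded shifts.
def neighbor_count8(skel):
    h, w = skel.shape
    p = np.pad(skel, 1)
    acc = np.zeros((h, w), dtype=np.int64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if (dy, dx) != (0, 0):
                acc += p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return acc * skel  # keep counts on skeleton pixels only

skel = np.zeros((5, 5), dtype=np.uint8)
skel[2, :] = 1   # horizontal bar
skel[3:, 2] = 1  # vertical stem -> one true T junction at (2, 2)
counts = neighbor_count8(skel)
tips = (counts == 1)       # the three free ends
junctions = (counts >= 3)  # four pixels flagged for a single true junction
```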

get_kurtosis(mo, binary_image, cx, cy, sx, sy)

Calculate the kurtosis of a binary image.

The function calculates the standardized fourth moments (kurtosis) of the given binary image's pixel distribution around the specified center coordinates.

Parameters:

Name Type Description Default
mo dict

Dictionary containing moments of binary image.

required
binary_image ndarray

A 2D numpy ndarray representing a binary image.

required
cx float

The x-coordinate of the center of mass.

required
cy float

The y-coordinate of the center of mass.

required
sx float

The standard deviation along the x axis.

required
sy float

The standard deviation along the y axis.

required

Returns:

Type Description
tuple of float

The kurtosis values along the x and y axes.

Examples:

>>> kx, ky = get_kurtosis(mo, binary_image, cx, cy, sx, sy)
Source code in src/cellects/utils/formulas.py
def get_kurtosis(mo: dict, binary_image: NDArray, cx: float, cy: float, sx: float, sy: float) -> Tuple[float, float]:
    """
    Calculate the kurtosis of a binary image.

    The function calculates the standardized fourth moments (kurtosis)
    of the given binary image's pixel distribution around the specified
    center coordinates.

    Parameters
    ----------
    mo : dict
        Dictionary containing moments of binary image.
    binary_image : np.ndarray
        A 2D numpy ndarray representing a binary image.
    cx : float
        The x-coordinate of the center of mass.
    cy : float
        The y-coordinate of the center of mass.
    sx : float
        The standard deviation along the x axis.
    sy : float
        The standard deviation along the y axis.

    Returns
    -------
    tuple of float
        The kurtosis values along the x and y axes.

    Examples
    --------
    >>> kx, ky = get_kurtosis(mo, binary_image, cx, cy, sx, sy)
    """
    x4, y4 = get_power_dists(binary_image, cx, cy, 4)
    X4, Y4 = np.meshgrid(x4, y4)
    m4x, m4y = get_var(mo, binary_image, X4, Y4)
    return get_skewness_kurtosis(m4x, m4y, sx, sy, 4)

get_mpl_colormap(cmap_name)

Returns a linear color range array for the given matplotlib colormap.

Parameters:

Name Type Description Default
cmap_name str

The name of the colormap to get.

required

Returns:

Type Description
ndarray

A 256x1x3 uint8 array holding the colormap as a linear BGR color range (OpenCV channel order).

Examples:

>>> result = get_mpl_colormap('viridis')
>>> print(result.shape)
(256, 1, 3)
Source code in src/cellects/utils/load_display_save.py
def get_mpl_colormap(cmap_name: str):
    """
    Returns a linear color range array for the given matplotlib colormap.

    Parameters
    ----------
    cmap_name : str
        The name of the colormap to get.

    Returns
    -------
    numpy.ndarray
        A 256x1x3 uint8 array holding the colormap as a linear BGR
        color range (OpenCV channel order).

    Examples
    --------
    >>> result = get_mpl_colormap('viridis')
    >>> print(result.shape)
    (256, 1, 3)

    """
    cmap = plt.get_cmap(cmap_name)

    # Initialize the matplotlib color map
    sm = plt.cm.ScalarMappable(cmap=cmap)

    # Obtain linear color range
    color_range = sm.to_rgba(np.linspace(0, 1, 256), bytes=True)[:, 2::-1]

    return color_range.reshape(256, 1, 3)

get_neighbor_comparisons(pad_skeleton)

Get neighbor comparisons for a padded skeleton.

This function creates two CompareNeighborsWithValue objects with different neighborhood sizes (4 and 8) and checks if the neighbors are equal to 1. It returns both comparison objects.

Parameters:

Name Type Description Default
pad_skeleton ndarray of uint8

The input padded skeleton array.

required

Returns:

Name Type Description
out tuple of CompareNeighborsWithValue, CompareNeighborsWithValue

Two comparison objects for 4 and 8 neighbors.

Examples:

>>> cnv4, cnv8 = get_neighbor_comparisons(pad_skeleton)
Source code in src/cellects/image_analysis/network_functions.py
def get_neighbor_comparisons(pad_skeleton: NDArray[np.uint8]) -> Tuple[object, object]:
    """
    Get neighbor comparisons for a padded skeleton.

    This function creates two `CompareNeighborsWithValue` objects with different
    neighborhood sizes (4 and 8) and checks if the neighbors are equal to 1. It
    returns both comparison objects.

    Parameters
    ----------
    pad_skeleton : ndarray of uint8
        The input padded skeleton array.

    Returns
    -------
    out : tuple of CompareNeighborsWithValue, CompareNeighborsWithValue
        Two comparison objects for 4 and 8 neighbors.

    Examples
    --------
    >>> cnv4, cnv8 = get_neighbor_comparisons(pad_skeleton)
    """
    cnv4 = CompareNeighborsWithValue(pad_skeleton, 4)
    cnv4.is_equal(1, and_itself=True)
    cnv8 = CompareNeighborsWithValue(pad_skeleton, 8)
    cnv8.is_equal(1, and_itself=True)
    return cnv4, cnv8

get_newly_explored_area(binary_vid)

Get newly explored area in a binary video.

Calculate, for each frame, the number of pixels that are active (==1) in that frame but not in the first frame of a binary video.

Parameters:

Name Type Description Default
binary_vid ndarray

The binary video as a 3D array (time, y, x).

required

Returns:

Type Description
ndarray

An array containing, for each frame, the number of newly active pixels relative to the first frame.

Notes

This function uses Numba's @njit decorator for performance.

Examples:

>>> binary_vid=np.zeros((4, 5, 5), dtype=np.uint8)
>>> binary_vid[:2, 3, 3] = 1
>>> binary_vid[1, 4, 3] = 1
>>> binary_vid[2, 3, 4] = 1
>>> binary_vid[3, 2, 3] = 1
>>> get_newly_explored_area(binary_vid)
array([0, 1, 1, 1])
>>> binary_vid=np.zeros((5, 5), dtype=np.uint8)[None, :, :]
>>> get_newly_explored_area(binary_vid)
array([0])
Source code in src/cellects/utils/formulas.py
@njit()
def get_newly_explored_area(binary_vid: NDArray[np.uint8]) -> NDArray:
    """
    Get newly explored area in a binary video.

    Calculate, for each frame, the number of pixels that are active (==1)
    in that frame but not in the first frame of the binary video.

    Parameters
    ----------
    binary_vid : np.ndarray
        The binary video as a 3D array (time, y, x).

    Returns
    -------
    np.ndarray
        An array containing, for each frame, the number of newly active
        pixels relative to the first frame.

    Notes
    -----
    This function uses Numba's @njit decorator for performance.

    Examples
    --------
    >>> binary_vid=np.zeros((4, 5, 5), dtype=np.uint8)
    >>> binary_vid[:2, 3, 3] = 1
    >>> binary_vid[1, 4, 3] = 1
    >>> binary_vid[2, 3, 4] = 1
    >>> binary_vid[3, 2, 3] = 1
    >>> get_newly_explored_area(binary_vid)
    array([0, 1, 1, 1])

    >>> binary_vid=np.zeros((5, 5), dtype=np.uint8)[None, :, :]
    >>> get_newly_explored_area(binary_vid)
    array([0])
    """
    return ((binary_vid - binary_vid[0, ...]) == 1).reshape(binary_vid.shape[0], - 1).sum(1)

get_power_dists(binary_image, cx, cy, n)

Calculate powered distances from the given center coordinates.

This function computes the nth powers of x and y distances from a given center point (cx, cy) for each pixel in the binary image.

Parameters:

Name Type Description Default
binary_image ndarray

A 2D array (binary image) where the power distributions are calculated.

required
cx float

The x-coordinate of the center point.

required
cy float

The y-coordinate of the center point.

required
n int

The exponent for power distribution calculation.

required

Returns:

Type Description
tuple[ndarray, ndarray]

A tuple containing two arrays: - The first array contains the nth power of x distances from the center. - The second array contains the nth power of y distances from the center.

Notes

This function uses Numba's @njit decorator for performance optimization. Ensure that binary_image is a NumPy ndarray to avoid type issues.

Examples:

>>> binary_image = np.zeros((10, 10))
>>> xn, yn = get_power_dists(binary_image, 5.0, 5.0, 2)
>>> xn.shape, yn.shape
((10,), (10,))
Source code in src/cellects/utils/formulas.py
@njit()
def get_power_dists(binary_image: np.ndarray, cx: float, cy: float, n: int):
    """
    Calculate powered distances from the given center coordinates.

    This function computes the `n`th powers of x and y distances from
    a given center point `(cx, cy)` for each pixel in the binary image.

    Parameters
    ----------
    binary_image : np.ndarray
        A 2D array (binary image) where the power distributions are calculated.
    cx : float
        The x-coordinate of the center point.
    cy : float
        The y-coordinate of the center point.
    n : int
        The exponent for power distribution calculation.

    Returns
    -------
    tuple[np.ndarray, np.ndarray]
        A tuple containing two arrays:
        - The first array contains the `n`th power of x distances from the center.
        - The second array contains the `n`th power of y distances from the center.

    Notes
    -----
    This function uses Numba's `@njit` decorator for performance optimization.
    Ensure that `binary_image` is a NumPy ndarray to avoid type issues.

    Examples
    --------
    >>> binary_image = np.zeros((10, 10))
    >>> xn, yn = get_power_dists(binary_image, 5.0, 5.0, 2)
    >>> xn.shape, yn.shape
    ((10,), (10,))
    """
    xn = (np.arange(binary_image.shape[1]) - cx) ** n
    yn = (np.arange(binary_image.shape[0]) - cy) ** n
    return xn, yn
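Combined with a mass weighting, these powered distances yield central moments; the sketch below assumes `get_var` computes such mass-weighted means, which matches the standard variance definition:

```python
import numpy as np

# Variance of a shape's pixel distribution from squared distances:
# weight the (x - cx)^2 and (y - cy)^2 grids by the image mass.
img = np.zeros((5, 5))
img[1:4, 2] = 1                     # vertical 3-pixel segment
cx, cy = 2.0, 2.0
x2 = (np.arange(img.shape[1]) - cx) ** 2
y2 = (np.arange(img.shape[0]) - cy) ** 2
X2, Y2 = np.meshgrid(x2, y2)
vx = (img * X2).sum() / img.sum()   # 0.0: no horizontal spread
vy = (img * Y2).sum() / img.sum()   # 2/3: vertical spread
```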

get_skeleton_and_widths(pad_network, pad_origin=None, pad_origin_centroid=None)

Get skeleton and widths from a network.

This function computes the morphological skeleton of a network and calculates the distances to the closest zero pixel for each non-zero pixel using medial_axis. If pad_origin is provided, it adds a central contour. Finally, the function removes small loops and keeps only one connected component.

Parameters:

Name Type Description Default
pad_network ndarray of uint8

The binary pad network image.

required
pad_origin ndarray of uint8

An array indicating the origin for adding central contour.

None
pad_origin_centroid ndarray

The centroid of the pad origin. Defaults to None.

None

Returns:

Name Type Description
out tuple(ndarray of uint8, ndarray of float64, ndarray of uint8)

A tuple containing: - pad_skeleton: The skeletonized image. - pad_distances: The distances to the closest zero pixel. - pad_origin_contours: The contours of the central origin, or None if not used.

Examples:

>>> pad_network = np.array([[0, 1], [1, 0]])
>>> skeleton, distances, contours = get_skeleton_and_widths(pad_network)
>>> print(skeleton)
Source code in src/cellects/image_analysis/network_functions.py
def get_skeleton_and_widths(pad_network: NDArray[np.uint8], pad_origin: NDArray[np.uint8]=None, pad_origin_centroid: NDArray[np.int64]=None) -> Tuple[NDArray[np.uint8], NDArray[np.float64], NDArray[np.uint8]]:
    """
    Get skeleton and widths from a network.

    This function computes the morphological skeleton of a network and calculates
    the distances to the closest zero pixel for each non-zero pixel using medial_axis.
    If pad_origin is provided, it adds a central contour. Finally, the function
    removes small loops and keeps only one connected component.

    Parameters
    ----------
    pad_network : ndarray of uint8
        The binary pad network image.
    pad_origin : ndarray of uint8, optional
        An array indicating the origin for adding central contour.
    pad_origin_centroid : ndarray, optional
        The centroid of the pad origin. Defaults to None.

    Returns
    -------
    out : tuple(ndarray of uint8, ndarray of float64, ndarray of uint8)
        A tuple containing:
        - pad_skeleton: The skeletonized image.
        - pad_distances: The distances to the closest zero pixel.
        - pad_origin_contours: The contours of the central origin, or None if not
          used.

    Examples
    --------
    >>> pad_network = np.array([[0, 1], [1, 0]])
    >>> skeleton, distances, contours = get_skeleton_and_widths(pad_network)
    >>> print(skeleton)
    """
    pad_skeleton, pad_distances = morphology.medial_axis(pad_network, return_distance=True, rng=0)
    pad_skeleton = pad_skeleton.astype(np.uint8)
    if pad_origin is not None:
        pad_skeleton, pad_distances, pad_origin_contours = _add_central_contour(pad_skeleton, pad_distances, pad_origin, pad_network, pad_origin_centroid)
    else:
        pad_origin_contours = None
    pad_skeleton, pad_distances = remove_small_loops(pad_skeleton, pad_distances)
    pad_skeleton = keep_one_connected_component(pad_skeleton)
    pad_distances *= pad_skeleton
    return pad_skeleton, pad_distances, pad_origin_contours
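The width information comes from the distance transform: on the medial axis, the distance to the nearest background pixel is half the local vessel width. A rough sketch of that idea using SciPy's Euclidean distance transform (a stand-in for the `medial_axis` distances used above):

```python
import numpy as np
from scipy import ndimage

# A 5-pixel-tall horizontal ribbon: the distance transform peaks on the
# centreline, and 2*d - 1 recovers the local width in pixels.
net = np.zeros((9, 15), dtype=np.uint8)
net[2:7, :] = 1
dist = ndimage.distance_transform_edt(net)
width_estimate = 2 * dist[4, 7] - 1   # centre pixel: 2*3 - 1 = 5
```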

get_skewness(mo, binary_image, cx, cy, sx, sy)

Calculate skewness of the given moment.

This function computes the skewness based on the third moments and the central moments of a binary image.

Parameters:

Name Type Description Default
mo dict

Dictionary containing moments of binary image.

required
binary_image ndarray

Binary image as a 2D numpy array.

required
cx float

The x-coordinate of the center of mass.

required
cy float

The y-coordinate of the center of mass.

required
sx float

The standard deviation along the x axis.

required
sy float

The standard deviation along the y axis.

required

Returns:

Type Description
Tuple[float, float]

Tuple containing skewness values.

Examples:

>>> result = get_skewness(mo=example_mo, binary_image=binary_img,
... cx=0.5, cy=0.5, sx=1.0, sy=1.0)
>>> print(result)
(skewness_x, skewness_y)  # Example output
Source code in src/cellects/utils/formulas.py
def get_skewness(mo: dict, binary_image: NDArray, cx: float, cy: float, sx: float, sy: float) -> Tuple[float, float]:
    """Calculate skewness of the given moment.

    This function computes the skewness based on the third moments
    and the central moments of a binary image.

    Parameters
    ----------
    mo : dict
        Dictionary containing moments of binary image.
    binary_image : ndarray
        Binary image as a 2D numpy array.
    cx : float
        The x-coordinate of the center of mass.
    cy : float
        The y-coordinate of the center of mass.
    sx : float
        The standard deviation along the x axis.
    sy : float
        The standard deviation along the y axis.

    Returns
    -------
    Tuple[float, float]
        Tuple containing skewness values.

    Examples
    --------
    >>> result = get_skewness(mo=example_mo, binary_image=binary_img,
    ... cx=0.5, cy=0.5, sx=1.0, sy=1.0)
    >>> print(result)
    (skewness_x, skewness_y)  # Example output
    """
    x3, y3 = get_power_dists(binary_image, cx, cy, 3)
    X3, Y3 = np.meshgrid(x3, y3)
    m3x, m3y = get_var(mo, binary_image, X3, Y3)
    return get_skewness_kurtosis(m3x, m3y, sx, sy, 3)

get_skewness_kurtosis(mnx, mny, sx, sy, n)

Calculates skewness and kurtosis of a distribution.

This function computes the skewness and kurtosis from given statistical moments, standard deviations, and order of moments.

Parameters:

Name Type Description Default
mnx float

The n-th central moment along x.

required
mny float

The n-th central moment along y.

required
sx float

The standard deviation of x.

required
sy float

The standard deviation of y.

required
n int

Order of the moment (3 for skewness, 4 for kurtosis).

required

Returns:

Name Type Description
fx float

The standardized n-th moment along x (skewness when n=3).

fy float

The standardized n-th moment along y (kurtosis when n=4).

Notes

This function uses Numba's @njit decorator for performance. If sx or sy is zero, the corresponding output is set to 0 to avoid division by zero. If n = 3, the function calculates skewness. If n = 4, it calculates kurtosis.

Examples:

>>> fx, fy = get_skewness_kurtosis(1.5, 2.0, 0.5, 0.75, 3)
>>> fx
12.0
>>> round(fy, 6)
4.740741
Source code in src/cellects/utils/formulas.py
@njit()
def get_skewness_kurtosis(mnx: float, mny: float, sx: float, sy: float, n: int) -> Tuple[float, float]:
    """
    Calculates skewness and kurtosis of a distribution.

    This function computes the skewness and kurtosis from given statistical
    moments, standard deviations, and order of moments.

    Parameters
    ----------
    mnx : float
        The n-th central moment along x.
    mny : float
        The n-th central moment along y.
    sx : float
        The standard deviation of x.
    sy : float
        The standard deviation of y.
    n : int
        Order of the moment (3 for skewness, 4 for kurtosis).

    Returns
    -------
    fx : float
        The standardized n-th moment along x (skewness when n=3).
    fy : float
        The standardized n-th moment along y (kurtosis when n=4).

    Notes
    -----
    This function uses Numba's `@njit` decorator for performance.
    If `sx` or `sy` is zero, the corresponding output is set to 0 to avoid division by zero.
    If `n = 3`, the function calculates skewness. If `n = 4`, it calculates kurtosis.

    Examples
    --------
    >>> fx, fy = get_skewness_kurtosis(1.5, 2.0, 0.5, 0.75, 3)
    >>> fx
    12.0
    >>> round(fy, 6)
    4.740741

    """
    if sx == 0:
        fx = 0
    else:
        fx = mnx / sx ** n

    if sy == 0:
        fy = 0
    else:
        fy = mny / sy ** n

    return fx, fy
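At its core the function is a guarded standardized-moment ratio; a minimal sketch with a hypothetical `standardized_moment` helper, checked against a symmetric 1D coordinate sample:

```python
import numpy as np

# Guarded standardized n-th moment, m_n / s**n, as in get_skewness_kurtosis.
def standardized_moment(mn, s, n):
    return 0.0 if s == 0 else mn / s ** n

# A symmetric sample has zero skewness; the standardized 4th moment
# gives the kurtosis.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
mx, sx = xs.mean(), xs.std()
m3 = ((xs - mx) ** 3).mean()
m4 = ((xs - mx) ** 4).mean()
skew = standardized_moment(m3, sx, 3)   # 0.0
kurt = standardized_moment(m4, sx, 4)   # ~1.7
```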

get_standard_deviations(mo, binary_image, cx, cy)

Return spatial standard deviations for a given moment and binary image.

This function computes the square root of variances along x (horizontal) and y (vertical) axes for the given binary image and moment.

Parameters:

Name Type Description Default
mo dict

Dictionary containing moments of binary image.

required
binary_image ndarray of bool or int8

The binary input image where the moments are computed.

required
cx float64

X-coordinate of center of mass (horizontal position).

required
cy float64

Y-coordinate of center of mass (vertical position).

required

Returns:

Type Description
tuple of float

Tuple containing the standard deviations along the x and y axes.

Raises:

Type Description
ValueError

If binary_image is not a binary image or has an invalid datatype.

Notes

This function uses the get_power_dists and get_var functions to compute the distributed variances, which are then transformed into standard deviations.

Examples:

>>> stdx, stdy = get_standard_deviations(mo, binary_image, cx, cy)
Source code in src/cellects/utils/formulas.py
def get_standard_deviations(mo: dict, binary_image: NDArray, cx: float, cy: float) -> Tuple[float, float]:
    """
    Return spatial standard deviations for a given moment and binary image.

    This function computes the square root of variances along `x` (horizontal)
    and `y` (vertical) axes for the given binary image and moment.

    Parameters
    ----------
    mo : dict
        Dictionary containing moments of binary image.
    binary_image : ndarray of bool or int8
        The binary input image where the moments are computed.
    cx : float64
        X-coordinate of center of mass (horizontal position).
    cy : float64
        Y-coordinate of center of mass (vertical position).

    Returns
    -------
    tuple of (float64, float64)
        Tuple containing the standard deviations along the x and y axes.

    Raises
    ------
    ValueError
        If `binary_image` is not a binary image or has an invalid datatype.

    Notes
    -----
    This function uses the `get_power_dists` and `get_var` functions to compute
    the distributed variances, which are then transformed into standard deviations.

    Examples
    --------
    >>> import numpy as np
    >>> binary_image = np.array([[0, 1], [1, 0]], dtype=np.int8)
    >>> mo = {'m00': 2.0}
    >>> cx, cy = 0.5, 0.5
    >>> stdx, stdy = get_standard_deviations(mo, binary_image, cx, cy)  # doctest: +SKIP
    """
    x2, y2 = get_power_dists(binary_image, cx, cy, 2)
    X2, Y2 = np.meshgrid(x2, y2)
    vx, vy = get_var(mo, binary_image, X2, Y2)
    return np.sqrt(vx), np.sqrt(vy)
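The moment-based pipeline above can be sanity-checked with a plain-NumPy sketch that derives the spatial standard deviations directly from foreground pixel coordinates (`spatial_stds` is a hypothetical helper written for illustration, not part of cellects):

```python
import numpy as np

def spatial_stds(binary_image):
    """Spatial standard deviations of a binary image (plain-NumPy sketch)."""
    ys, xs = np.nonzero(binary_image)
    cx, cy = xs.mean(), ys.mean()      # center of mass
    vx = np.mean((xs - cx) ** 2)       # variance along x
    vy = np.mean((ys - cy) ** 2)       # variance along y
    return np.sqrt(vx), np.sqrt(vy)

# A 1x3 horizontal bar: some spread along x, none along y
img = np.zeros((3, 3), dtype=np.uint8)
img[1, :] = 1
sx, sy = spatial_stds(img)
```

For the horizontal bar, `sx` is `sqrt(2/3)` and `sy` is `0.0`, matching the intuition that the shape only extends horizontally.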

get_terminations_and_their_connected_nodes(pad_skeleton, cnv4, cnv8)

Get terminations in a skeleton and their connected nodes.

This function identifies termination points in a padded skeleton array based on pixel connectivity, marking them and their connected nodes.

Parameters:

Name Type Description Default
pad_skeleton ndarray of uint8

The padded skeleton array where terminations are to be identified.

required
cnv4 object

Convolution object with 4-connectivity for neighbor comparison.

required
cnv8 object

Convolution object with 8-connectivity for neighbor comparison.

required

Returns:

Name Type Description
out ndarray of uint8

Array containing marked terminations and their connected nodes.

Examples:

>>> result = get_terminations_and_their_connected_nodes(pad_skeleton, cnv4, cnv8)
>>> print(result)
Source code in src/cellects/image_analysis/network_functions.py
def get_terminations_and_their_connected_nodes(pad_skeleton: NDArray[np.uint8], cnv4: object, cnv8: object) -> NDArray[np.uint8]:
    """
    Get terminations in a skeleton and their connected nodes.

    This function identifies termination points in a padded skeleton array
    based on pixel connectivity, marking them and their connected nodes.

    Parameters
    ----------
    pad_skeleton : ndarray of uint8
        The padded skeleton array where terminations are to be identified.
    cnv4 : object
        Convolution object with 4-connectivity for neighbor comparison.
    cnv8 : object
        Convolution object with 8-connectivity for neighbor comparison.

    Returns
    -------
    out : ndarray of uint8
        Array containing marked terminations and their connected nodes.

    Examples
    --------
    >>> result = get_terminations_and_their_connected_nodes(pad_skeleton, cnv4, cnv8)
    >>> print(result)
    """
    # All pixels having only one neighbor, and containing the value 1, are terminations for sure
    potential_tips = np.zeros(pad_skeleton.shape, dtype=np.uint8)
    potential_tips[cnv8.equal_neighbor_nb == 1] = 1
    # Add more terminations using 4-connectivity
    # If a pixel has exactly one 4-connected neighbor and its 8-neighbors are all 4-connected to each other, it is a termination

    coord1_4 = cnv4.equal_neighbor_nb == 1
    if np.any(coord1_4):
        coord1_4 = np.nonzero(coord1_4)
        for y1, x1 in zip(coord1_4[0], coord1_4[1]): # y1, x1 = 3,5
            # If, in the neighborhood of the 1 (in 4), all (in 8) its neighbors are 4-connected together, and none of them are terminations, the 1 is a termination
            is_4neigh = cnv4.equal_neighbor_nb[(y1 - 1):(y1 + 2), (x1 - 1):(x1 + 2)] != 0
            all_4_connected = pad_skeleton[(y1 - 1):(y1 + 2), (x1 - 1):(x1 + 2)] == is_4neigh
            is_not_term = 1 - potential_tips[y1, x1]
            if np.all(all_4_connected * is_not_term):
                is_4neigh[1, 1] = 0
                is_4neigh = np.pad(is_4neigh, [(1,), (1,)], mode='constant')
                cnv_4con = CompareNeighborsWithValue(is_4neigh, 4)
                cnv_4con.is_equal(1, and_itself=True)
                all_connected = (is_4neigh.sum() - (cnv_4con.equal_neighbor_nb > 0).sum()) == 0
                # If they are connected, it can be a termination
                if all_connected:
                    # If its closest neighbor is above 3 (in 8), this one is also a node
                    is_closest_above_3 = cnv8.equal_neighbor_nb[(y1 - 1):(y1 + 2), (x1 - 1):(x1 + 2)] * cross_33 > 3
                    if np.any(is_closest_above_3):
                        Y, X = np.nonzero(is_closest_above_3)
                        Y += y1 - 1
                        X += x1 - 1
                        potential_tips[Y, X] = 1
                    potential_tips[y1, x1] = 1
    return potential_tips

get_var(mo, binary_image, Xn, Yn)

Compute the spatial variances in 2D space.

This function calculates the variances of a binary image along the x and y axes by normalizing weighted sums of powered coordinate distances by the zeroth moment.

Parameters:

Name Type Description Default
mo dict

Dictionary containing moments of binary image.

required
binary_image ndarray

2D binary image where non-zero pixels are considered.

required
Xn ndarray

Powered x-distances from the center of mass for each pixel in binary_image.

required
Yn ndarray

Powered y-distances from the center of mass for each pixel in binary_image.

required

Returns:

Type Description
tuple

A tuple of two floats (vx, vy) representing the variances along the x and y axes.

Notes

If mo['m00'] is zero, the function returns (0., 0.) rather than raising an error.

Performance considerations: This function uses Numba's @njit decorator for performance.

Source code in src/cellects/utils/formulas.py
@njit()
def get_var(mo: dict, binary_image: NDArray, Xn: NDArray, Yn: NDArray) -> Tuple[float, float]:
    """
    Compute the spatial variances in 2D space.

    This function calculates the variances of a binary image along the
    x and y axes by normalizing weighted sums of powered coordinate
    distances by the zeroth moment.

    Parameters
    ----------
    mo : dict
        Dictionary containing moments of binary image.
    binary_image : ndarray
        2D binary image where non-zero pixels are considered.
    Xn : ndarray
        Powered x-distances from the center of mass for each pixel in `binary_image`.
    Yn : ndarray
        Powered y-distances from the center of mass for each pixel in `binary_image`.

    Returns
    -------
    tuple
        A tuple of two floats `(vx, vy)` representing the variances along the x and y axes.

    Notes
    -----
    If `mo['m00']` is zero, the function returns `(0., 0.)` rather than
    raising a `ZeroDivisionError`.

    Performance considerations: This function uses Numba's `@njit` decorator for performance.
    """
    if mo['m00'] == 0:
        vx, vy = 0., 0.
    else:
        vx = np.sum(binary_image * Xn) / mo["m00"]
        vy = np.sum(binary_image * Yn) / mo["m00"]
    return vx, vy

get_vertices_and_tips_from_skeleton(pad_skeleton)

Get vertices and tips from a padded skeleton.

This function identifies the vertices and tips of a skeletonized image. Tips are endpoints of the skeleton while vertices include tips and points where three or more edges meet.

Parameters:

Name Type Description Default
pad_skeleton ndarray of uint8

Input skeleton image that has been padded.

required

Returns:

Name Type Description
out tuple (ndarray of uint8, ndarray of uint8)

Tuple containing arrays of vertex points and tip points.

Source code in src/cellects/image_analysis/network_functions.py
def get_vertices_and_tips_from_skeleton(pad_skeleton: NDArray[np.uint8]) -> Tuple[NDArray[np.uint8], NDArray[np.uint8]]:
    """
    Get vertices and tips from a padded skeleton.

    This function identifies the vertices and tips of a skeletonized image.
    Tips are endpoints of the skeleton while vertices include tips and points where three or more edges meet.

    Parameters
    ----------
    pad_skeleton : ndarray of uint8
        Input skeleton image that has been padded.

    Returns
    -------
    out : tuple (ndarray of uint8, ndarray of uint8)
        Tuple containing arrays of vertex points and tip points.
    """
    cnv4, cnv8 = get_neighbor_comparisons(pad_skeleton)
    potential_tips = get_terminations_and_their_connected_nodes(pad_skeleton, cnv4, cnv8)
    pad_vertices, pad_tips = get_inner_vertices(pad_skeleton, potential_tips, cnv4, cnv8)
    return pad_vertices, pad_tips
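The tip-detection idea underlying this function (a tip is a skeleton pixel with exactly one 8-connected neighbor) can be sketched with plain NumPy, standing in for the package's `CompareNeighborsWithValue` machinery (`neighbor_counts_8` is a hypothetical helper for illustration):

```python
import numpy as np

def neighbor_counts_8(skel):
    """Count the 8-connected skeleton neighbors of each skeleton pixel."""
    p = np.pad(skel.astype(np.uint8), 1)   # zero border so rolls never wrap content
    counts = np.zeros_like(p, dtype=np.int32)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            counts += np.roll(np.roll(p, dy, axis=0), dx, axis=1)
    return (counts * p)[1:-1, 1:-1]        # only meaningful on skeleton pixels

# A 3-pixel horizontal line: both ends are tips, the middle pixel is not
skel = np.array([[0, 0, 0],
                 [1, 1, 1],
                 [0, 0, 0]], dtype=np.uint8)
counts = neighbor_counts_8(skel)
tips = (counts == 1)
```

Junction detection works the same way with a higher threshold (three or more neighbors), which is the role of the vertex arrays returned above.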

insensitive_glob(pattern)

Performs a case-insensitive glob search by replacing each letter in the pattern with a character class that matches both cases.

Parameters:

Name Type Description Default
pattern str

The glob pattern to be made case-insensitive.

required

Returns:

Type Description
list

The list of paths matching the case-insensitive version of the pattern.

Examples:

>>> insensitive_glob('*.TXT')
Source code in src/cellects/utils/utilitarian.py
def insensitive_glob(pattern: str):
    """
    Performs a case-insensitive glob search by making each letter in the pattern match both cases.

    Parameters
    ----------
    pattern : str
        The glob pattern to be made case-insensitive.

    Returns
    -------
    str
        A new glob pattern that will match both lowercase and uppercase letters.

    Examples
    --------
    >>> insensitive_glob('*.TXT')
    """
    def either(c):
        return '[%s%s]' % (c.lower(), c.upper()) if c.isalpha() else c
    return glob(''.join(map(either, pattern)))
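The character-class trick can be demonstrated without touching the filesystem by building the pattern and matching names with `fnmatch` (the `insensitive_pattern` helper below mirrors the `either` mapping for illustration):

```python
import fnmatch

def insensitive_pattern(pattern):
    """Turn each letter into a [xX] character class so glob/fnmatch ignore case."""
    return ''.join('[%s%s]' % (c.lower(), c.upper()) if c.isalpha() else c
                   for c in pattern)

pat = insensitive_pattern('*.TXT')
# fnmatchcase does no case folding of its own, so the character classes do the work
matches = [name for name in ('a.txt', 'B.TxT', 'c.png')
           if fnmatch.fnmatchcase(name, pat)]
```

Here `pat` becomes `'*.[tT][xX][tT]'`, and both `'a.txt'` and `'B.TxT'` match while `'c.png'` does not.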

is_raw_image(image_path)

Determine if the image path corresponds to a raw image.

Parameters:

Name Type Description Default
image_path str

The file path of the image.

required

Returns:

Type Description
bool

True if the image is considered raw, False otherwise.

Examples:

>>> result = is_raw_image("image.jpg")
>>> print(result)
False
Source code in src/cellects/utils/load_display_save.py
def is_raw_image(image_path) -> bool:
    """
    Determine if the image path corresponds to a raw image.

    Parameters
    ----------
    image_path : str
        The file path of the image.

    Returns
    -------
    bool
        True if the image is considered raw, False otherwise.

    Examples
    --------
    >>> result = is_raw_image("image.jpg")
    >>> print(result)
    False
    """
    ext = image_path.split(".")[-1]
    if np.isin(ext, opencv_accepted_formats):
        raw_image = False
    else:
        raw_image = True
    return raw_image
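The extension test reduces to simple set membership. A minimal sketch follows; the `ACCEPTED` set is a hypothetical subset of `opencv_accepted_formats`, and unlike the original's case-sensitive `np.isin` check, this sketch lowercases the extension first:

```python
# Hypothetical subset of opencv_accepted_formats, for illustration only
ACCEPTED = {'jpg', 'jpeg', 'png', 'tif', 'tiff', 'bmp'}

def is_raw(path):
    """True when the extension is not an OpenCV-readable format."""
    return path.rsplit('.', 1)[-1].lower() not in ACCEPTED

flags = [is_raw(p) for p in ('image.jpg', 'scan.TIF', 'photo.cr2')]
```

With the list above, `flags` is `[False, False, True]`: only the `.cr2` file is treated as raw.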

linear_model(x, a, b)

Perform a linear transformation on input data using slope and intercept.

Parameters:

Name Type Description Default
x array_like

Input data.

required
a float

Slope coefficient.

required
b float

Intercept.

required

Returns:

Type Description
float

Resulting value from linear transformation: a * x + b.

Examples:

>>> result = linear_model(5, 2.0, 1.5)
>>> print(result)
11.5
Notes

This function uses Numba's @njit decorator for performance.

Source code in src/cellects/utils/formulas.py
@njit()
def linear_model(x: NDArray, a: float, b: float) -> float:
    """
    Perform a linear transformation on input data using slope and intercept.

    Parameters
    ----------
    x : array_like
        Input data.
    a : float
        Slope coefficient.
    b : float
        Intercept.

    Returns
    -------
    float
        Resulting value from linear transformation: `a` * `x` + `b`.

    Examples
    --------
    >>> result = linear_model(5, 2.0, 1.5)
    >>> print(result)  # doctest: +SKIP
    11.5

    Notes
    -----
    This function uses Numba's @njit decorator for performance.
    """
    return a * x + b
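Because this is a plain NumPy expression, it broadcasts over arrays, which is how such a model is typically paired with a fit. A small sketch using `np.polyfit` to recover the parameters of an exact line:

```python
import numpy as np

def linear_model(x, a, b):
    """Same expression as above; broadcasts over array inputs."""
    return a * x + b

x = np.array([0.0, 1.0, 2.0, 3.0])
y = linear_model(x, 2.0, 1.5)          # exact line: y = 2x + 1.5
a_fit, b_fit = np.polyfit(x, y, 1)     # recover slope and intercept
```

For noise-free data the fit returns the original slope and intercept up to floating-point error.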

list_image_dir(path_to_images='', img_extension='', img_radical='')

List files in an image directory based on optional naming patterns (extension and/or radical).

Parameters:

Name Type Description Default
path_to_images optional

The path to the directory containing images. Default is an empty string.

''
img_extension str

The file extension of the images to be listed. Default is an empty string. When left empty, the extension shared by the largest number of image files in the folder is used.

''
img_radical str

The radical part of the filenames to be listed. Default is an empty string.

''

Returns:

Type Description
list

A list of image filenames that match the specified criteria, sorted in a natural order.

Notes

This function uses the natsorted and insensitive_glob utilities to ensure that filenames are sorted in a human-readable order.

Examples:

>>> pathway = Path(__name__).resolve().parents[0] / "data" / "single_experiment"
>>> image_list = list_image_dir(pathway)
>>> print(image_list)
Source code in src/cellects/utils/load_display_save.py
def list_image_dir(path_to_images='', img_extension: str='', img_radical: str='') -> list:
    """
    List files in an image directory based on optional naming patterns (extension and/or radical).

    Parameters
    ----------
    path_to_images : optional
        The path to the directory containing images. Default is an empty string.
    img_extension : str, optional
        The file extension of the images to be listed. Default is an empty string.
        When left empty, the extension shared by the largest number of image files in the folder is used.
    img_radical : str, optional
        The radical part of the filenames to be listed. Default is an empty string.

    Returns
    -------
    list
        A list of image filenames that match the specified criteria,
        sorted in a natural order.

    Notes
    -----
    This function uses the `natsorted` and `insensitive_glob` utilities to ensure
    that filenames are sorted in a human-readable order.

    Examples
    --------
    >>> pathway = Path(__name__).resolve().parents[0] / "data" / "single_experiment"
    >>> image_list = list_image_dir(pathway)
    >>> print(image_list)
    """
    if isinstance(path_to_images, str):
        path_to_images = Path(path_to_images)
    os.chdir(path_to_images)
    if len(img_extension) == 0:
        imgs = insensitive_glob(f'{img_radical}*')
        matches = np.zeros(len(opencv_accepted_formats))
        for e_i, ext in enumerate(opencv_accepted_formats):
            matches[e_i] = np.char.endswith(imgs, ext).sum()
        img_extension = opencv_accepted_formats[np.argmax(matches)]
    imgs = insensitive_glob(f'{img_radical}*{img_extension}')
    imgs = natsorted(imgs)
    return imgs
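The natural ("human-readable") ordering that `natsorted` provides can be sketched with a stdlib-only key function, since `natsort` is a third-party dependency (`natural_key` is a hypothetical stand-in, not the package's implementation):

```python
import re

def natural_key(name):
    """Split digit runs out so 'img10' sorts after 'img2'."""
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r'(\d+)', name)]

files = ['img10.jpg', 'img2.jpg', 'IMG1.jpg']
ordered = sorted(files, key=natural_key)
```

Plain lexicographic sorting would put `'img10.jpg'` before `'img2.jpg'`; the numeric key yields `['IMG1.jpg', 'img2.jpg', 'img10.jpg']`, the order a frame sequence needs.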

movie(video, increase_contrast=True)

Processes a video to display each frame with optional contrast increase and resizing.

Parameters:

Name Type Description Default
video ndarray

The input video represented as a 3D NumPy array.

required
increase_contrast bool

Flag to increase the contrast of each frame (default is True).

True

Returns:

Type Description
None

Raises:

Type Description
ValueError

If video is not a 3D NumPy array.

Notes

This function uses OpenCV's imshow to display each frame. Ensure that the required OpenCV dependencies are met.

Examples:

>>> movie(video)
Processes and displays a video with default settings.
>>> movie(video, increase_contrast=False)
Processes and displays a video without increasing contrast.
Source code in src/cellects/utils/load_display_save.py
def movie(video, increase_contrast: bool=True):
    """
    Processes a video to display each frame with optional contrast increase and resizing.

    Parameters
    ----------
    video : numpy.ndarray
        The input video represented as a 3D NumPy array.
    increase_contrast : bool, optional
        Flag to increase the contrast of each frame (default is True).

    Returns
    -------
    None

    Raises
    ------
    ValueError
        If `video` is not a 3D NumPy array.

    Notes
    -----
    This function uses OpenCV's `imshow` to display each frame. Ensure that the required
    OpenCV dependencies are met.

    Examples
    --------
    >>> movie(video)
    Processes and displays a video with default settings.
    >>> movie(video, increase_contrast=False)
    Processes and displays a video without increasing contrast.

    """
    for i in np.arange(video.shape[0]):
        image = video[i, :, :]
        if np.any(image):
            if increase_contrast:
                image = bracket_to_uint8_image_contrast(image)
            final_img = cv2.resize(image, (500, 500))
            cv2.imshow('Motion analysis', final_img)
            if cv2.waitKey(25) & 0xFF == ord('q'):
                break
    cv2.destroyAllWindows()
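The per-frame contrast step delegates to `bracket_to_uint8_image_contrast` from the package. A minimal min-max stretch to the full uint8 range, sketched with NumPy only (`stretch_to_uint8` is a hypothetical stand-in, not the package helper):

```python
import numpy as np

def stretch_to_uint8(frame):
    """Min-max stretch a frame to the full 0-255 uint8 range."""
    frame = frame.astype(np.float64)
    lo, hi = frame.min(), frame.max()
    if hi == lo:                       # flat frame: avoid division by zero
        return np.zeros(frame.shape, dtype=np.uint8)
    return np.uint8(np.round(255.0 * (frame - lo) / (hi - lo)))

frame = np.array([[10, 20], [30, 40]], dtype=np.uint16)
out = stretch_to_uint8(frame)
```

The lowest intensity maps to 0 and the highest to 255, which is what makes faint network structures visible during playback.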

moving_average(vector, step)

Calculate the moving average of a given vector with specified step size.

Computes the moving average of input vector using specified step size. NaN values are treated as zeros in the calculation to allow for continuous averaging.

Parameters:

Name Type Description Default
vector ndarray

Input vector for which to calculate the moving average.

required
step int

Size of the window for computing the moving average.

required

Returns:

Type Description
ndarray

Vector containing the moving averages of the input vector.

Raises:

Type Description
ValueError

If step is less than 1.

ValueError

If the input vector has no valid (non-NaN) elements.

Notes
  • The function considers NaN values as zeros during the averaging process.
  • If step is greater than or equal to the length of the vector, a warning will be raised.

Examples:

>>> import numpy as np
>>> vector = np.array([1.0, 2.0, np.nan, 4.0, 5.0])
>>> step = 3
>>> result = moving_average(vector, step)
>>> print(result)  # doctest: +SKIP
Source code in src/cellects/utils/formulas.py
def moving_average(vector: NDArray, step: int) -> NDArray[float]:
    """
    Calculate the moving average of a given vector with specified step size.

    Computes the moving average of input `vector` using specified `step`
    size. NaN values are treated as zeros in the calculation to allow
    for continuous averaging.

    Parameters
    ----------
    vector : ndarray
        Input vector for which to calculate the moving average.
    step : int
        Size of the window for computing the moving average.

    Returns
    -------
    numpy.ndarray
        Vector containing the moving averages of the input vector.

    Raises
    ------
    ValueError
        If `step` is less than 1.
    ValueError
        If the input vector has no valid (non-NaN) elements.

    Notes
    -----
    - The function considers NaN values as zeros during the averaging process.
    - If `step` is greater than or equal to the length of the vector, a warning will be raised.

    Examples
    --------
    >>> import numpy as np
    >>> vector = np.array([1.0, 2.0, np.nan, 4.0, 5.0])
    >>> step = 3
    >>> result = moving_average(vector, step)
    >>> print(result)  # doctest: +SKIP
    """
    substep = np.array((- int(np.floor((step - 1) / 2)), int(np.ceil((step - 1) / 2))))
    sums = np.zeros(vector.shape)
    n_okays = sums.copy()
    true_numbers = np.logical_not(np.isnan(vector))
    vector[np.logical_not(true_numbers)] = 0
    for step_i in np.arange(substep[1] + 1):
        sums[step_i: (sums.size - step_i)] = sums[step_i: (sums.size - step_i)] + vector[(2 * step_i):]
        n_okays[step_i: (sums.size - step_i)] = n_okays[step_i: (sums.size - step_i)] + true_numbers[(2 * step_i):]
        if np.logical_and(step_i > 0, step_i <= np.absolute(substep[0])):
            sums[step_i: (sums.size - step_i)] = sums[step_i: (sums.size - step_i)] + vector[:(sums.size - (2 * step_i))]
            n_okays[step_i: (sums.size - step_i)] = n_okays[step_i: (sums.size - step_i)] + true_numbers[:(true_numbers.size - (2 * step_i))]
    vector = sums / n_okays
    return vector
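A NaN-aware centered moving average can also be written with `np.convolve`, dividing the windowed sum of values by the windowed count of valid entries. This is a sketch of the same idea, not the package's implementation; edge handling (windows extending past the array) may differ from the function above:

```python
import numpy as np

def nan_moving_average(vector, step):
    """Centered moving average that skips NaNs (np.convolve formulation)."""
    valid = ~np.isnan(vector)
    filled = np.where(valid, vector, 0.0)
    kernel = np.ones(step)
    sums = np.convolve(filled, kernel, mode='same')                 # windowed sum of values
    counts = np.convolve(valid.astype(float), kernel, mode='same')  # valid entries per window
    return sums / counts

v = np.array([1.0, 2.0, np.nan, 4.0, 5.0])
avg = nan_moving_average(v, 3)
```

For the sample vector and a window of 3, each NaN is simply excluded from its windows, giving `[1.5, 1.5, 3.0, 4.5, 4.5]`.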

njit(*args, **kwargs)

numba.njit decorator that can be disabled. Useful for testing.

Source code in src/cellects/utils/decorators.py
def njit(*args, **kwargs):
    """ numba.njit decorator that can be disabled. Useful for testing.
    """
    if USE_NUMBA:
        return _real_njit(*args, **kwargs)
    # test mode: return identity decorator
    def deco(func):
        return func
    return deco
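The switchable-decorator pattern is easy to reproduce in isolation. In this self-contained sketch the fallback path is forced so the block runs even without numba installed; in the package, `USE_NUMBA` comes from configuration:

```python
USE_NUMBA = False  # flip to True when numba is installed and compilation is wanted

try:
    from numba import njit as _real_njit
except ImportError:
    USE_NUMBA = False

def njit(*args, **kwargs):
    """Compile with numba when enabled; otherwise return functions unchanged."""
    if USE_NUMBA:
        return _real_njit(*args, **kwargs)
    def deco(func):
        return func
    return deco

@njit()
def add_one(x):
    return x + 1

result = add_one(41)
```

With the flag off, `add_one` is the plain Python function, which is what makes breakpoints and coverage tools work during testing.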

read_and_rotate(image_name, prev_img=None, raw_images=False, is_landscape=True, crop_coord=None)

Read and rotate an image based on specified parameters.

This function reads an image from the given file name, optionally rotates it by 90 degrees clockwise or counterclockwise based on its dimensions and the is_landscape flag, and applies cropping if specified. It also compares rotated images against a previous image to choose the best rotation.

Parameters:

Name Type Description Default
image_name str

Name of the image file to read.

required
prev_img ndarray

Previous image for comparison. Default is None.

None
raw_images bool

Flag to read raw images. Default is False.

False
is_landscape bool

Flag to determine if the image should be considered in landscape mode. Default is True.

True
crop_coord ndarray

Coordinates for cropping the image. Default is None.

None

Returns:

Type Description
ndarray

Rotated and optionally cropped image.

Raises:

Type Description
FileNotFoundError

If the specified image file does not exist.

Examples:

>>> pathway = Path(__name__).resolve().parents[0] / "data" / "single_experiment"
>>> image_name = 'image1.tif'
>>> image = read_and_rotate(pathway / image_name)
>>> print(image.shape)
(245, 300, 3)
Source code in src/cellects/utils/load_display_save.py
def read_and_rotate(image_name, prev_img: NDArray=None, raw_images: bool=False, is_landscape: bool=True, crop_coord: NDArray=None) -> NDArray:
    """
    Read and rotate an image based on specified parameters.

    This function reads an image from the given file name, optionally rotates
    it by 90 degrees clockwise or counterclockwise based on its dimensions and
    the `is_landscape` flag, and applies cropping if specified. It also compares
    rotated images against a previous image to choose the best rotation.

    Parameters
    ----------
    image_name : str
        Name of the image file to read.
    prev_img : ndarray, optional
        Previous image for comparison. Default is `None`.
    raw_images : bool, optional
        Flag to read raw images. Default is `False`.
    is_landscape : bool, optional
        Flag to determine if the image should be considered in landscape mode.
        Default is `True`.
    crop_coord : ndarray, optional
        Coordinates for cropping the image. Default is `None`.

    Returns
    -------
    ndarray
        Rotated and optionally cropped image.

    Raises
    ------
    FileNotFoundError
        If the specified image file does not exist.

    Examples
    --------
    >>> pathway = Path(__name__).resolve().parents[0] / "data" / "single_experiment"
    >>> image_name = 'image1.tif'
    >>> image = read_and_rotate(pathway / image_name)
    >>> print(image.shape)
    (245, 300, 3)
    """
    if not os.path.exists(image_name):
        raise FileNotFoundError(image_name)
    img = readim(image_name, raw_images)
    if (img.shape[0] > img.shape[1] and is_landscape) or (img.shape[0] < img.shape[1] and not is_landscape):
        clockwise = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)
        if crop_coord is not None:
            clockwise = clockwise[crop_coord[0]:crop_coord[1], crop_coord[2]:crop_coord[3], ...]
        if prev_img is not None:
            prev_img = np.int16(prev_img)
            clock_diff = sum_of_abs_differences(prev_img, np.int16(clockwise))
            counter_clockwise = cv2.rotate(img, cv2.ROTATE_90_COUNTERCLOCKWISE)
            if crop_coord is not None:
                counter_clockwise = counter_clockwise[crop_coord[0]:crop_coord[1], crop_coord[2]:crop_coord[3], ...]
            counter_clock_diff = sum_of_abs_differences(prev_img, np.int16(counter_clockwise))
            if clock_diff > counter_clock_diff:
                img = counter_clockwise
            else:
                img = clockwise
        else:
            img = clockwise
    else:
        if crop_coord is not None:
            img = img[crop_coord[0]:crop_coord[1], crop_coord[2]:crop_coord[3], ...]
    return img
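The choose-the-rotation-closest-to-the-previous-frame step can be sketched with NumPy alone, using `np.rot90` in place of `cv2.rotate` and a plain absolute-difference sum in place of the package's `sum_of_abs_differences` (`best_rotation` is a hypothetical helper for illustration):

```python
import numpy as np

def best_rotation(img, prev_img):
    """Pick the 90-degree rotation closest to the previous frame."""
    clockwise = np.rot90(img, k=-1)          # cv2.ROTATE_90_CLOCKWISE equivalent
    counter = np.rot90(img, k=1)             # counterclockwise
    prev = prev_img.astype(np.int16)         # signed type so differences don't wrap
    cw_diff = np.abs(prev - clockwise.astype(np.int16)).sum()
    ccw_diff = np.abs(prev - counter.astype(np.int16)).sum()
    return counter if cw_diff > ccw_diff else clockwise

prev = np.arange(6, dtype=np.uint8).reshape(2, 3)      # landscape reference frame
img = np.rot90(prev, k=1)                              # portrait frame: prev rotated CCW
restored = best_rotation(img, prev)
```

Rotating the portrait frame clockwise recovers the reference exactly, so the clockwise candidate wins.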

read_h5(file_name, key='data')

Read data array from an HDF5 file.

This function reads a specific dataset from an HDF5 file using the provided key.

Parameters:

Name Type Description Default
file_name str

The path to the HDF5 file.

required
key str

The dataset name within the HDF5 file.

'data'

Returns:

Type Description
ndarray

The data array from the specified dataset in the HDF5 file, or None if the file or the dataset does not exist.

Source code in src/cellects/utils/load_display_save.py
def read_h5(file_name, key: str="data"):
    """
    Read data array from an HDF5 file.

    This function reads a specific dataset from an HDF5 file using the provided key.

    Parameters
    ----------
    file_name : str
        The path to the HDF5 file.
    key : str, optional, default: 'data'
        The dataset name within the HDF5 file.

    Returns
    -------
    ndarray
        The data array from the specified dataset in the HDF5 file,
        or None if the file or the dataset does not exist.
    """
    if os.path.isfile(file_name):
        with h5py.File(file_name, 'r') as h5f:
            if key in h5f:
                data = h5f[key][:]
                return data
            else:
                return None
    else:
        return None
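The guarded-read pattern (return the array, or None when the file or key is missing) is independent of HDF5 itself. Since h5py may not be available everywhere, the same pattern is sketched here with NumPy's `.npz` container as a stand-in (`read_npz` is a hypothetical analogue, not a cellects function):

```python
import os
import tempfile
import numpy as np

def read_npz(file_name, key="data"):
    """Return the named array, or None when the file or the key is missing."""
    if not os.path.isfile(file_name):
        return None
    with np.load(file_name) as archive:
        if key in archive.files:
            return archive[key]
        return None

path = os.path.join(tempfile.mkdtemp(), "arena.npz")
np.savez(path, data=np.arange(4))
present = read_npz(path, "data")
missing_key = read_npz(path, "video")
missing_file = read_npz(path + ".nope")
```

Callers can then branch on `None` instead of wrapping every read in try/except, which is how `read_one_arena` above decides whether a cached video exists.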

read_one_arena(arena_label, already_greyscale, csc_dict, videos_already_in_ram=None, true_frame_width=None, vid_name=None, background=None, background2=None)

Read a single arena's video data, potentially converting it from color to greyscale.

Parameters:

Name Type Description Default
arena_label int

The label of the arena.

required
already_greyscale bool

Whether the video is already in greyscale format.

required
csc_dict dict

Dictionary containing color space conversion settings.

required
videos_already_in_ram ndarray

Pre-loaded video frames in memory. Default is None.

None
true_frame_width int

The true width of the video frames. Default is None.

None
vid_name str

Name of the video file. Default is None.

None
background ndarray

Background image for subtractions. Default is None.

None
background2 ndarray

Second background image for subtractions. Default is None.

None

Returns:

Type Description
tuple

A tuple containing:
  • visu: ndarray or None, the visual frame.
  • converted_video: ndarray or None, the video data converted as needed.
  • converted_video2: ndarray or None, additional video data if necessary.

Raises:

Type Description
FileNotFoundError

If the specified video file does not exist.

ValueError

If the video data shape is invalid.

Notes

This function assumes that video2numpy is a helper function available in the scope. For optimal performance, ensure all video data fits in RAM.

Source code in src/cellects/utils/load_display_save.py
def read_one_arena(arena_label, already_greyscale:bool, csc_dict: dict, videos_already_in_ram=None,
                   true_frame_width=None, vid_name: str=None, background: NDArray=None, background2: NDArray=None):
    """
    Read a single arena's video data, potentially converting it from color to greyscale.

    Parameters
    ----------
    arena_label : int
        The label of the arena.
    already_greyscale : bool
        Whether the video is already in greyscale format.
    csc_dict : dict
        Dictionary containing color space conversion settings.
    videos_already_in_ram : np.ndarray, optional
        Pre-loaded video frames in memory. Default is None.
    true_frame_width : int, optional
        The true width of the video frames. Default is None.
    vid_name : str, optional
        Name of the video file. Default is None.
    background : np.ndarray, optional
        Background image for subtractions. Default is None.
    background2 : np.ndarray, optional
        Second background image for subtractions. Default is None.

    Returns
    -------
    tuple
        A tuple containing:
            - visu: np.ndarray or None, the visual frame.
            - converted_video: np.ndarray or None, the video data converted as needed.
            - converted_video2: np.ndarray or None, additional video data if necessary.

    Raises
    ------
    FileNotFoundError
        If the specified video file does not exist.
    ValueError
        If the video data shape is invalid.

    Notes
    -----
    This function assumes that `video2numpy` is a helper function available in the scope.
    For optimal performance, ensure all video data fits in RAM.
    """
    visu, converted_video, converted_video2 = None, None, None
    logging.info(f"Arena n°{arena_label}. Load images and videos")
    if videos_already_in_ram is not None:
        if already_greyscale:
            converted_video = videos_already_in_ram
        else:
            if csc_dict['logical'] == 'None':
                visu, converted_video = videos_already_in_ram
            else:
                visu, converted_video, converted_video2 = videos_already_in_ram
    else:
        if vid_name is not None:
            if already_greyscale:
                converted_video = video2numpy(vid_name, None, background, background2, true_frame_width)
                if len(converted_video.shape) == 4:
                    converted_video = converted_video[:, :, :, 0]
            else:
                visu = video2numpy(vid_name, None, background, background2, true_frame_width)
        else:
            vid_name = f"ind_{arena_label}.h5"
            h5_keys = get_h5_keys(vid_name)
            if os.path.isfile(vid_name) and 'video' in h5_keys:
                if already_greyscale:
                    converted_video = video2numpy(vid_name, None, background, background2, true_frame_width)
                    if len(converted_video.shape) == 4:
                        converted_video = converted_video[:, :, :, 0]
                else:
                    visu = video2numpy(vid_name, None, background, background2, true_frame_width)
    return visu, converted_video, converted_video2

read_rotate_crop_and_reduce_image(image_name, prev_img=None, crop_coord=None, cr=None, raw_images=False, is_landscape=True, reduce_image_dim=False)

Reads, rotates, crops (if specified), and reduces image dimensionality if required.

Parameters:

Name Type Description Default
image_name str

Name of the image file to read.

required
prev_img NDArray

Previous image array used for rotation reference or state tracking.

None
crop_coord list

List of four integers [x_start, x_end, y_start, y_end] specifying cropping region. If None, no initial crop is applied.

None
cr list

List of four integers [x_start, x_end, y_start, y_end] for final cropping after rotation.

None
raw_images bool

Flag indicating whether to process raw image data (True) or processed image (False).

False
is_landscape bool

Boolean determining if the image is landscape-oriented and requires specific rotation handling.

True
reduce_image_dim bool

Whether to reduce the cropped image to a single channel (e.g., grayscale from RGB).

False

Returns:

Name Type Description
img NDArray

Processed image after rotation, cropping, and optional dimensionality reduction.

prev_img NDArray

Copy of the image immediately after rotation but before any cropping operations.

Examples:

>>> import numpy as np
>>> img = np.random.rand(200, 300, 3)
>>> new_img, prev = read_rotate_crop_and_reduce_image("example.jpg", img, [50, 150, 75, 225], [20, 180, 40, 250], False, True, True)
>>> new_img.shape == (160, 210)
True
>>> prev.shape == (200, 300, 3)
True
Source code in src/cellects/utils/load_display_save.py
def read_rotate_crop_and_reduce_image(image_name: str, prev_img: NDArray=None, crop_coord: list=None, cr: list=None,
                                      raw_images: bool=False, is_landscape: bool=True, reduce_image_dim: bool=False):
    """
    Reads, rotates, crops (if specified), and reduces image dimensionality if required.

    Parameters
    ----------
    image_name : str
        Name of the image file to read.
    prev_img : NDArray
        Previous image array used for rotation reference or state tracking.
    crop_coord : list
        List of four integers [x_start, x_end, y_start, y_end] specifying cropping region. If None, no initial crop is applied.
    cr : list
        List of four integers [x_start, x_end, y_start, y_end] for final cropping after rotation.
    raw_images : bool
        Flag indicating whether to process raw image data (True) or processed image (False).
    is_landscape : bool
        Boolean determining if the image is landscape-oriented and requires specific rotation handling.
    reduce_image_dim : bool
        Whether to reduce the cropped image to a single channel (e.g., grayscale from RGB).

    Returns
    -------
    img : NDArray
        Processed image after rotation, cropping, and optional dimensionality reduction.
    prev_img : NDArray
        Copy of the image immediately after rotation but before any cropping operations.

    Examples
    --------
    >>> import numpy as np
    >>> img = np.random.rand(200, 300, 3)
    >>> new_img, prev = read_rotate_crop_and_reduce_image("example.jpg", img, None, [20, 180, 40, 250], False, True, True)
    >>> new_img.shape == (160, 210)
    True
    >>> prev.shape == (200, 300, 3)
    True
    """
    img = read_and_rotate(image_name, prev_img, raw_images, is_landscape)
    prev_img = img.copy()
    if crop_coord is not None:
        img = img[crop_coord[0]:crop_coord[1], crop_coord[2]:crop_coord[3], :]
    if cr is not None:
        img = img[cr[0]:cr[1], cr[2]:cr[3], :]
    if reduce_image_dim:
        img = img[:, :, 0]
    return img, prev_img
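The slicing chain inside the function can be checked directly with NumPy. The sketch below skips the file read and rotation step and applies only the final crop `cr` and the channel reduction, using illustrative values:

```python
import numpy as np

# Stand-in for the rotated image (the file read and rotation are skipped here).
img = np.random.rand(200, 300, 3)
cr = [20, 180, 40, 250]
cropped = img[cr[0]:cr[1], cr[2]:cr[3], :]   # rows 20:180, cols 40:250
reduced = cropped[:, :, 0]                   # reduce_image_dim=True keeps channel 0
print(reduced.shape)  # (160, 210)
```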

read_tif_stack(vid_name, expected_channels=1)

Read video array from a tif file.

This function reads a specific dataset from a tif file.

Parameters:

Name Type Description Default
vid_name str

The path to the tif stack file.

required
expected_channels int

The number of channels.

1

Returns:

Type Description
ndarray

The data array from the tif file.

Source code in src/cellects/utils/load_display_save.py
def read_tif_stack(vid_name: str, expected_channels: int=1):
    """
        Read video array from a tif file.

        This function reads a specific dataset from a tif file.

        Parameters
        ----------
        vid_name : str
            The path to the tif stack file.
        expected_channels : int
            The number of channels.

        Returns
        -------
        ndarray
            The data array from the tif file.
    """
    all_frames = None
    if os.path.isfile(vid_name):
        with tifffile.TiffFile(vid_name) as tif:
            # Count the number of pages (frames and channels)
            num_pages = len(tif.pages)

            # Determine the shape of a single frame
            example_page = tif.pages[0]
            height, width = example_page.asarray().shape

            # Calculate the number of frames per channel based on expected_channels parameter
            frames_per_channel = num_pages // expected_channels

            # Initialize an array to hold all frames for each channel
            all_frames = np.zeros((frames_per_channel, height, width, expected_channels),
                                  dtype=example_page.asarray().dtype)

            # Read and store each frame
            for i in range(frames_per_channel):
                for channel in range(expected_channels):
                    page_index = i * expected_channels + channel
                    frame = tif.pages[page_index].asarray()
                    all_frames[i, :, :, channel] = frame
    return all_frames
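The page layout this function assumes (channels stored as consecutive pages, frame-major) can be sketched without tifffile; the index arithmetic below mirrors the nested loop above, with fake constant-valued pages standing in for TIFF pages:

```python
import numpy as np

# Six fake "pages" for a 2-channel stack: page p holds the constant value p.
expected_channels = 2
pages = [np.full((2, 2), p, dtype=np.uint8) for p in range(6)]
frames_per_channel = len(pages) // expected_channels

all_frames = np.zeros((frames_per_channel, 2, 2, expected_channels), dtype=np.uint8)
for i in range(frames_per_channel):
    for channel in range(expected_channels):
        # Same indexing as read_tif_stack: channels interleave within a frame.
        all_frames[i, :, :, channel] = pages[i * expected_channels + channel]

print(all_frames[1, 0, 0, 0], all_frames[1, 0, 0, 1])  # 2 3
```

Frame 1 thus pulls pages 2 and 3, matching `page_index = i * expected_channels + channel`.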

readim(image_path, raw_image=False)

Read an image from a file and optionally process it.

Parameters:

Name Type Description Default
image_path str

Path to the image file.

required
raw_image bool

If True, logs an error message indicating that the raw image format cannot be processed. Default is False.

False

Returns:

Type Description
ndarray

The decoded image represented as a NumPy array of shape (height, width, channels).

Notes

Even when raw_image is True, no raw image processing is currently performed; the file is read with cv2.imread in both cases.

Examples:

>>> cv2.imread("example.jpg")
array([[[255,   0,   0],
        [255,   0,   0]],

       [[  0, 255,   0],
        [  0, 255,   0]],

       [[  0,   0, 255],
        [  0,   0, 255]]], dtype=uint8)
Source code in src/cellects/utils/load_display_save.py
def readim(image_path, raw_image: bool=False):
    """
    Read an image from a file and optionally process it.

    Parameters
    ----------
    image_path : str
        Path to the image file.
    raw_image : bool, optional
        If True, logs an error message indicating that the raw image format cannot be processed. Default is False.

    Returns
    -------
    ndarray
        The decoded image represented as a NumPy array of shape (height, width, channels).

    Notes
    -----
    Even when `raw_image` is True, no raw image processing is currently performed; the file is read with cv2.imread in both cases.

    Examples
    --------
    >>> cv2.imread("example.jpg")
    array([[[255, 0, 0],
            [255, 0, 0]],

           [[  0, 255, 0],
            [  0, 255, 0]],

           [[  0,   0, 255],
            [  0,   0, 255]]], dtype=uint8)
    """
    if raw_image:
        logging.error("Cannot read this image format. If the rawpy package can, ask for a version of Cellects using it.")
        # import rawpy
        # raw = rawpy.imread(image_path)
        # raw = raw.postprocess()
        # return cv2.cvtColor(raw, COLOR_RGB2BGR)
        return cv2.imread(image_path)
    else:
        return cv2.imread(image_path)

remove_h5_key(file_name, key='data')

Remove a specified key from an HDF5 file.

This function opens an HDF5 file in append mode and deletes the specified key if it exists. It handles exceptions related to file not found and other runtime errors.

Parameters:

Name Type Description Default
file_name str

The path to the HDF5 file from which the key should be removed.

required
key str

The name of the dataset or group to delete from the HDF5 file. Default is "data".

'data'

Returns:

Type Description
None

Raises:

Type Description
FileNotFoundError

If the specified file does not exist.

RuntimeError

If any other error occurs during file operations.

Notes

This function modifies the HDF5 file in place. Ensure you have a backup if necessary.

Source code in src/cellects/utils/load_display_save.py
def remove_h5_key(file_name, key: str="data"):
    """
    Remove a specified key from an HDF5 file.

    This function opens an HDF5 file in append mode and deletes the specified
    key if it exists. It handles exceptions related to file not found
    and other runtime errors.

    Parameters
    ----------
    file_name : str
        The path to the HDF5 file from which the key should be removed.
    key : str, optional
        The name of the dataset or group to delete from the HDF5 file.
        Default is "data".

    Returns
    -------
    None

    Raises
    ------
    FileNotFoundError
        If the specified file does not exist.
    RuntimeError
        If any other error occurs during file operations.

    Notes
    -----
    This function modifies the HDF5 file in place. Ensure you have a backup if necessary.
    """
    if os.path.isfile(file_name):
        with h5py.File(file_name, 'a') as h5f:  # Open in append mode to modify the file
            if key in h5f:
                del h5f[key]

remove_padding(array_list)

Remove padding from a list of 2D arrays.

Parameters:

Name Type Description Default
array_list list of ndarrays

List of 2D NumPy arrays to be processed.

required

Returns:

Name Type Description
out list of ndarrays

List of 2D NumPy arrays with the padding removed.

Examples:

>>> arr1 = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
>>> arr2 = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
>>> remove_padding([arr1, arr2])
[array([[1]]), array([[0]])]
Source code in src/cellects/image_analysis/network_functions.py
def remove_padding(array_list: list) -> list:
    """
    Remove padding from a list of 2D arrays.

    Parameters
    ----------
    array_list : list of ndarrays
        List of 2D NumPy arrays to be processed.

    Returns
    -------
    out : list of ndarrays
        List of 2D NumPy arrays with the padding removed.

    Examples
    --------
    >>> arr1 = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
    >>> arr2 = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
    >>> remove_padding([arr1, arr2])
    [array([[1]]), array([[0]])]
    """
    new_array_list = []
    for arr in array_list:
        new_array_list.append(un_pad(arr))
    return new_array_list

remove_small_loops(pad_skeleton, pad_distances=None)

Remove small loops from a skeletonized image.

This function identifies and removes small loops in a skeletonized image, returning the modified skeleton. If distance information is provided, it updates that as well.

Parameters:

Name Type Description Default
pad_skeleton ndarray of uint8

The skeletonized image with potential small loops.

required
pad_distances ndarray of float64

The distance map corresponding to the skeleton image. Default is None.

None

Returns:

Name Type Description
out ndarray of uint8 or tuple(ndarray of uint8, ndarray of float64)

If pad_distances is None, returns the modified skeleton. Otherwise, returns a tuple of the modified skeleton and updated distances.

Source code in src/cellects/image_analysis/network_functions.py
def remove_small_loops(pad_skeleton: NDArray[np.uint8], pad_distances: NDArray[np.float64]=None):
    """
    Remove small loops from a skeletonized image.

    This function identifies and removes small loops in a skeletonized image, returning the modified skeleton.
    If distance information is provided, it updates that as well.

    Parameters
    ----------
    pad_skeleton : ndarray of uint8
        The skeletonized image with potential small loops.
    pad_distances : ndarray of float64, optional
        The distance map corresponding to the skeleton image. Default is `None`.

    Returns
    -------
    out : ndarray of uint8 or tuple(ndarray of uint8, ndarray of float64)
        If `pad_distances` is None, returns the modified skeleton. Otherwise,
        returns a tuple of the modified skeleton and updated distances.
    """
    cnv4, cnv8 = get_neighbor_comparisons(pad_skeleton)
    # potential_tips = get_terminations_and_their_connected_nodes(pad_skeleton, cnv4, cnv8)

    cnv_diag_0 = CompareNeighborsWithValue(pad_skeleton, 0)
    cnv_diag_0.is_equal(0, and_itself=True)

    cnv4_false = CompareNeighborsWithValue(pad_skeleton, 4)
    cnv4_false.is_equal(1, and_itself=False)

    loop_centers = np.logical_and((cnv4_false.equal_neighbor_nb == 4), cnv_diag_0.equal_neighbor_nb > 2).astype(np.uint8)

    surrounding = cv2.dilate(loop_centers, kernel=square_33)
    surrounding -= loop_centers
    surrounding = surrounding * cnv8.equal_neighbor_nb

    # Every 2 can be replaced by 0 if the loop center becomes 1
    filled_loops = pad_skeleton.copy()
    filled_loops[surrounding == 2] = 0
    filled_loops += loop_centers

    new_pad_skeleton = morphology.skeletonize(filled_loops, method='lee')

    # Put the new pixels in pad_distances
    new_pixels = new_pad_skeleton * (1 - pad_skeleton)
    pad_skeleton = new_pad_skeleton.astype(np.uint8)
    if pad_distances is None:
        return pad_skeleton
    else:
        pad_distances[np.nonzero(new_pixels)] = np.nan # 2. # Put nearest value instead?
        pad_distances *= pad_skeleton
        # for yi, xi in zip(npY, npX): # yi, xi = npY[0], npX[0]
        #     distances[yi, xi] = 2.
        return pad_skeleton, pad_distances
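The loop-center test above relies on `CompareNeighborsWithValue` (defined elsewhere in the package). Its 4-neighbor half can be sketched in plain NumPy: a background pixel whose four 4-connected neighbors are all skeleton pixels marks a minimal loop (the real function additionally checks diagonal background neighbors before filling):

```python
import numpy as np

def four_neighbor_count(skel):
    """Number of 4-connected skeleton neighbors of each pixel (borders zero-padded)."""
    p = np.pad(skel, 1)
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]

# Toy skeleton: a 4-pixel diamond loop around (2, 2).
skel = np.zeros((5, 5), dtype=np.uint8)
skel[1, 2] = skel[3, 2] = skel[2, 1] = skel[2, 3] = 1

counts = four_neighbor_count(skel)
loop_centers = (skel == 0) & (counts == 4)
print(np.argwhere(loop_centers))  # [[2 2]]
```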

save_fig(img, full_path, cmap=None)

Save an image figure to a file with specified options.

This function creates a matplotlib figure from the given image, optionally applies a colormap, displays it briefly, saves the figure to disk at high resolution, and closes the figure.

Parameters:

Name Type Description Default
img array_like(M, N, 3)

Input image to be saved as a figure. Expected to be in RGB format.

required
full_path str

The complete file path where the figure will be saved. Must include extension (e.g., '.png', '.jpg').

required
cmap str or None

Colormap to be applied if the image should be displayed with a specific color map. If None, no colormap is applied.

None

Returns:

Type Description
None

This function does not return any value. It saves the figure to disk at the specified location.

Raises:

Type Description
FileNotFoundError

If the directory in full_path does not exist.

Examples:

>>> img = np.random.rand(100, 100, 3) * 255
>>> save_fig(img, 'test.png')
Creates and saves a figure from the random image to 'test.png'.
>>> save_fig(img, 'colored_test.png', cmap='viridis')
Creates and saves a figure from the random image with 'viridis' colormap
to 'colored_test.png'.
Source code in src/cellects/utils/load_display_save.py
def save_fig(img: NDArray, full_path, cmap=None):
    """
    Save an image figure to a file with specified options.

    This function creates a matplotlib figure from the given image,
    optionally applies a colormap, displays it briefly, saves the
    figure to disk at high resolution, and closes the figure.

    Parameters
    ----------
    img : array_like (M, N, 3)
        Input image to be saved as a figure. Expected to be in RGB format.
    full_path : str
        The complete file path where the figure will be saved. Must include
        extension (e.g., '.png', '.jpg').
    cmap : str or None, optional
        Colormap to be applied if the image should be displayed with a specific
        color map. If `None`, no colormap is applied.

    Returns
    -------
    None

        This function does not return any value. It saves the figure to disk
        at the specified location.

    Raises
    ------
    FileNotFoundError
        If the directory in `full_path` does not exist.

    Examples
    --------
    >>> img = np.random.rand(100, 100, 3) * 255
    >>> save_fig(img, 'test.png')
    Creates and saves a figure from the random image to 'test.png'.

    >>> save_fig(img, 'colored_test.png', cmap='viridis')
    Creates and saves a figure from the random image with 'viridis' colormap
    to 'colored_test.png'.
    """
    sizes = img.shape[0] / 100,  img.shape[1] / 100
    fig = plt.figure(figsize=(sizes[1], sizes[0]))  # figsize is (width, height)
    ax = fig.gca()
    if cmap is None:
        ax.imshow(img, interpolation="none")
    else:
        ax.imshow(img, cmap=cmap, interpolation="none")
    plt.axis('off')
    if np.min(img.shape) > 50:
        fig.tight_layout()

    fig.savefig(full_path, bbox_inches='tight', pad_inches=0., transparent=True, dpi=500)
    plt.close(fig)

scale_coordinates(coord, scale, dims)

Scale coordinates based on given scale factors and dimensions.

Parameters:

Name Type Description Default
coord ndarray

A 2x2 array of coordinates to be scaled.

required
scale tuple of float

Scaling factors for the x and y coordinates, respectively.

required
dims tuple of int

Maximum dimensions (height, width) for the scaled coordinates.

required

Returns:

Type Description
ndarray

Scaled and rounded coordinates.

int

Minimum y-coordinate.

int

Maximum y-coordinate.

int

Minimum x-coordinate.

int

Maximum x-coordinate.

Examples:

>>> coord = np.array(((47, 38), (59, 37)))
>>> scale = (0.92, 0.87)
>>> dims = (245, 300, 3)
>>> scaled_coord, min_y, max_y, min_x, max_x = scale_coordinates(coord, scale, dims)
>>> scaled_coord
array([[43, 33],
       [54, 32]])
>>> min_y, max_y
(np.int64(43), np.int64(54))
>>> min_x, max_x
(np.int64(32), np.int64(33))
Notes

This function assumes that the input coordinates are in a specific format and will fail if not. The scaling factors should be positive.

Source code in src/cellects/utils/formulas.py
def scale_coordinates(coord: NDArray, scale: Tuple, dims: Tuple) -> Tuple[NDArray[np.int64], np.int64, np.int64, np.int64, np.int64]:
    """
    Scale coordinates based on given scale factors and dimensions.

    Parameters
    ----------
    coord : numpy.ndarray
        A 2x2 array of coordinates to be scaled.
    scale : tuple of float
        Scaling factors for the x and y coordinates, respectively.
    dims : tuple of int
        Maximum dimensions (height, width) for the scaled coordinates.

    Returns
    -------
    numpy.ndarray
        Scaled and rounded coordinates.
    int
        Minimum y-coordinate.
    int
        Maximum y-coordinate.
    int
        Minimum x-coordinate.
    int
        Maximum x-coordinate.

    Examples
    --------
    >>> coord = np.array(((47, 38), (59, 37)))
    >>> scale = (0.92, 0.87)
    >>> dims = (245, 300, 3)
    >>> scaled_coord, min_y, max_y, min_x, max_x = scale_coordinates(coord, scale, dims)
    >>> scaled_coord
    array([[43, 33],
           [54, 32]])
    >>> min_y, max_y
    (np.int64(43), np.int64(54))
    >>> min_x, max_x
    (np.int64(32), np.int64(33))

    Notes
    -----
    This function assumes that the input coordinates are in a specific format
    and will fail if not. The scaling factors should be positive.
    """
    coord = np.array(((np.round(coord[0][0] * scale[0]), np.round(coord[0][1] * scale[1])),
                    (np.round(coord[1][0] * scale[0]), np.round(coord[1][1] * scale[1]))), dtype=np.int64)
    min_y = np.max((0, np.min(coord[:, 0])))
    max_y = np.min((dims[0], np.max(coord[:, 0])))
    min_x = np.max((0, np.min(coord[:, 1])))
    max_x = np.min((dims[1], np.max(coord[:, 1])))
    return coord, min_y, max_y, min_x, max_x

show(img, interactive=True, cmap=None, show=True)

Display an image using Matplotlib with optional interactivity and colormap.

Parameters:

Name Type Description Default
img ndarray

The image data to be displayed.

required
interactive bool

If True, turn on interactive mode. Default is True.

True
cmap str or Colormap

The colormap to be used. If None, the default colormap will be used.

None

show bool

If True, display the figure after creation. Default is True.

True

Returns:

Name Type Description
fig Figure

The Matplotlib figure object containing the displayed image.

ax AxesSubplot

The axes on which the image is plotted.

Raises:

Type Description
ValueError

If cmap is not a recognized colormap name or object.

Notes

If interactive mode is enabled, the user can manipulate the figure window interactively.

Examples:

>>> img = np.random.rand(100, 50)
>>> fig, ax = show(img)
>>> print(fig)
<Figure size ... with ... Axes>
>>> fig, ax = show(img, interactive=False)
>>> print(fig)
<Figure size ... with ... Axes>
>>> fig, ax = show(img, cmap='gray')
>>> print(fig)
<Figure size ... with ... Axes>
Source code in src/cellects/utils/load_display_save.py
def show(img, interactive: bool=True, cmap=None, show: bool=True):
    """
    Display an image using Matplotlib with optional interactivity and colormap.

    Parameters
    ----------
    img : ndarray
        The image data to be displayed.
    interactive : bool, optional
        If ``True``, turn on interactive mode. Default is ``True``.
    cmap : str or Colormap, optional
        The colormap to be used. If ``None``, the default colormap will
        be used.
    show : bool, optional
        If ``True``, display the figure after creation. Default is ``True``.

    Returns
    -------
    fig : Figure
        The Matplotlib figure object containing the displayed image.
    ax : AxesSubplot
        The axes on which the image is plotted.

    Raises
    ------
    ValueError
        If `cmap` is not a recognized colormap name or object.

    Notes
    -----
    If interactive mode is enabled, the user can manipulate the figure
    window interactively.

    Examples
    --------
    >>> img = np.random.rand(100, 50)
    >>> fig, ax = show(img)
    >>> print(fig) # doctest: +SKIP
    <Figure size ... with ... Axes>

    >>> fig, ax = show(img, interactive=False)
    >>> print(fig) # doctest: +SKIP
    <Figure size ... with ... Axes>

    >>> fig, ax = show(img, cmap='gray')
    >>> print(fig) # doctest: +SKIP
    <Figure size ... with ... Axes>
    """
    if interactive:
        plt.ion()
    else:
        plt.ioff()
    sizes = img.shape[0] / 100,  img.shape[1] / 100
    fig = plt.figure(figsize=(sizes[1], sizes[0]))
    ax = fig.gca()
    if cmap is None:
        ax.imshow(img, interpolation="none", extent=(0, sizes[1], 0, sizes[0]))
    else:
        ax.imshow(img, cmap=cmap, interpolation="none", extent=(0, sizes[1], 0, sizes[0]))

    if show:
        fig.tight_layout()
        fig.show()

    return fig, ax

split_dict(c_space_dict)

Split a dictionary into two dictionaries based on specific criteria and return their keys.

Split the input dictionary c_space_dict into two dictionaries: entries whose keys do not end with '2' go into the first, and entries whose keys end with '2' go into the second, with that trailing '2' removed from the key. Entries named 'logical' and entries whose values sum to zero in absolute value are skipped ('PCA' is always kept). The processed keys are also returned.

Parameters:

Name Type Description Default
c_space_dict dict

The dictionary to be split. Expected keys are strings and values can be any type.

required

Returns:

Name Type Description
first_dict dict

Dictionary containing items from c_space_dict whose keys do not end with '2'.

second_dict dict

Dictionary containing items from c_space_dict whose keys end with '2', where the key is truncated by removing its last character.

c_spaces list

List of keys from c_space_dict that have been processed.

Examples:

>>> c_space_dict = {'key1': 10, 'key2': 20, 'logical': 30}
>>> first_dict, second_dict, c_spaces = split_dict(c_space_dict)
>>> print(first_dict)
{'key1': 10}
>>> print(second_dict)
{'key': 20}
>>> print(c_spaces)
['key1', 'key']
Source code in src/cellects/utils/utilitarian.py
def split_dict(c_space_dict: dict) -> Tuple[Dict, Dict, list]:
    """

    Split a dictionary into two dictionaries based on specific criteria and return their keys.

    Split the input dictionary `c_space_dict` into two dictionaries: entries whose
    keys do not end with '2' go into the first, and entries whose keys end with '2'
    go into the second, with that trailing '2' removed from the key. Entries named
    'logical' and entries whose values sum to zero in absolute value are skipped
    ('PCA' is always kept). The processed keys are also returned.

    Parameters
    ----------
    c_space_dict : dict
        The dictionary to be split. Expected keys are strings and values can be any type.

    Returns
    -------
    first_dict : dict
        Dictionary containing items from `c_space_dict` whose keys do not end with '2'.
    second_dict : dict
        Dictionary containing items from `c_space_dict` whose keys end with '2',
        where the key is truncated by removing its last character.
    c_spaces : list
        List of keys from `c_space_dict` that have been processed.

    Examples
    --------
    >>> c_space_dict = {'key1': 10, 'key2': 20, 'logical': 30}
    >>> first_dict, second_dict, c_spaces = split_dict(c_space_dict)
    >>> print(first_dict)
    {'key1': 10}
    >>> print(second_dict)
    {'key': 20}
    >>> print(c_spaces)
    ['key1', 'key']

    """
    first_dict = Dict()
    second_dict = Dict()
    c_spaces = []
    for k, v in c_space_dict.items():
        if k == 'PCA' or k != 'logical' and np.absolute(v).sum() > 0:
            if k[-1] != '2':
                first_dict[k] = List(v)
                c_spaces.append(k)
            else:
                second_dict[k[:-1]] = List(v)
                c_spaces.append(k[:-1])
    return first_dict, second_dict, c_spaces
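Stripped of the numba typed containers, the selection and splitting rule reduces to the following sketch (plain dict stand-ins for `Dict`/`List`; illustration only):

```python
import numpy as np

def split_dict_sketch(c_space_dict):
    # Keep 'PCA' unconditionally; drop 'logical' and zero-valued entries;
    # route keys ending in '2' to the second dict with the suffix removed.
    first, second, c_spaces = {}, {}, []
    for k, v in c_space_dict.items():
        if k == 'PCA' or (k != 'logical' and np.abs(v).sum() > 0):
            if not k.endswith('2'):
                first[k] = v
                c_spaces.append(k)
            else:
                second[k[:-1]] = v
                c_spaces.append(k[:-1])
    return first, second, c_spaces

first, second, keys = split_dict_sketch({'key1': 10, 'key2': 20, 'logical': 30})
print(first, second, keys)  # {'key1': 10} {'key': 20} ['key1', 'key']
```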

sum_of_abs_differences(array1, array2)

Compute the sum of absolute differences between two arrays.

Parameters:

Name Type Description Default
array1 NDArray

The first input array.

required
array2 NDArray

The second input array.

required

Returns:

Type Description
float

Sum of absolute differences between elements of array1 and array2.

Examples:

>>> arr1 = np.array([1.2, 2.5, -3.7])
>>> arr2 = np.array([12, 25, -37])
>>> result = sum_of_abs_differences(arr1, arr2)
>>> print(result)
66.6
Source code in src/cellects/utils/formulas.py
@njit()
def sum_of_abs_differences(array1: NDArray, array2: NDArray):
    """
    Compute the sum of absolute differences between two arrays.

    Parameters
    ----------
    array1 : NDArray
        The first input array.
    array2 : NDArray
        The second input array.

    Returns
    -------
    float
        Sum of absolute differences between elements of `array1` and `array2`.

    Examples
    --------
    >>> arr1 = np.array([1.2, 2.5, -3.7])
    >>> arr2 = np.array([12, 25, -37])
    >>> result = sum_of_abs_differences(arr1, arr2)
    >>> print(result)
    66.6
    """
    return np.sum(np.absolute(array1 - array2))

to_uint8(an_array)

Convert an array to unsigned 8-bit integers.

Parameters:

Name Type Description Default
an_array ndarray

Input array to be converted. It can be of any numeric dtype.

required

Returns:

Type Description
ndarray

The input array rounded to the nearest integer and then cast to unsigned 8-bit integers.

Raises:

Type Description
TypeError

If an_array is not a ndarray.

Notes

This function uses Numba's @njit decorator for performance optimization.

Examples:

>>> result = to_uint8(np.array([1.2, 2.5, 3.7]))
>>> print(result)
[1 2 4]
Source code in src/cellects/utils/formulas.py
@njit()
def to_uint8(an_array: NDArray):
    """
    Convert an array to unsigned 8-bit integers.

    Parameters
    ----------
    an_array : ndarray
        Input array to be converted. It can be of any numeric dtype.

    Returns
    -------
    ndarray
        The input array rounded to the nearest integer and then cast to
        unsigned 8-bit integers.

    Raises
    ------
    TypeError
        If `an_array` is not a ndarray.

    Notes
    -----
    This function uses Numba's `@njit` decorator for performance optimization.

    Examples
    --------
    >>> result = to_uint8(np.array([1.2, 2.5, 3.7]))
    >>> print(result)
    [1 2 4]
    """
    out = np.empty_like(an_array)
    return np.round(an_array, 0, out).astype(np.uint8)
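Two behaviors of this conversion are easy to miss: np.round uses round-half-to-even, so 2.5 rounds to 2 rather than 3, and casting a negative float to uint8 wraps rather than clips (the exact result is platform-dependent). A quick check of the rounding rule, independent of the @njit version above:

```python
import numpy as np

halves = np.round(np.array([0.5, 1.5, 2.5, 3.5]))
print(halves)  # [0. 2. 2. 4.]  (half-to-even, a.k.a. banker's rounding)
converted = np.round(np.array([1.2, 2.5, 3.7])).astype(np.uint8)
print(converted)  # [1 2 4]
```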

translate_dict(old_dict)

Translate a dictionary to a numba typed dictionary, filtering out string values.

Parameters:

Name Type Description Default
old_dict dict

The input dictionary that may contain non-string values

required

Returns:

Name Type Description
numba_dict Dict

A typed dictionary containing only the items from old_dict where the value is not a string

Examples:

>>> result = translate_dict({'a': 1., 'b': 'string', 'c': 2.0})
>>> print(result)
{a: 1.0, c: 2.0}
Source code in src/cellects/utils/utilitarian.py
def translate_dict(old_dict: dict) -> Dict:
    """
    Translate a dictionary to a numba typed dictionary, filtering out string values.

    Parameters
    ----------
    old_dict : dict
        The input dictionary that may contain non-string values

    Returns
    -------
    numba_dict : Dict
        A typed dictionary containing only the items from `old_dict` where the value is not a string

    Examples
    --------
    >>> result = translate_dict({'a': 1., 'b': 'string', 'c': 2.0})
    >>> print(result)
    {a: 1.0, c: 2.0}
    """
    numba_dict = Dict()
    for k, v in old_dict.items():
        if not isinstance(v, str):
            if isinstance(v, list):
                v = List(v)
            numba_dict[k] = v
    return numba_dict

un_pad(arr)

Unpads a 2D NumPy array by removing the first and last row/column.

Reduces the size of a 2D array by removing the outermost rows and columns. Useful for trimming boundaries added during padding operations.

Parameters:

Name Type Description Default
arr ndarray

Input 2D array to be unpadded. Shape (n,m) is expected.

required

Returns:

Type Description
ndarray

Unpadded 2D array with shape (n-2, m-2).

Examples:

>>> arr = np.array([[0, 0, 0],
...                 [0, 4, 0],
...                 [0, 0, 0]])
>>> un_pad(arr)
array([[4]])
Source code in src/cellects/image_analysis/network_functions.py
def un_pad(arr: NDArray) -> NDArray:
    """
    Unpads a 2D NumPy array by removing the first and last row/column.

    Reduces the size of a 2D array by removing the outermost rows and columns.
    Useful for trimming boundaries added during padding operations.

    Parameters
    ----------
    arr : ndarray
        Input 2D array to be unpadded. Shape (n,m) is expected.

    Returns
    -------
    ndarray
        Unpadded 2D array with shape (n-2, m-2).

    Examples
    --------
    >>> arr = np.array([[0, 0, 0],
    ...                 [0, 4, 0],
    ...                 [0, 0, 0]])
    >>> un_pad(arr)
    array([[4]])
    """
    return arr[1:-1, 1:-1]

video2numpy(vid_name, conversion_dict=None, background=None, background2=None, true_frame_width=None)

Convert a video file to a NumPy array.

Parameters:

Name Type Description Default
vid_name str

The path to the video file. Can be a .mp4, .h5, or .tif/.tiff stack.

required
conversion_dict dict

Dictionary containing color space conversion parameters.

None
background NDArray

Background image for processing.

None
background2 NDArray

Second background image for processing.

None
true_frame_width int

True width of the frame. If specified and the current width is double this value, adjusts to true_frame_width.

None

Returns:

Type Description
NDArray or tuple of NDArrays

If conversion_dict is None, returns the video as a NumPy array. Otherwise, returns a tuple containing the original video and converted video.

Notes

This function uses OpenCV to read the contents of a .mp4 video file.

Source code in src/cellects/utils/load_display_save.py
def video2numpy(vid_name: str, conversion_dict=None, background: NDArray=None, background2: NDArray=None,
                true_frame_width: int=None):
    """
    Convert a video file to a NumPy array.

    Parameters
    ----------
    vid_name : str
        The path to the video file. Can be a `.mp4`, `.h5`, or `.tif`/`.tiff` stack.
    conversion_dict : dict, optional
        Dictionary containing color space conversion parameters.
    background : NDArray, optional
        Background image for processing.
    background2 : NDArray, optional
        Second background image for processing.
    true_frame_width : int, optional
        True width of the frame. If specified and the current width is double this value,
        adjusts to true_frame_width.

    Returns
    -------
    NDArray or tuple of NDArrays
        If conversion_dict is None, returns the video as a NumPy array.
        Otherwise, returns a tuple containing the original video and converted video.

    Notes
    -----
    This function uses OpenCV to read the contents of a `.mp4` video file.
    """
    h5_loading = vid_name[-3:] == ".h5"
    tif_loading = vid_name.endswith(".tif") or vid_name.endswith(".tiff")
    if h5_loading:
        video = read_h5(vid_name, 'video')
        dims = list(video.shape)
    elif tif_loading:
        video = read_tif_stack(vid_name)
        dims = video.shape
    else:
        cap = cv2.VideoCapture(vid_name)
        dims = [int(cap.get(cv2.CAP_PROP_FRAME_COUNT)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)), int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))]

    if true_frame_width is not None:
        if dims[2] == 2 * true_frame_width:
            dims[2] = true_frame_width

    if conversion_dict is not None:
        first_dict, second_dict, c_spaces = split_dict(conversion_dict)
        converted_video = np.empty(dims[:3], dtype=np.uint8)
        if conversion_dict['logical'] == 'None':
            converted_video2 = np.empty(dims[:3], dtype=np.uint8)
        if h5_loading or tif_loading:
            for counter in np.arange(video.shape[0]):
                img = video[counter, :, :dims[2], :]
                greyscale_image, greyscale_image2, all_c_spaces, first_pc_vector = generate_color_space_combination(img, c_spaces,
                                                                                     first_dict, second_dict, background=background,background2=background2,
                                                                                     convert_to_uint8=True)
                converted_video[counter, ...] = greyscale_image
                if conversion_dict['logical'] == 'None':
                    converted_video2[counter, ...] = greyscale_image2
            video = video[:, :, :dims[2], ...]

    if not h5_loading and not tif_loading:
        # 2) Create empty arrays to store video analysis data
        video = np.empty((dims[0], dims[1], dims[2], 3), dtype=np.uint8)
        # 3) Read and convert the video frame by frame
        counter = 0
        while cap.isOpened() and counter < dims[0]:
            ret, frame = cap.read()
            frame = frame[:, :dims[2], ...]
            video[counter, ...] = frame
            if conversion_dict is not None:
                greyscale_image, greyscale_image2, all_c_spaces, first_pc_vector = generate_color_space_combination(frame, c_spaces,
                                                                                     first_dict, second_dict, background=background,background2=background2,
                                                                                     convert_to_uint8=True)
                converted_video[counter, ...] = greyscale_image
                if conversion_dict['logical'] == 'None':
                    converted_video2[counter, ...] = greyscale_image2
            counter += 1
        cap.release()

    if conversion_dict is None:
        return video
    else:
        return video, converted_video
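The `true_frame_width` adjustment can be sketched in isolation. `adjust_frame_width` is a hypothetical helper name, not part of the library: some acquisition setups store two views side by side, so the stored width is double the true width and only the left half of each frame is kept.

```python
import numpy as np

def adjust_frame_width(dims, true_frame_width=None):
    # Hypothetical standalone sketch of video2numpy's width check:
    # if the stored frame width is exactly twice the expected width,
    # only the left half of each frame is meaningful.
    dims = list(dims)
    if true_frame_width is not None and dims[2] == 2 * true_frame_width:
        dims[2] = true_frame_width
    return dims

frames = np.zeros((5, 480, 1280, 3), dtype=np.uint8)
dims = adjust_frame_width(frames.shape[:3], true_frame_width=640)
cropped = frames[:, :, :dims[2], :]  # keep the left half, as video2numpy does
```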

video_writing_decision(arena_nb, im_or_vid, overwrite_unaltered_videos)

Determine whether to write videos based on existing files and user preferences.

Parameters:

Name Type Description Default
arena_nb int

Number of arenas to analyze.

required
im_or_vid int

Indicates whether the analysis should be performed on images or videos.

required
overwrite_unaltered_videos bool

Flag indicating whether existing unaltered videos should be overwritten.

required

Returns:

Type Description
bool

True if videos should be written, False otherwise.

Source code in src/cellects/utils/load_display_save.py
def video_writing_decision(arena_nb: int, im_or_vid: int, overwrite_unaltered_videos: bool) -> bool:
    """
    Determine whether to write videos based on existing files and user preferences.

    Parameters
    ----------
    arena_nb : int
        Number of arenas to analyze.
    im_or_vid : int
        Indicates whether the analysis should be performed on images or videos.
    overwrite_unaltered_videos : bool
        Flag indicating whether existing unaltered videos should be overwritten.

    Returns
    -------
    bool
        True if videos should be written, False otherwise.
    """
    look_for_existing_videos = insensitive_glob('ind_' + '*' + '.h5')
    there_already_are_videos = len(look_for_existing_videos) > 0
    if there_already_are_videos:
        all_files_contain_video = np.all(['video' in get_h5_keys(vid_name) for vid_name in look_for_existing_videos])
        there_already_are_videos = all_files_contain_video and there_already_are_videos
        if not there_already_are_videos:
            look_for_existing_videos = []
        there_already_are_videos = len(look_for_existing_videos) == arena_nb and there_already_are_videos
    logging.info(f"Video files (h5) found: {len(look_for_existing_videos)} for {arena_nb} arenas to analyze")
    do_write_videos = not im_or_vid and (not there_already_are_videos or (there_already_are_videos and overwrite_unaltered_videos))
    return do_write_videos
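The final decision reduces to a small boolean expression. A minimal sketch, with a hypothetical helper name and the file scan abstracted into a count of valid `.h5` files:

```python
def should_write_videos(n_valid_h5, arena_nb, im_or_vid, overwrite):
    # Hypothetical sketch of video_writing_decision's core logic:
    # n_valid_h5 stands for the number of existing 'ind_*.h5' files
    # that actually contain a 'video' dataset.
    there_already_are_videos = n_valid_h5 == arena_nb
    return (not im_or_vid) and (not there_already_are_videos or overwrite)
```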

vstack_h5_array(file_name, table, key='data')

Stack tables vertically in an HDF5 file.

This function either appends the input table to an existing dataset in the specified HDF5 file or creates a new dataset if the key doesn't exist.

Parameters:

Name Type Description Default
file_name str

Path to the HDF5 file.

required
table NDArray[uint8]

The table to be stacked vertically with the existing data.

required
key str

Key under which the dataset will be stored. Defaults to 'data'.

'data'

Examples:

>>> table = np.array([[1, 2], [3, 4]], dtype=np.uint8)
>>> vstack_h5_array('example.h5', table)
Source code in src/cellects/utils/load_display_save.py
def vstack_h5_array(file_name, table: NDArray, key: str="data"):
    """
    Stack tables vertically in an HDF5 file.

    This function either appends the input table to an existing dataset
    in the specified HDF5 file or creates a new dataset if the key doesn't exist.

    Parameters
    ----------
    file_name : str
        Path to the HDF5 file.
    table : NDArray[np.uint8]
        The table to be stacked vertically with the existing data.
    key : str, optional
        Key under which the dataset will be stored. Defaults to 'data'.

    Examples
    --------
    >>> table = np.array([[1, 2], [3, 4]], dtype=np.uint8)
    >>> vstack_h5_array('example.h5', table)
    """
    if os.path.exists(file_name):
        # Open the file in append mode
        with h5py.File(file_name, 'a') as h5f:
            if key in h5f:
                # Append to the existing dataset
                existing_data = h5f[key][:]
                new_data = np.vstack((existing_data, table))
                del h5f[key]
                h5f.create_dataset(key, data=new_data)
            else:
                # Create a new dataset if the key doesn't exist
                h5f.create_dataset(key, data=table)
    else:
        with h5py.File(file_name, 'w') as h5f:
            h5f.create_dataset(key, data=table)
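The append-by-recreate pattern (read the dataset back, stack the new rows, rewrite under the same key) can be sketched in memory, with a plain dict standing in for the HDF5 file; `vstack_into` is a hypothetical name:

```python
import numpy as np

def vstack_into(store, key, table):
    # In-memory analogue of vstack_h5_array: existing data is read,
    # stacked with the new rows, and rewritten under the same key.
    if key in store:
        store[key] = np.vstack((store[key], table))
    else:
        store[key] = table

store = {}
vstack_into(store, 'data', np.array([[1, 2]], dtype=np.uint8))
vstack_into(store, 'data', np.array([[3, 4]], dtype=np.uint8))
```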

write_h5(file_name, table, key='data')

Write a file using the h5 format.

Parameters:

Name Type Description Default
file_name str

Name of the file to write.

required
table NDArray

An array.

required
key str

The identifier of the data in this h5 file.

'data'
Source code in src/cellects/utils/load_display_save.py
def write_h5(file_name: str, table: NDArray, key: str="data"):
    """
    Write a file using the h5 format.

    Parameters
    ----------
    file_name : str
        Name of the file to write.
    table : NDArray
        The array to write.
    key : str, optional
        The identifier of the data in this h5 file.
    """
    with h5py.File(file_name, 'a') as h5f:
        if key in h5f:
            del h5f[key]
        h5f.create_dataset(key, data=table)

write_video(np_array, vid_name, is_color=True, fps=40)

Write video from numpy array.

Save a numpy array as a video file. Supports .h5 format for saving raw numpy arrays and various video formats (mp4, avi, mkv) using OpenCV. For video formats, automatically selects a suitable codec and handles file extensions.

Parameters:

Name Type Description Default
np_array ndarray of uint8

Input array containing video frames.

required
vid_name str

Filename for the output video. Can include extension or not (defaults to .mp4).

required
is_color bool

Whether the video should be written in color. Defaults to True.

True
fps int

Frame rate for the video in frames per second. Defaults to 40.

40

Examples:

>>> video_array = np.random.randint(0, 255, size=(10, 100, 100, 3), dtype=np.uint8)
>>> write_video(video_array, 'output.mp4', True, 30)
Saves `video_array` as a color video 'output.mp4' with FPS 30.
>>> video_array = np.random.randint(0, 255, size=(10, 100, 100), dtype=np.uint8)
>>> write_video(video_array, 'raw_data.h5')
Saves `video_array` as a raw numpy array file without frame rate.
Source code in src/cellects/utils/load_display_save.py
def write_video(np_array: NDArray[np.uint8], vid_name: str, is_color: bool=True, fps: int=40):
    """
    Write video from numpy array.

    Save a numpy array as a video file. Supports .h5 format for saving raw
    numpy arrays and various video formats (mp4, avi, mkv) using OpenCV.
    For video formats, automatically selects a suitable codec and handles
    file extensions.

    Parameters
    ----------
    np_array : ndarray of uint8
        Input array containing video frames.
    vid_name : str
        Filename for the output video. Can include extension or not (defaults to .mp4).
    is_color : bool, optional
        Whether the video should be written in color. Defaults to True.
    fps : int, optional
        Frame rate for the video in frames per second. Defaults to 40.

    Examples
    --------
    >>> video_array = np.random.randint(0, 255, size=(10, 100, 100, 3), dtype=np.uint8)
    >>> write_video(video_array, 'output.mp4', True, 30)
    Saves `video_array` as a color video 'output.mp4' with FPS 30.
    >>> video_array = np.random.randint(0, 255, size=(10, 100, 100), dtype=np.uint8)
    >>> write_video(video_array, 'raw_data.h5')
    Saves `video_array` as a raw numpy array file without frame rate.
    """
    # h264 or h265 (mp4)
    # linux: fourcc = 0x00000021 -> don't forget to change it below as well
    if vid_name[-4:] == '.h5':
        write_h5(vid_name, np_array, 'video')
    else:
        valid_extensions = ['.mp4', '.avi', '.mkv']
        vid_ext = vid_name[-4:]
        if vid_ext not in valid_extensions:
            vid_name = vid_name[:-4]
            vid_name += '.mp4'
            vid_ext = '.mp4'
        if vid_ext =='.mp4':
            fourcc = 0x7634706d# VideoWriter_fourcc(*'FMP4') #(*'MP4V') (*'h265') (*'x264') (*'DIVX')
        else:
            fourcc = cv2.VideoWriter_fourcc('F', 'F', 'V', '1')  # lossless
        size = np_array.shape[2], np_array.shape[1]
        vid = cv2.VideoWriter(vid_name, fourcc, float(fps), tuple(size), is_color)
        for image_i in np.arange(np_array.shape[0]):
            image = np_array[image_i, ...]
            vid.write(image)
        vid.release()
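The extension handling can be isolated as follows (`normalize_video_name` is a hypothetical helper name). Note it assumes, as the function itself does, that the name already ends with a 4-character suffix:

```python
def normalize_video_name(vid_name):
    # Sketch of write_video's extension handling: anything other than
    # .mp4/.avi/.mkv has its last four characters replaced by '.mp4'
    # (.h5 names are routed to write_h5 before this point).
    valid_extensions = ['.mp4', '.avi', '.mkv']
    vid_ext = vid_name[-4:]
    if vid_ext not in valid_extensions:
        vid_name = vid_name[:-4] + '.mp4'
        vid_ext = '.mp4'
    return vid_name, vid_ext
```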

write_video_from_images(path_to_images='', vid_name='timelapse.mp4', fps=20, img_extension='', img_radical='', crop_coord=None)

Write a video file from a sequence of images.

Extended Description

This function creates a video from a list of image files in the specified directory. To prevent the most common issues:

  • The image list is sorted
  • mp4 files are removed from the list
  • Images that do not share the same orientation are rotated accordingly
  • Images are cropped
  • Color vs greyscale is determined automatically

After processing, images are compiled into a video file.

Parameters:

Name Type Description Default
path_to_images str

The directory where the images are located.

''
vid_name str

The name of the output video file.

'timelapse.mp4'
fps int

The frames per second for the video.

20
img_extension str

The file extension of the images.

''
img_radical str

The common prefix of the image filenames.

''
crop_coord list

List containing four crop coordinates: [top, bot, left, right]. If None, the whole image is used.

None

Examples:

>>> write_video_from_images('path/to/images', vid_name='timelapse.mp4')
This will create a video file named 'timelapse.mp4' from the images in the specified directory.

Source code in src/cellects/utils/load_display_save.py
def write_video_from_images(path_to_images='', vid_name: str='timelapse.mp4', fps: int=20, img_extension: str='',
                            img_radical: str='', crop_coord: list=None):
    """
    Write a video file from a sequence of images.

     Extended Description
     --------------------
     This function creates a video from a list of image files in the specified directory.
     To prevent the most common issues:
     - The image list is sorted
     - mp4 files are removed
     - If they do not have the same orientation, rotate the images accordingly
     - Images are cropped
     - Color vs greyscale is automatically determined

     After processing, images are compiled into a video file.

     Parameters
     ----------
     path_to_images : str
         The directory where the images are located.
     vid_name : str, optional
         The name of the output video file. Default is 'timelapse.mp4'.
     fps : int, optional
         The frames per second for the video. Default is 20.
     img_extension : str, optional
         The file extension of the images. Default is an empty string.
     img_radical : str, optional
         The common prefix of the image filenames. Default is an empty string.
     crop_coord : list, optional
         list containing four crop coordinates: [top, bot, left, right]. Default is None and takes the whole image.

     Examples
     --------
     >>> write_video_from_images('path/to/images', vid_name='timelapse.mp4')
     This will create a video file named 'timelapse.mp4' from the images in the specified directory.
    """
    if isinstance(path_to_images, str):
        path_to_images = Path(path_to_images)
    os.chdir(path_to_images)
    imgs = list_image_dir(path_to_images, img_extension, img_radical)
    is_raw = is_raw_image(imgs[0])
    image, prev_img = read_rotate_crop_and_reduce_image(imgs[0], crop_coord=crop_coord, raw_images=is_raw)
    is_landscape = image.shape[0] < image.shape[1]
    is_color: bool = True
    if len(image.shape) == 2:
        is_color = False
    video = np.zeros((len(imgs), image.shape[0], image.shape[1], 3), dtype=np.uint8)
    for i_, img in enumerate(imgs):
        video[i_], prev_img = read_rotate_crop_and_reduce_image(img, prev_img=prev_img, crop_coord=crop_coord,
                                                                raw_images=is_raw, is_landscape=is_landscape)
    write_video(video, vid_name=vid_name, is_color=is_color, fps=fps)

write_video_sets(img_list, sizes, vid_names, crop_coord, bounding_boxes, bunch_nb, video_nb_per_bunch, remaining, raw_images, is_landscape, use_list_of_vid, in_colors=False, reduce_image_dim=False, pathway='')

Write video sets from a list of images, applying cropping and optional rotation.

Parameters:

Name Type Description Default
img_list list

List of image file names.

required
sizes NDArray

Array containing the dimensions of each video frame.

required
vid_names list

List of video file names to be saved.

required
crop_coord dict or tuple

Coordinates for cropping regions of interest in images/videos.

required
bounding_boxes tuple

Bounding box coordinates to extract sub-images from the original images.

required
bunch_nb int

Number of bunches to divide the videos into.

required
video_nb_per_bunch int

Number of videos per bunch.

required
remaining int

Number of videos remaining after the last full bunch.

required
raw_images bool

Whether the images are in raw format.

required
is_landscape bool

If true, rotate the images to landscape orientation before processing.

required
use_list_of_vid bool

Flag indicating if the output should be a list of videos.

required
in_colors bool

If true, process images with color information. Default is False.

False
reduce_image_dim bool

If true, reduce image dimensions. Default is False.

False
pathway str

Path where the videos should be saved. Default is an empty string.

''
Source code in src/cellects/utils/load_display_save.py
def write_video_sets(img_list: list, sizes: NDArray, vid_names: list, crop_coord, bounding_boxes,
                      bunch_nb: int, video_nb_per_bunch: int, remaining: int,
                      raw_images: bool, is_landscape: bool, use_list_of_vid: bool,
                      in_colors: bool=False, reduce_image_dim: bool=False, pathway: str=""):
    """
    Write video sets from a list of images, applying cropping and optional rotation.

    Parameters
    ----------
    img_list : list
        List of image file names.
    sizes : NDArray
        Array containing the dimensions of each video frame.
    vid_names : list
        List of video file names to be saved.
    crop_coord : dict or tuple
        Coordinates for cropping regions of interest in images/videos.
    bounding_boxes : tuple
        Bounding box coordinates to extract sub-images from the original images.
    bunch_nb : int
        Number of bunches to divide the videos into.
    video_nb_per_bunch : int
        Number of videos per bunch.
    remaining : int
        Number of videos remaining after the last full bunch.
    raw_images : bool
        Whether the images are in raw format.
    is_landscape : bool
        If true, rotate the images to landscape orientation before processing.
    use_list_of_vid : bool
        Flag indicating if the output should be a list of videos.
    in_colors : bool, optional
        If true, process images with color information. Default is False.
    reduce_image_dim : bool, optional
        If true, reduce image dimensions. Default is False.
    pathway : str, optional
        Path where the videos should be saved. Default is an empty string.
    """
    top, bot, left, right = bounding_boxes
    for bunch in np.arange(bunch_nb):
        print(f'\nSaving the video bunch n°{bunch + 1} (tot={bunch_nb})...', end=' ')
        if bunch == (bunch_nb - 1) and remaining > 0:
            arenas = np.arange(bunch * video_nb_per_bunch, bunch * video_nb_per_bunch + remaining, dtype=np.uint32)
        else:
            arenas = np.arange(bunch * video_nb_per_bunch, (bunch + 1) * video_nb_per_bunch, dtype=np.uint32)
        if use_list_of_vid:
            video_bunch = [np.zeros(sizes[i, :], dtype=np.uint8) for i in arenas]
        else:
            video_bunch = np.zeros(np.append(sizes[0, :], len(arenas)), dtype=np.uint8)
        prev_img = None
        images_done = bunch * len(img_list)
        for image_i, image_name in enumerate(img_list):
            img = read_and_rotate(image_name, prev_img, raw_images, is_landscape, crop_coord)
            prev_img = img.copy()
            if not in_colors and reduce_image_dim:
                img = img[:, :, 0]

            for arena_i, arena_name in enumerate(arenas):
                # arena_i = 0; arena_name = arena[arena_i]
                sub_img = img[top[arena_name]:bot[arena_name], left[arena_name]:right[arena_name], ...]
                if use_list_of_vid:
                    video_bunch[arena_i][image_i, ...] = sub_img
                else:
                    if len(video_bunch.shape) == 5:
                        video_bunch[image_i, :, :, :, arena_i] = sub_img
                    else:
                        video_bunch[image_i, :, :, arena_i] = sub_img
        for arena_i, arena_name in enumerate(arenas):
            if use_list_of_vid:
                 write_h5(pathway + vid_names[arena_name], video_bunch[arena_i], 'video')
            else:
                if len(video_bunch.shape) == 5:
                     write_h5(pathway + vid_names[arena_name], video_bunch[:, :, :, :, arena_i], 'video')
                else:
                     write_h5(pathway + vid_names[arena_name], video_bunch[:, :, :, arena_i], 'video')
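The per-bunch arena indexing (every bunch covers `video_nb_per_bunch` arenas except possibly the last, which covers only the leftovers) can be sketched on its own; `arena_indices` is a hypothetical name:

```python
import numpy as np

def arena_indices(bunch, bunch_nb, video_nb_per_bunch, remaining):
    # Mirrors write_video_sets' index computation: the last bunch may
    # hold only the 'remaining' leftover arenas.
    start = bunch * video_nb_per_bunch
    if bunch == bunch_nb - 1 and remaining > 0:
        return np.arange(start, start + remaining, dtype=np.uint32)
    return np.arange(start, start + video_nb_per_bunch, dtype=np.uint32)
```

For example, with 10 arenas split into 3 bunches of 4, the last bunch holds only the 2 remaining arenas.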

zoom_on_nonzero(binary_image, padding=2, return_coord=True)

Crops a binary image around non-zero elements with optional padding and returns either coordinates or cropped region.

Parameters:

Name Type Description Default
binary_image NDArray

2D NumPy array containing binary values (0/1)

required
padding int

Amount of zero-padding to add around the minimum bounding box

2
return_coord bool

If True, return slice coordinates instead of cropped image

True

Returns:

Type Description
If `return_coord` is True: (y_min, y_max, x_min, x_max) as a 4-element tuple.

If False: 2D binary array representing the cropped region defined by non-zero elements plus padding.

Examples:

>>> img = np.zeros((10,10))
>>> img[3:7,4:6] = 1
>>> result = zoom_on_nonzero(img)
>>> print(result)
[1 8 2 7]
>>> cropped = zoom_on_nonzero(img, return_coord=False)
>>> print(cropped.shape)
(6, 5)
Notes
  • Raises a ValueError if the input contains no non-zero elements.
  • Coordinate indices are 0-based and compatible with NumPy array slicing syntax.
Source code in src/cellects/utils/load_display_save.py
def zoom_on_nonzero(binary_image:NDArray, padding: int = 2, return_coord: bool=True):
    """
    Crops a binary image around non-zero elements with optional padding and returns either coordinates or cropped region.

    Parameters
    ----------
    binary_image : NDArray
        2D NumPy array containing binary values (0/1)
    padding : int, default=2
        Amount of zero-padding to add around the minimum bounding box
    return_coord : bool, default=True
        If True, return slice coordinates instead of cropped image

    Returns
    -------
        If `return_coord` is True: (y_min, y_max, x_min, x_max) as a 4-element tuple.
        If False: 2D binary array representing the cropped region defined by non-zero elements plus padding.

    Examples
    --------
    >>> img = np.zeros((10,10))
    >>> img[3:7,4:6] = 1
    >>> result = zoom_on_nonzero(img)
    >>> print(result)
    (1, 9, 2, 8)
    >>> cropped = zoom_on_nonzero(img, return_coord=False)
    >>> print(cropped.shape)
    (8, 6)

    Notes
    -----
    - Raises a ValueError if the input contains no non-zero elements.
    - Coordinate indices are 0-based and compatible with NumPy array slicing syntax.
    """
    y, x = np.nonzero(binary_image)
    cy_min = np.max((0, y.min() - padding))
    cy_max = np.min((binary_image.shape[0], y.max() + padding + 1))
    cx_min = np.max((0, x.min() - padding))
    cx_max = np.min((binary_image.shape[1], x.max() + padding + 1))
    if return_coord:
        return cy_min, cy_max, cx_min, cx_max
    else:
        return binary_image[cy_min:cy_max, cx_min:cx_max]
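The clamped bounding-box arithmetic can be reproduced standalone (`bounding_coords` is a hypothetical helper name): pad the bounding box of the non-zero pixels, then clamp it to the image borders.

```python
import numpy as np

def bounding_coords(binary_image, padding=2):
    # Same arithmetic as zoom_on_nonzero(return_coord=True): pad the
    # bounding box of the non-zero pixels and clamp it to the image.
    y, x = np.nonzero(binary_image)
    cy_min = max(0, int(y.min()) - padding)
    cy_max = min(binary_image.shape[0], int(y.max()) + padding + 1)
    cx_min = max(0, int(x.min()) - padding)
    cx_max = min(binary_image.shape[1], int(x.max()) + padding + 1)
    return cy_min, cy_max, cx_min, cx_max

img = np.zeros((10, 10), dtype=np.uint8)
img[3:7, 4:6] = 1
coords = bounding_coords(img)  # (1, 9, 2, 8)
cropped = img[coords[0]:coords[1], coords[2]:coords[3]]
```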