cellects.core.motion_analysis
Module for analyzing motion, growth patterns, and structural properties of biological specimens in video data.
This module provides comprehensive tools to analyze videos of biological samples (e.g., cell colonies) by:

1. Loading and converting RGB videos to grayscale using configurable color space combinations
2. Performing multi-strategy segmentation (frame-by-frame, intensity thresholding, derivative-based detection)
3. Applying post-processing steps, including error-correction algorithms for shape continuity
4. Computing morphological descriptors over time (area, perimeter, fractal dimension, etc.)
5. Detecting network structures and oscillatory behavior in dynamic biological systems
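The grayscale conversion in step 1 can be illustrated with a minimal sketch. The function name and the weighted-channel scheme below are illustrative assumptions, not Cellects' actual API:

```python
import numpy as np

def combine_channels(frame, weights):
    """Collapse an RGB frame to grayscale as a weighted sum of its channels.

    frame   : (H, W, 3) uint8 array
    weights : length-3 sequence, one weight per channel
    """
    gray = np.tensordot(frame.astype(np.float64),
                        np.asarray(weights, dtype=np.float64),
                        axes=([2], [0]))
    # Rescale to the full uint8 range so later thresholding behaves consistently
    gray -= gray.min()
    if gray.max() > 0:
        gray *= 255.0 / gray.max()
    return gray.astype(np.uint8)
```

Applying this per frame yields the grayscale video that the segmentation strategies then operate on.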
Classes:

| Name | Description |
|---|---|
| MotionAnalysis | Processes video data to analyze specimen motion, growth patterns, and structural properties. Provides methods for loading videos, performing segmentation using multiple algorithms, post-processing results with error correction, extracting morphological descriptors, detecting network structures, analyzing oscillations, and saving processed outputs. |
Functions:

| Name | Description |
|---|---|
| load_images_and_videos | Loads and converts video files to the appropriate format for analysis. |
| get_converted_video | Converts RGB video to grayscale based on specified color space parameters. |
| detection | Performs multi-strategy segmentation of the specimen across all frames. |
| update_shape | Updates the segmented shape with post-processing steps such as noise filtering and hole filling. |
| save_results | Saves processed data, efficiency tests, and annotated videos. |
Notes

Features of this module include:

- Processes large video datasets with memory-optimization strategies, including typed NumPy arrays and progressive processing techniques.
- Supports both single-specimen and multi-specimen analysis through configurable parameters.
- Segmentation strategies include intensity-based thresholding, gradient detection, and combinations thereof.
- Post-processing includes morphological operations to refine segmented regions and error correction for specific use cases (e.g., Physarum polycephalum).
- Biological network detection and graph extraction to represent network structures as vertex-edge tables.
- Detection of biological oscillatory patterns.
- Fractal dimension calculation.
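The fractal dimension mentioned above is commonly estimated by box counting; the sketch below shows the generic technique under that assumption, not Cellects' exact implementation:

```python
import numpy as np

def box_counting_dimension(binary, box_sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting (Minkowski-Bouligand) dimension of a binary mask.

    For several box sizes s, count the number N(s) of s-by-s boxes containing
    at least one foreground pixel, then fit log N(s) against log(1/s).
    """
    counts = []
    for s in box_sizes:
        h = binary.shape[0] // s * s
        w = binary.shape[1] // s * s
        # Tile the image into s-by-s blocks and mark non-empty blocks
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope
```

A fully filled square yields a dimension close to 2, while a ramified network perimeter falls between 1 and 2.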
CompareNeighborsWithValue
CompareNeighborsWithValue class to summarize each pixel by comparing its neighbors to a value.
This class analyzes pixels in a 2D array, comparing each pixel's neighbors to a specified value. The comparison can be equality, superiority, or inferiority, and neighbors can be the 4 or 8 nearest pixels based on the connectivity parameter.
Source code in src/cellects/image_analysis/morphological_operations.py
__init__(array, connectivity=None, data_type=np.int8)
Initialize a class for array connectivity processing.
This class processes arrays based on given connectivities, creating windows around the original data for both 1D and 2D arrays. Depending on the connectivity value (4 or 8), it creates different windows with borders.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| array | ndarray | Input array to process; can be 1D or 2D. | required |
| connectivity | int | Connectivity type for processing (4 or 8). | None |
| data_type | dtype | Data type for the array elements. | np.int8 |
Attributes:

| Name | Type | Description |
|---|---|---|
| array | ndarray | The processed array, cast to the given data type. |
| connectivity | int | Connectivity value used for processing. |
| on_the_right | ndarray | Array with elements shifted to the right. |
| on_the_left | ndarray | Array with elements shifted to the left. |
| on_the_bot | ndarray, optional | Array with elements shifted to the bottom (for 2D arrays). |
| on_the_top | ndarray, optional | Array with elements shifted to the top (for 2D arrays). |
| on_the_topleft | ndarray, optional | Array with elements shifted to the top left (for 2D arrays). |
| on_the_topright | ndarray, optional | Array with elements shifted to the top right (for 2D arrays). |
| on_the_botleft | ndarray, optional | Array with elements shifted to the bottom left (for 2D arrays). |
| on_the_botright | ndarray, optional | Array with elements shifted to the bottom right (for 2D arrays). |
Source code in src/cellects/image_analysis/morphological_operations.py
is_equal(value, and_itself=False)
Check equality of neighboring values in an array.
This method compares the neighbors of each element in self.array to a given value.
Depending on the dimensionality and connectivity settings, it checks different neighboring
elements.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| value | int or float | The value to check equality with neighboring elements. | required |
| and_itself | bool | If True, also check equality with the element itself. | False |

Returns:

| Type | Description |
|---|---|
| None | |
Attributes:

| Name | Type | Description |
|---|---|---|
| equal_neighbor_nb | ndarray of uint8 | Array holding the number of equal neighbors for each element. |
Examples:
>>> matrix = np.array([[9, 0, 4, 6], [4, 9, 1, 3], [7, 2, 1, 4], [9, 0, 8, 5]], dtype=np.int8)
>>> compare = CompareNeighborsWithValue(matrix, connectivity=4)
>>> compare.is_equal(1)
>>> print(compare.equal_neighbor_nb)
[[0 0 1 0]
[0 1 1 1]
[0 1 1 1]
[0 0 1 0]]
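The output above can be reproduced with a minimal sketch of the shifted-window idea. This sketch uses wrap-around shifts via `np.roll`, which matches the example for this matrix; the library's actual border handling may differ:

```python
import numpy as np

def count_neighbors_matching(arr, value, connectivity=4):
    """Count, per cell, how many neighbors are equal to `value`."""
    mask = (arr == value)
    shifts = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if connectivity == 8:
        shifts += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    out = np.zeros(arr.shape, dtype=np.uint8)
    for dy, dx in shifts:
        # Shift the match mask so each cell sees one of its neighbors
        out += np.roll(mask, (dy, dx), axis=(0, 1))
    return out
```

The `is_sup` and `is_inf` methods follow the same pattern with `>` and `<` comparisons in place of equality.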
Source code in src/cellects/image_analysis/morphological_operations.py
is_inf(value, and_itself=False)

Determine the number of neighbors that are strictly inferior (smaller) relative to a given value, considering optional connectivity and, optionally, exclusion of the element itself.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| value | numeric | The value to compare neighbor elements against. | required |
| and_itself | bool | If True, excludes the element itself from being counted. | False |
Examples:
>>> matrix = np.array([[9, 0, 4, 6], [4, 9, 1, 3], [7, 2, 1, 4], [9, 0, 8, 5]], dtype=np.int8)
>>> compare = CompareNeighborsWithValue(matrix, connectivity=4)
>>> compare.is_inf(1)
>>> print(compare.inf_neighbor_nb)
[[1 1 1 0]
[0 1 0 0]
[0 1 0 0]
[1 1 1 0]]
Source code in src/cellects/image_analysis/morphological_operations.py
is_sup(value, and_itself=False)
Determine if pixels have more neighbors with higher values than a given threshold.
This method computes the number of neighboring pixels that have values greater
than a specified value for each pixel in the array. Optionally, it can exclude
the pixel itself if its value is less than or equal to value.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| value | int | The threshold value used to determine whether a neighboring pixel's value is greater. | required |
| and_itself | bool | If True, exclude the pixel itself if its value is less than or equal to value. | False |
Examples:
>>> matrix = np.array([[9, 0, 4, 6], [4, 9, 1, 3], [7, 2, 1, 4], [9, 0, 8, 5]], dtype=np.int8)
>>> compare = CompareNeighborsWithValue(matrix, connectivity=4)
>>> compare.is_sup(1)
>>> print(compare.sup_neighbor_nb)
[[3 3 2 4]
[4 2 3 3]
[4 2 3 3]
[3 3 2 4]]
Source code in src/cellects/image_analysis/morphological_operations.py
EdgeIdentification
Identify edges within a skeleton structure based on provided skeleton and distance arrays.

This class performs various operations to refine and label edges, ultimately producing a fully identified network.
Source code in src/cellects/image_analysis/network_functions.py
__init__(pad_skeleton, pad_distances, t=0)
Initialize the class with skeleton and distance arrays.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| pad_skeleton | ndarray of uint8 | Array representing the skeleton to pad. | required |
| pad_distances | ndarray of float64 | Array representing distances corresponding to the skeleton. | required |
Attributes:

| Name | Type | Description |
|---|---|---|
| remaining_vertices | None | Remaining vertices. Initialized as None. |
| vertices | None | Vertices. Initialized as None. |
| growing_vertices | None | Growing vertices. Initialized as None. |
| im_shape | tuple of ints | Shape of the skeleton array. |
Source code in src/cellects/image_analysis/network_functions.py
clear_areas_of_1_or_2_unidentified_pixels()
Removes 1- or 2-pixel non-identified areas from the skeleton.
This function checks whether small non-identified areas (1 or 2 pixels) can be removed without breaking the skeleton structure. It performs a series of operations to ensure only safe removals are made, logging errors if the final skeleton is not fully connected or if some unidentified pixels remain.
Source code in src/cellects/image_analysis/network_functions.py
clear_edge_duplicates()
Remove duplicate edges by checking vertices and coordinates.
This method identifies and removes duplicate edges based on their vertex labels and pixel coordinates. It scans through the edge attributes, compares them, and removes duplicates if they are found.
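The deduplication idea can be sketched on a generic edge list; the `(vertex, vertex, pixels)` representation here is a hypothetical stand-in for the class's internal edge attributes:

```python
def drop_duplicate_edges(edges):
    """Keep the first edge per (unordered vertex pair, pixel set).

    edges: list of (v1, v2, pixel_coords) where pixel_coords is a list of (y, x).
    """
    seen = set()
    unique = []
    for v1, v2, pixels in edges:
        # Frozensets make the key independent of vertex order and pixel order
        key = (frozenset((v1, v2)), frozenset(pixels))
        if key not in seen:
            seen.add(key)
            unique.append((v1, v2, pixels))
    return unique
```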
Source code in src/cellects/image_analysis/network_functions.py
clear_vertices_connecting_2_edges()
Remove vertices connecting exactly two edges and update edge-related attributes.
This method identifies vertices that are connected to exactly 2 edges, renames edges, updates edge lengths and vertex coordinates accordingly. It also removes the corresponding vertices from non-tip vertices list.
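The simplification can be sketched on a generic `(v1, v2, length)` edge list (a hypothetical representation, not the class's internals): a vertex touching exactly two edges is removed and its two edges merged, summing their lengths:

```python
from collections import defaultdict

def contract_degree_two_vertices(edges, keep=()):
    """Merge edge pairs around degree-2 vertices; `keep` lists vertices to preserve (e.g. tips)."""
    edges = list(edges)
    changed = True
    while changed:
        changed = False
        degree = defaultdict(list)
        for i, (a, b, _) in enumerate(edges):
            degree[a].append(i)
            degree[b].append(i)
        for v, idx in degree.items():
            # Skip protected vertices and self-loops (which list the same edge twice)
            if v not in keep and len(idx) == 2 and idx[0] != idx[1]:
                (a1, b1, l1), (a2, b2, l2) = edges[idx[0]], edges[idx[1]]
                ends = [x for x in (a1, b1, a2, b2) if x != v]
                if len(ends) == 2:
                    # Replace the two incident edges by a single merged edge
                    for i in sorted(idx, reverse=True):
                        edges.pop(i)
                    edges.append((ends[0], ends[1], l1 + l2))
                    changed = True
                    break
    return edges
```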
Source code in src/cellects/image_analysis/network_functions.py
get_tipped_edges()
Extract skeleton edges connecting branching points and tips.

Ensures that the skeleton of the network consists of a single connected component and identifies all edges that are connected to a tip.
Attributes:

| Name | Type | Description |
|---|---|---|
| pad_skeleton | ndarray of bool, modified | Boolean mask representing the pruned skeleton after isolating the largest connected component. |
| vertices_branching_tips | ndarray of int, shape (N, 2) | Coordinates of branching points that connect to tips in the skeleton structure. |
| edge_lengths | ndarray of float, shape (M,) | Lengths of edges connecting non-tip vertices to identified tip locations. |
| edge_pix_coord | list of array of int | Pixel coordinates for each edge path between connected skeleton elements. |
Source code in src/cellects/image_analysis/network_functions.py
get_vertices_and_tips_coord()
Process skeleton data to extract non-tip vertices and tip coordinates.
This method processes the skeleton stored in self.pad_skeleton by first
extracting all vertices and tips. It then separates these into branch points
(non-tip vertices) and specific tip coordinates using internal processing.
Attributes:

| Name | Type | Description |
|---|---|---|
| self.non_tip_vertices | array-like | Coordinates of non-tip (branch) vertices. |
| self.tips_coord | array-like | Coordinates of identified tips in the skeleton. |
Source code in src/cellects/image_analysis/network_functions.py
label_edges_connected_with_vertex_clusters()
Identify edges connected to touching vertices by processing vertex clusters.
This function processes the skeleton to identify edges connecting vertices that are part of touching clusters. It creates a cropped version of the skeleton by removing already detected edges and their tips, then iterates through vertex clusters to explore and identify nearby edges.
Source code in src/cellects/image_analysis/network_functions.py
label_edges_connecting_vertex_clusters()
Label edges connecting vertex clusters.
This method identifies the connections between connected vertices within vertex clusters and labels these edges. It uses the previously found connected vertices, creates an image of the connections, and then identifies and labels the edges between these touching vertices.
Source code in src/cellects/image_analysis/network_functions.py
label_edges_from_known_vertices_iteratively()
Label edges iteratively from known vertices.
This method labels edges in an iterative process starting from known vertices. It handles the removal of detected edges and updates the skeleton accordingly, to avoid detecting edges twice.
Source code in src/cellects/image_analysis/network_functions.py
label_edges_looping_on_1_vertex()
Identify and handle edges that form loops around a single vertex. This method processes the skeleton image to find looping edges and updates the edge data structure accordingly.
Source code in src/cellects/image_analysis/network_functions.py
label_tipped_edges_and_their_vertices()
Label edges connecting tip vertices to branching vertices and assign unique labels to all relevant vertices.
Processes vertex coordinates by stacking tips, vertices branching from tips, and remaining non-tip vertices. Assigns unique sequential identifiers to these vertices in a new array. Constructs an array of edge-label information, where each row contains the edge label (starting at 1), corresponding tip label, and connected vertex label.
Attributes:

| Name | Type | Description |
|---|---|---|
| tip_number | int | The number of tip coordinates available in self.tips_coord. |
| ordered_v_coord | ndarray of float | Stack of unique vertex coordinates, ordered tips first, vertices branching tips second, non-tip vertices third. |
| numbered_vertices | ndarray of uint32 | 2D array where each coordinate position is labeled with a sequential integer (starting at 1) based on the order in ordered_v_coord. |
| edges_labels | ndarray of uint32 | Array of shape (n_edges, 3). Each row contains the edge label (sequential from 1 to n_edges), the label of the tip vertex for that edge, and the label of the vertex branching the tip. |
| vertices_branching_tips | ndarray of float | Unique coordinates of vertices directly connected to tips, after removing duplicates. |
Source code in src/cellects/image_analysis/network_functions.py
make_edge_table(greyscale, compute_BC=False)
Generate edge table with length and average intensity information.
This method processes the vertex coordinates, calculates lengths between vertices for each edge, and computes average width and intensity along the edges. When compute_BC is True, it also computes edge betweenness centrality for each vertex pair.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| greyscale | ndarray of uint8 | Grayscale image. | required |
| compute_BC | bool | If True, also compute edge betweenness centrality. | False |
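The per-edge measurements can be sketched with an illustrative helper (an assumption about the approach, not the class method itself): length from Euclidean step distances along the pixel path, intensity averaged over the same pixels:

```python
import numpy as np

def edge_length_and_intensity(pixel_coords, greyscale):
    """Measure one edge given its ordered pixel path.

    pixel_coords : (n, 2) int array of (row, col) along the edge
    greyscale    : 2D uint8 image
    Returns (length, mean_intensity): length sums step distances
    (1 for axis moves, sqrt(2) for diagonal moves).
    """
    coords = np.asarray(pixel_coords)
    steps = np.diff(coords, axis=0)
    length = np.sqrt((steps ** 2).sum(axis=1)).sum()
    mean_intensity = greyscale[coords[:, 0], coords[:, 1]].mean()
    return float(length), float(mean_intensity)
```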
Source code in src/cellects/image_analysis/network_functions.py
make_vertex_table(origin_contours=None, growing_areas=None)
Generate a table for the vertices.
This method constructs and returns a 2D NumPy array holding information
about all vertices. Each row corresponds to one vertex identified either
by its coordinates in self.tips_coord or self.non_tip_vertices. The
array includes additional information about each vertex, including whether
they are food vertices, growing areas, and connected components.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| origin_contours | ndarray of uint8 | Binary map to identify food vertices. | None |
| growing_areas | ndarray | Binary map to identify growing regions. | None |
Notes
The method updates the instance attribute `self.vertex_table` with
the generated vertex information.
Source code in src/cellects/image_analysis/network_functions.py
remove_tipped_edge_smaller_than_branch_width()
Remove very short edges from the skeleton.
This method focuses on edges connecting tips. When too short, they are considered noise and removed from the skeleton and distances matrices. An edge is considered too short when its length is smaller than the width of the nearest network branch (information included in pad_distances). This method also updates internal data structures (skeleton, edge coordinates, vertex/tip positions) accordingly through pixel-wise analysis and connectivity checks.
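The pruning criterion can be sketched as a simple predicate (a hypothetical helper; it assumes the value of pad_distances at a skeleton pixel approximates the local branch width there):

```python
import numpy as np

def tipped_edges_to_remove(edge_lengths, branch_vertices, pad_distances):
    """Flag tipped edges shorter than the local branch width.

    edge_lengths    : (M,) float array, length of each tip-connected edge
    branch_vertices : (M, 2) int array, (row, col) of each edge's branching vertex
    pad_distances   : 2D float array of distance-transform values
    Returns a boolean mask of edges considered noise.
    """
    branch_vertices = np.asarray(branch_vertices)
    local_width = pad_distances[branch_vertices[:, 0], branch_vertices[:, 1]]
    return np.asarray(edge_lengths) < local_width
```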
Source code in src/cellects/image_analysis/network_functions.py
run_edge_identification()
Run the edge identification process.
This method orchestrates a series of steps to identify and label edges within the graph structure. Each step handles a specific aspect of edge identification, ultimately leading to a clearer and more refined edge network.
Steps involved:

1. Get vertices and tips coordinates.
2. Identify tipped edges.
3. Remove tipped edges smaller than branch width.
4. Label tipped edges and their vertices.
5. Label edges connected with vertex clusters.
6. Label edges connecting vertex clusters.
7. Label edges from known vertices iteratively.
8. Label edges looping on 1 vertex.
9. Clear areas with 1 or 2 unidentified pixels.
10. Clear edge duplicates.
11. Clear vertices connecting 2 edges.
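The orchestration pattern, each step a method called in a fixed sequence, can be sketched generically (method bodies omitted; only the real step names from this class are used):

```python
class EdgePipeline:
    """Minimal sketch of the step-by-step orchestration used by
    run_edge_identification."""

    STEPS = (
        "get_vertices_and_tips_coord",
        "get_tipped_edges",
        "remove_tipped_edge_smaller_than_branch_width",
        "label_tipped_edges_and_their_vertices",
        "label_edges_connected_with_vertex_clusters",
        "label_edges_connecting_vertex_clusters",
        "label_edges_from_known_vertices_iteratively",
        "label_edges_looping_on_1_vertex",
        "clear_areas_of_1_or_2_unidentified_pixels",
        "clear_edge_duplicates",
        "clear_vertices_connecting_2_edges",
    )

    def __init__(self):
        self.log = []  # records the order in which steps ran

    def run(self):
        # Each step refines the network before the next one runs
        for name in self.STEPS:
            self.log.append(name)
        return self.log
```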
Source code in src/cellects/image_analysis/network_functions.py
MotionAnalysis
Source code in src/cellects/core/motion_analysis.py
__init__(l)
Analyzes motion in a given arena using video data.
This class processes video frames to analyze motion within a specified area, detecting shapes, covering durations, and generating descriptors for further analysis.
Args:

- l (list): A list containing the parameters and flags necessary for the motion analysis:
    - l[0] (int): Arena index.
    - l[1] (str): Arena identifier or name, stored in one_descriptor_per_arena['arena'].
    - l[2] (dict): Variables required for the analysis, stored in vars.
    - l[3] (bool): Flag to detect shape.
    - l[4] (bool): Flag to analyze shape.
    - l[5] (bool): Flag to show segmentation.
    - l[6] (None or list): Videos already in RAM.
Attributes:

- vars (dict): Variables required for the analysis.
- visu (None): Placeholder for visualization data.
- binary (None): Placeholder for binary segmentation data.
- origin_idx (None): Placeholder for the index of the first frame.
- smoothing_flag (bool): Flag indicating whether smoothing should be applied.
- dims (tuple): Dimensions of the converted video.
- segmentation (ndarray): Array to store segmentation data.
- covering_intensity (ndarray): Intensity values for covering analysis.
- mean_intensity_per_frame (ndarray): Mean intensity per frame.
- borders (object): Borders of the arena.
- pixel_ring_depth (int): Depth of the pixel ring for analysis, default 9.
- step (int): Step size for processing, default 10.
- lost_frames (int): Number of lost frames to account for, default 10.
- start (None or int): Starting frame index for the analysis.
Methods:

- load_images_and_videos(videos_already_in_ram, arena_idx): Loads images and videos for the specified arena index.
- update_ring_width(): Updates the width of the pixel ring for analysis.
- get_origin_shape(): Detects the origin shape in the video frames.
- get_covering_duration(step): Calculates the covering duration based on a step size.
- detection(): Performs motion detection within the arena.
- initialize_post_processing(): Initializes post-processing steps.
- update_shape(show_seg): Updates the shape based on segmentation and visualization flags.
- get_descriptors_from_binary(): Extracts descriptors from binary data.
- detect_growth_transitions(): Detects growth transitions in the data.
- networks_analysis(show_seg): Detects networks within the arena, optionally showing segmentation.
- study_cytoscillations(show_seg): Studies cytoplasmic oscillations within the arena, optionally showing segmentation.
- fractal_descriptions(): Generates fractal descriptions of the analyzed data.
- get_descriptors_summary(): Summarizes the descriptors obtained from the analysis.
- save_results(): Saves the results of the analysis.
Source code in src/cellects/core/motion_analysis.py
assess_motion_detection()
Assess if a motion can be detected using the current parameters.
Validate the specimen(s) detected in the first frame and evaluate roughly how growth occurs during the video.
Source code in src/cellects/core/motion_analysis.py
change_results_of_one_arena(save_video=True)
Manages the saving and updating of CSV files based on data extracted from one analyzed arena. It handles the CSV files "one_row_per_arena.csv" and "one_row_per_frame.csv"; each is updated or created depending on whether existing data are present. The method ensures that each CSV file contains the relevant information for the given arena, frame, and oscillator cluster data.
Source code in src/cellects/core/motion_analysis.py
check_converted_video_type()
Check if the converted video type is uint8 and normalize it if necessary.
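The normalization step can be pictured with a minimal sketch (a hypothetical helper, not the actual Cellects implementation): if the array is already uint8 it passes through unchanged; otherwise its values are rescaled to the full 8-bit range.

```python
import numpy as np

def to_uint8(video: np.ndarray) -> np.ndarray:
    """Normalize a video array to the uint8 range [0, 255].

    Hypothetical illustration of the check-and-normalize idea: uint8
    input is returned as-is, anything else is min-max rescaled.
    """
    if video.dtype == np.uint8:
        return video
    vmin, vmax = float(video.min()), float(video.max())
    if vmax == vmin:  # flat video: avoid division by zero
        return np.zeros(video.shape, dtype=np.uint8)
    scaled = (video.astype(np.float64) - vmin) / (vmax - vmin) * 255.0
    return scaled.astype(np.uint8)
```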
Source code in src/cellects/core/motion_analysis.py
detect_growth_transitions()
Detect growth transitions in a biological image processing context.
Analyzes the growth transitions of a shape within an arena, determining whether growth is isotropic and identifying any breaking points.
Notes:
This method modifies the one_descriptor_per_arena dictionary in place
to include growth transition information.
Source code in src/cellects/core/motion_analysis.py
detection(compute_all_possibilities=False)
Perform frame-by-frame or luminosity-based segmentation on video data to detect cell motion and growth.
This function processes video frames using either frame-by-frame segmentation or luminosity-based segmentation algorithms to detect cell motion and growth. It handles drift correction, adjusts parameters based on configuration settings, and applies logical operations to combine results from different segmentation methods.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| compute_all_possibilities | bool | Flag to determine if all segmentation possibilities should be computed. | False |

Returns:

| Type | Description |
|---|---|
| None | |
Notes
This function modifies the instance variables self.segmented, self.converted_video,
and potentially self.luminosity_segmentation and self.gradient_segmentation.
Depending on the configuration settings, it performs various segmentation algorithms and updates
the instance variables accordingly.
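Combining results from different segmentation methods with logical operations can be sketched as follows (an illustration with made-up masks; the actual operator choice in `detection` depends on the configuration settings):

```python
import numpy as np

# Two hypothetical binary masks produced by different strategies.
lum_seg = np.array([[1, 1, 0], [0, 1, 0]], dtype=np.uint8)   # luminosity-based
grad_seg = np.array([[1, 0, 0], [0, 1, 1]], dtype=np.uint8)  # gradient-based

# "And" keeps only pixels both strategies agree on (conservative).
combined_and = np.logical_and(lum_seg, grad_seg).astype(np.uint8)
# "Or" keeps pixels detected by either strategy (permissive).
combined_or = np.logical_or(lum_seg, grad_seg).astype(np.uint8)
```

The conservative combination suppresses method-specific noise, while the permissive one recovers faint regions that only one strategy detects.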
Source code in src/cellects/core/motion_analysis.py
fractal_descriptions()
Method for analyzing fractal patterns in binary data.
Fractal analysis is performed on the binary representation of the data, optionally considering network dynamics if specified. The results include fractal dimensions, R-values, and box counts for the data.
If network analysis is enabled, additional fractal dimensions, R-values, and box counts are calculated for the inner network. If 'output_in_mm' is True, then values in mm can be obtained.
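The box-counting idea behind the fractal dimension can be sketched as follows (a minimal illustration with assumed box sizes, not the exact procedure used in `fractal_descriptions`): count occupied boxes at several scales, then fit log(count) against log(size).

```python
import numpy as np

def box_count_dimension(binary_img: np.ndarray) -> float:
    """Estimate the Minkowski-Bouligand (box-counting) dimension.

    Sketch only: counts boxes containing foreground at a few scales
    and returns the negative slope of log(count) vs log(box size).
    """
    sizes = [1, 2, 4, 8]
    counts = []
    for size in sizes:
        h = (binary_img.shape[0] // size) * size
        w = (binary_img.shape[1] // size) * size
        trimmed = binary_img[:h, :w]
        # Reshape into (n_boxes_y, size, n_boxes_x, size) blocks and
        # count blocks containing at least one foreground pixel.
        blocks = trimmed.reshape(h // size, size, w // size, size)
        counts.append((blocks.max(axis=(1, 3)) > 0).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope  # dimension is the negative slope
```

For a fully filled image the estimate is exactly 2; a thin filament yields a value closer to 1.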
Source code in src/cellects/core/motion_analysis.py
frame_by_frame_segmentation(t, previous_binary_image=None)
Frame-by-frame segmentation of a video.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| t | int | The time index of the frame to process. | required |
| previous_binary_image | NDArray | The binary image from the previous frame. | None |

Returns:

| Type | Description |
|---|---|
| OneImageAnalysis | An object containing the analysis of the current frame. |
Source code in src/cellects/core/motion_analysis.py
get_covering_duration(step)
Determine the number of frames necessary for a pixel to get covered.
This function identifies the time when significant growth or motion occurs in a video and calculates the number of frames needed for a pixel to be completely covered. It also handles noise and ensures that the calculated step value is reasonable.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| step | int | The initial step size for frame analysis. | required |

Raises:

| Type | Description |
|---|---|
| Exception | If an error occurs during the calculation process. |
Notes
This function may modify several instance attributes including
substantial_time, step, and start.
Source code in src/cellects/core/motion_analysis.py
get_descriptors_from_binary(release_memory=True)
Generate shape descriptors for binary images, compute them for each frame, and handle colony tracking. This method can optionally release memory to reduce usage, apply scaling factors to express descriptors in millimeters, and compute solidity separately if requested.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| release_memory | bool | Flag to determine whether memory should be released after computation. | True |

Other Parameters:

| Name | Type | Description |
|---|---|---|
| **self** | DataFrame | Stores one row of descriptors per frame: 'arena' (arena identifier, repeated for each frame) and 'time' (array of time values corresponding to frames). |
| **self** | ndarray | 3D array representing binary images over time, indexed by (t, x, y): time index, x-coordinate, and y-coordinate. |
| **self** | tuple | Image dimensions: [0] number of time frames; [1, 2] image width and height. |
| **self** | ndarray | Surface areas for each frame. |
| **self** | float | Time interval between frames, calculated only if the provided timings are non-zero. |
Notes
This method uses various helper methods and classes like ShapeDescriptors for computing shape descriptors,
PercentAndTimeTracker for progress tracking, and other image processing techniques such as connected components analysis.
Source code in src/cellects/core/motion_analysis.py
get_origin_shape()
Determine the origin shape and initialize variables based on the state of the current analysis.
This method analyzes the initial frame or frames to determine the origin shape of an object in a video, initializing necessary variables and matrices for further processing.
Attributes modified:
- start (int): Indicates the starting frame index.
- origin_idx (np.ndarray): The indices of non-zero values in the origin matrix.
- covering_intensity (np.ndarray): Matrix used for pixel fading intensity.
- substantial_growth (int): Represents a significant growth measure based on the origin.

Notes:
- The method behavior varies depending on whether 'origin_state' is set to "constant".
- If the background is lighter, the 'covering_intensity' matrix is initialized.
- Uses connected components to determine which shape is closest to the center or largest, based on 'appearance_detection_method'.
Source code in src/cellects/core/motion_analysis.py
initialize_post_processing()
Initialize post-processing for video analysis.
This function initializes various parameters and prepares the binary representation used in post-processing of video data. It logs information about the settings, handles initial origin states, sets up segmentation data, calculates surface areas, and optionally corrects errors around the initial shape or prevents fast growth near the periphery.
Notes
This function performs several initialization steps and logs relevant information, including handling different origin states, updating segmentation data, and calculating the gravity field based on binary representation.
Source code in src/cellects/core/motion_analysis.py
load_images_and_videos(videos_already_in_ram, i)
Load images and videos from disk or RAM.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| videos_already_in_ram | ndarray or None | Video data already loaded into RAM. If None, the data are loaded from disk. | required |
| i | int | Index used to select the origin and background data. | required |
Notes
This method logs information about the arena number and loads necessary data
from disk or RAM based on whether videos are already in memory. It sets various
attributes like self.origin, self.background, and self.converted_video.
Source code in src/cellects/core/motion_analysis.py
lum_slope_segmentation(converted_video)
Perform luminosity slope segmentation on the given video.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| converted_video | NDArray | The input video array for segmentation processing. | required |

Returns:

| Type | Description |
|---|---|
| NDArray | Segmented gradient array of the video. If segmentation fails, returns None. |
Notes
This function may consume significant memory and adjusts data types (float32 or float64) based on available RAM.
Source code in src/cellects/core/motion_analysis.py
lum_value_segmentation(converted_video, do_threshold_segmentation)
Perform segmentation based on luminosity values from a video.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| converted_video | NDArray | The input video data in a NumPy array format. | required |
| do_threshold_segmentation | bool | Flag to determine whether threshold segmentation should be applied. | required |

Returns:

| Type | Description |
|---|---|
| Tuple[NDArray, NDArray] | A tuple containing two NumPy arrays: the luminosity segmentation of the video, and the luminosity threshold over time. |
Notes
This function operates under the assumption that there is sufficient motion in the video data.
If no valid thresholds are found for segmentation, the function returns None for
luminosity_segmentation.
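The per-frame thresholding idea can be sketched as follows (an illustration with an assumed per-frame mean threshold; the actual threshold selection in `lum_value_segmentation` is more involved):

```python
import numpy as np

# A small synthetic (t, y, x) video of luminosity values.
rng = np.random.default_rng(0)
video = rng.integers(0, 256, (5, 8, 8)).astype(np.uint8)

# One threshold per frame (here simply the frame mean, as a stand-in),
# then binarize each frame against its own threshold.
thresholds = video.reshape(video.shape[0], -1).mean(axis=1)
segmentation = (video > thresholds[:, None, None]).astype(np.uint8)
```

The result is a binary stack with the same (t, y, x) shape as the input, plus a threshold trace with one value per frame, matching the two arrays this method returns.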
Source code in src/cellects/core/motion_analysis.py
networks_analysis(show_seg=False)
Perform network detection within a given arena.
This function carries out the task of detecting networks in an arena
based on several parameters and variables. It involves checking video
type, performing network detection over time, potentially detecting
pseudopods, and smoothing segmentation. The results can be visualized or saved.
Extract and analyze graphs from a binary representation of network dynamics, producing vertex and edge tables that represent the graph structure over time.
Args:
- show_seg (bool): If True, display the segmentation visually. Default is False.

Attributes:
- vars (dict): Dictionary of variables that control the graph extraction process.
    - 'save_graph': Boolean indicating if graph extraction should be performed.
    - 'save_coord_network': Boolean indicating if the coordinate network should be saved.
- one_descriptor_per_arena (dict): Dictionary containing descriptors for each arena.
- dims (tuple): Dimension information: [0] number of time steps, [1] y-dimension size, [2] x-dimension size.
- origin (np.ndarray): Binary image representing the origin of the network.
- binary (np.ndarray): Binary representation of network dynamics over time, with shape (time_steps, y_dimension, x_dimension).
- converted_video (np.ndarray): Converted video data, with shape (y_dimension, x_dimension, time_steps).
- network_dynamics (np.ndarray): Network dynamics representation, with shape (time_steps, y_dimension, x_dimension).

Notes:
- This method performs graph extraction and saves the vertex and edge tables to CSV files.
- The CSV files are named according to the arena, time steps, and dimensions.
Source code in src/cellects/core/motion_analysis.py
save_efficiency_tests()
Provide images for assessing the analysis efficiency.
This method generates two test images used for assessing the efficiency of the analysis. It performs various operations on video frames to create these images, including copying and manipulating frames from the video, detecting contours on binary images, and drawing the arena label on the left of the frames.
Source code in src/cellects/core/motion_analysis.py
save_results(with_efficiency_tests=True, with_video=True)
Save the results of testing and video processing.
This method handles the saving of efficiency tests, video files, and CSV data related to test results. It checks for existing files before writing new data. Additionally, it cleans up temporary files if configured to do so.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| with_efficiency_tests | bool | Also save two images showing the analysis efficiency. | True |
| with_video | bool | Also save a video showing the analysis efficiency. | True |
Source code in src/cellects/core/motion_analysis.py
save_video()
Save processed video with contours and other annotations.
This method processes the binary image to extract contours, overlay them on a video, and save the resulting video file.
Notes:
- This method uses OpenCV for image processing and contour extraction.
- The processed video includes contours colored according to the
contour_color specified in the variables.
- Additional annotations such as time in minutes are added to each
frame if applicable.
Source code in src/cellects/core/motion_analysis.py
smooth_pixel_slopes(converted_video)
Apply smoothing to pixel slopes in a video by convolving with a moving average kernel.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| converted_video | NDArray | The input video array to be smoothed. | required |

Returns:

| Type | Description |
|---|---|
| NDArray | Smoothed video array with pixel slopes averaged using a moving average kernel. |

Raises:

| Type | Description |
|---|---|
| MemoryError | If there is not enough RAM available to perform the smoothing operation. |
Notes
This function applies a moving average kernel to each pixel across the frames of the input video. The smoothing operation can be repeated based on user-defined settings. The precision of the output array is controlled by a flag that determines whether to save memory at the cost of accuracy.
Examples:
>>> smoothed = smooth_pixel_slopes(converted_video)
>>> print(smoothed.shape) # Expected output will vary depending on the input video shape
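The kernel convolution can be sketched as follows (a minimal stand-alone version under assumed defaults; the real method also handles dtype/memory trade-offs and optional repetition of the smoothing pass):

```python
import numpy as np

def smooth_along_time(video: np.ndarray, window: int = 3) -> np.ndarray:
    """Moving-average smoothing of each pixel's intensity over frames.

    Sketch: convolve every pixel's time series with a uniform kernel.
    'valid' mode keeps only frames with a full window, so the output
    is shorter along the time axis by window - 1 frames.
    """
    kernel = np.ones(window) / window
    return np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="valid"),
        axis=0,
        arr=video.astype(np.float64),
    )
```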
Source code in src/cellects/core/motion_analysis.py
study_cytoscillations(show_seg=False)
Study the cytoskeletal oscillations within a video frame by frame.
This method performs an analysis of cytoskeletal oscillations in the video, identifying regions of influx and efflux based on pixel connectivity. It also handles memory allocation for the oscillations video, computes connected components, and optionally displays the segmented regions.
Args: show_seg (bool): If True, display the segmentation results.
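The influx/efflux distinction can be sketched with frame-to-frame intensity differences (an illustration with made-up data, not the method's actual connectivity-based procedure): pixels whose intensity rises are candidate influx regions, pixels whose intensity falls are candidate efflux regions.

```python
import numpy as np

# A tiny (t, y, x) video: one row of two pixels over three frames.
video = np.array([[[10, 10]], [[30, 5]], [[20, 50]]], dtype=np.int16)

diff = np.diff(video, axis=0)          # frame-to-frame intensity change
influx = (diff > 0).astype(np.uint8)   # intensity increasing
efflux = (diff < 0).astype(np.uint8)   # intensity decreasing
```

In the full analysis these candidate regions would then be grouped with connected-components labeling rather than kept pixel by pixel.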
Source code in src/cellects/core/motion_analysis.py
update_ring_width()
Update the pixel_ring_depth and create an erodila disk.
This method ensures that the pixel ring depth is odd and at least 3, then creates an erodila disk of that size.
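The enforcement of an odd depth and the disk construction can be sketched as follows (a hypothetical helper mirroring the description above, not the actual Cellects code):

```python
import numpy as np

def make_erodila_disk(pixel_ring_depth: int) -> np.ndarray:
    """Build a disk-shaped structuring element of odd diameter.

    Sketch: the ring depth is bumped to at least 3 and made odd (so the
    element has a central pixel), then a binary disk of that diameter
    is created for later erosion/dilation steps.
    """
    depth = max(pixel_ring_depth, 3)
    if depth % 2 == 0:
        depth += 1
    radius = depth // 2
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y <= radius * radius).astype(np.uint8)
```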
Source code in src/cellects/core/motion_analysis.py
update_shape(show_seg)
Update the shape of detected objects in the current frame by analyzing segmentation potentials and applying morphological operations.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| show_seg | bool | Flag indicating whether to display segmentation results. | required |
Notes
This function performs several operations to update the shape of detected objects:
- Analyzes segmentation potentials from previous frames.
- Applies morphological operations to refine the shape.
- Updates internal state variables such as binary and covering_intensity.
Source code in src/cellects/core/motion_analysis.py
NetworkDetection
NetworkDetection
Class for detecting vessels in images using Frangi and Sato filters with various parameter sets. It applies different thresholding methods, calculates quality metrics, and selects the best detection method.
Source code in src/cellects/image_analysis/network_functions.py
__init__(greyscale_image, possibly_filled_pixels=None, add_rolling_window=False, origin_to_add=None, edge_max_width=5, best_result=None)
Initialize the object with given parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| greyscale_image | NDArray[uint8] | The input greyscale image. | required |
| possibly_filled_pixels | NDArray[uint8] | Image containing possibly filled pixels. | None |
| add_rolling_window | bool | Flag to add a rolling window. | False |
| origin_to_add | NDArray[uint8] | Origin to add. | None |
| edge_max_width | int | Maximal width of network edges. | 5 |
| best_result | dict | Best result dictionary. | None |
Source code in src/cellects/image_analysis/network_functions.py
apply_frangi_variations()
Applies various Frangi filter variations with different sigma values and thresholding methods.
This method applies the Frangi vesselness filter with multiple sets of sigma values to detect vessels at different scales. It applies both Otsu thresholding and rolling window segmentation to the filtered results and calculates binary quality indices.
Returns:

| Name | Type | Description |
|---|---|---|
| results | list of dict | A list of dictionaries with the method name, binary result, quality index, filtered image, filter type, rolling window flag, and sigma values used. |
Source code in src/cellects/image_analysis/network_functions.py
apply_sato_variations()
Apply various Sato filter variations to an image and store the results.
This function applies different parameter sets for the Sato vesselness filter to an image, applies two thresholding methods (Otsu and rolling window), and stores the results. The function supports optional rolling window segmentation based on a configuration flag.
Returns:

| Type | Description |
|---|---|
| list of dict | A list of dictionaries with the results for each filter variation: method name, binary image, quality index, filtered result, filter type, rolling window flag, and sigma values. |
Source code in src/cellects/image_analysis/network_functions.py
change_greyscale(img, first_dict)
Change the image to greyscale using color space combinations.
This function converts an input image to greyscale by generating and applying a combination of color spaces specified in the dictionary. The resulting greyscale image is stored as an attribute of the instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| img | ndarray of uint8 | The input image to be converted to greyscale. | required |
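A channel-weighted combination can be sketched as follows (an illustration with an assumed weight dictionary; the actual colour-space dictionary format used by `change_greyscale` may differ):

```python
import numpy as np

# Hypothetical "colour space dictionary": weights for the B, G and R
# channels of a BGR image (here: drop blue, average green and red).
rng = np.random.default_rng(0)
bgr_image = rng.integers(0, 256, (4, 4, 3), dtype=np.uint8)
channel_weights = {"bgr": np.array([0, 1, 1])}

weights = channel_weights["bgr"]
# Weighted sum over the channel axis, normalized by the weight total.
greyscale = (bgr_image.astype(np.float64) @ weights) / weights.sum()
greyscale = greyscale.astype(np.uint8)
```

Different weight vectors emphasize different channels, which is how a colour-space combination can maximize the contrast between specimen and background.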
Source code in src/cellects/image_analysis/network_functions.py
detect_network()
Process and detect network features in the greyscale image.
This method applies a frangi or sato filter based on the best result and
performs segmentation using either rolling window or Otsu's thresholding.
The final network detection result is stored in self.incomplete_network.
Source code in src/cellects/image_analysis/network_functions.py
detect_pseudopods(lighter_background, pseudopod_min_size=50, only_one_connected_component=True)
Detect pseudopods in a binary image.
Identify and process regions that resemble pseudopods based on width, size, and connectivity criteria. This function is used to detect and label areas that are indicative of pseudopod-like structures within a binary image.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| lighter_background | bool | Flag indicating whether the background should be considered lighter. | required |
| pseudopod_min_size | int | Minimum size for pseudopods to be considered valid. | 50 |
| only_one_connected_component | bool | Flag to ensure only one connected component is kept. | True |

Returns:

| Type | Description |
|---|---|
| None | |
Notes
This function modifies internal attributes of the object, specifically setting self.pseudopods to an array indicating pseudopod regions.
Examples:
>>> self.detect_pseudopods(True, pseudopod_min_size=50)
>>> print(self.pseudopods)
array([[0, 1, ..., 0],
[0, 0, ..., 0],
...,
[0, 1, ..., 0]], dtype=uint8)
Source code in src/cellects/image_analysis/network_functions.py
get_best_network_detection_method()
Get the best network detection method based on quality metrics.
This function applies Frangi and Sato variations, combines their results, calculates quality metrics for each result, and selects the best method.
Attributes:

| Name | Type | Description |
|---|---|---|
| all_results | list of dicts | Combined results from Frangi and Sato variations. |
| quality_metrics | ndarray of float64 | Quality metrics for each detection result. |
| best_idx | int | Index of the best detection method based on quality metrics. |
| best_result | dict | The best detection result from all possible methods. |
| incomplete_network | ndarray of bool | Binary representation of the best detection result. |
Examples:
>>> possibly_filled_pixels = np.zeros((9, 9), dtype=np.uint8)
>>> possibly_filled_pixels[3:6, 3:6] = 1
>>> possibly_filled_pixels[1:6, 3] = 1
>>> possibly_filled_pixels[6:-1, 5] = 1
>>> possibly_filled_pixels[4, 1:-1] = 1
>>> greyscale_image = possibly_filled_pixels.copy()
>>> greyscale_image[greyscale_image > 0] = np.random.randint(170, 255, possibly_filled_pixels.sum())
>>> greyscale_image[greyscale_image == 0] = np.random.randint(0, 120, possibly_filled_pixels.size - possibly_filled_pixels.sum())
>>> add_rolling_window=False
>>> origin_to_add = np.zeros((9, 9), dtype=np.uint8)
>>> origin_to_add[3:6, 3:6] = 1
>>> NetDet = NetworkDetection(greyscale_image, possibly_filled_pixels, add_rolling_window, origin_to_add)
>>> NetDet.get_best_network_detection_method()
>>> print(NetDet.best_result['method'])
>>> print(NetDet.best_result['binary'])
>>> print(NetDet.best_result['quality'])
>>> print(NetDet.best_result['filtered'])
>>> print(NetDet.best_result['filter'])
>>> print(NetDet.best_result['rolling_window'])
>>> print(NetDet.best_result['sigmas'])
Source code in src/cellects/image_analysis/network_functions.py
PickleRick
A class to handle safe file reading and writing operations using pickle.
This class ensures that files are not being accessed concurrently by creating a lock file (PickleRickX.pkl) to signal that the file is open. It includes methods to check for the lock file, write data safely, and read data safely.
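The lock-file pattern can be sketched as follows (a simplified stand-alone function with illustrative file names, not PickleRick's actual retry-based implementation): a sentinel file signals that the target is in use, so a concurrent writer backs off instead of corrupting the data.

```python
import os
import pickle
import tempfile

def safe_write(data, file_name: str, lock_name: str) -> bool:
    """Write pickled data only if no lock file is present.

    Returns False if the lock is held (caller should retry later);
    otherwise takes the lock, writes, and always releases the lock.
    """
    if os.path.exists(lock_name):
        return False
    open(lock_name, "w").close()           # take the lock
    try:
        with open(file_name, "wb") as f:
            pickle.dump(data, f)
        return True
    finally:
        os.remove(lock_name)               # always release the lock
```

PickleRick wraps this idea with retry loops and delays so that a temporarily locked file is eventually read or written rather than abandoned.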
Source code in src/cellects/utils/load_display_save.py
__init__(pickle_rick_number='')
Initialize a new instance of the class.
This constructor sets up initial attributes for tracking Rick's state, including a boolean flag for waiting for Pickle Rick, a counter, the provided pickle Rick number, and the time when the first check was performed.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| pickle_rick_number | str | The number associated with Pickle Rick. | '' |
Source code in src/cellects/utils/load_display_save.py
read_file(file_name)
Reads the contents of a file using pickle and returns it.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| file_name | str | The name of the file to be read. | required |

Returns:

| Type | Description |
|---|---|
| Union[Any, None] | The content of the file if successfully read; otherwise, None. |

Raises:

| Type | Description |
|---|---|
| Exception | If there is an error reading the file. |
Notes
This function attempts to read a file multiple times if it fails.
If the number of attempts exceeds 1000, it logs an error and returns None.
Source code in src/cellects/utils/load_display_save.py
write_file(file_content, file_name)
Write content to a file with error handling and retry logic.
This function attempts to write the provided content into a file. If it fails, it retries up to 100 times with some additional checks and delays. Note that the content is serialized using pickle.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| file_content | Any | The data to be written into the file. This will be pickled. | required |
| file_name | str | The name of the file where data should be written. | required |

Returns:

| Type | Description |
|---|---|
| None | |

Raises:

| Type | Description |
|---|---|
| Exception | If the file cannot be written after 100 attempts, an error is logged. |
Notes
This function uses pickle to serialize the data, which can introduce security risks
if untrusted content is being written. It performs some internal state checks,
such as verifying that the target file isn't open and whether it should delete
some internal state, represented by _delete_pickle_rick.
The function implements a retry mechanism whose backoff strategy can include random delays.
Source code in src/cellects/utils/load_display_save.py
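A minimal sketch of the write-with-retry logic, assuming a short randomized delay between attempts. `write_file_sketch` is hypothetical and returns a success flag for illustration, whereas the documented function returns None.

```python
import logging
import pickle
import random
import time

def write_file_sketch(file_content, file_name, max_attempts=100):
    """Pickle file_content to file_name, retrying with random delays (sketch)."""
    for attempt in range(max_attempts):
        try:
            with open(file_name, 'wb') as handle:
                pickle.dump(file_content, handle)
            return True
        except Exception:
            time.sleep(random.uniform(0.001, 0.01))  # randomized backoff
    logging.error("Could not write %s after %s attempts", file_name, max_attempts)
    return False

ok = write_file_sketch([1, 2, 3], 'demo_write.pkl')
with open('demo_write.pkl', 'rb') as handle:
    restored = pickle.load(handle)
```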
ad_pad(arr)
Pad the input array with a single layer of zeros around its edges.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| arr | ndarray | The input array to pad. Must be at least 2-dimensional. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| padded_arr | ndarray | The output array with a single 0-padded layer around its edges. |
Notes
This function uses NumPy's pad with mode='constant' to add a single layer
of zeros around the edges of the input array.
Examples:
>>> arr = np.array([[1, 2], [3, 4]])
>>> ad_pad(arr)
array([[0, 0, 0, 0],
[0, 1, 2, 0],
[0, 3, 4, 0],
[0, 0, 0, 0]])
Source code in src/cellects/image_analysis/network_functions.py
add_padding(array_list)
Add padding to each 2D array in a list.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| array_list | list of ndarrays | List of 2D NumPy arrays to be processed. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| out | list of ndarrays | List of 2D NumPy arrays with padding added. |
Examples:
>>> array_list = [np.array([[1, 2], [3, 4]])]
>>> padded_list = add_padding(array_list)
>>> print(padded_list[0])
[[0 0 0 0]
 [0 1 2 0]
 [0 3 4 0]
 [0 0 0 0]]
Source code in src/cellects/image_analysis/network_functions.py
binary_quality_index(binary_img)
Calculate the binary quality index for a binary image.
The binary quality index is computed based on the perimeter of the largest connected component in the binary image, normalized by the total number of pixels.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| binary_img | ndarray of uint8 | Input binary image array. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| out | float | The binary quality index value. |
Source code in src/cellects/image_analysis/image_segmentation.py
bracket_to_uint8_image_contrast(image)
Convert an image with bracket contrast values to uint8 type.
This function normalizes an input image by scaling the minimum and maximum values of the image to the range [0, 255] and then converts it to uint8 data type.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| image | ndarray | Input image as a numpy array with floating-point values. | required |

Returns:

| Type | Description |
|---|---|
| ndarray of uint8 | Output image converted to uint8 type after normalization. |
Examples:
>>> image = np.random.randint(0, 255, (10, 10), dtype=np.uint8)
>>> res = bracket_to_uint8_image_contrast(image)
>>> print(res)
>>> image = np.zeros((10, 10), dtype=np.uint8)
>>> res = bracket_to_uint8_image_contrast(image)
>>> print(res)
Source code in src/cellects/utils/formulas.py
close_holes(binary_img)
Close holes in a binary image using connected components analysis.
This function identifies and closes small holes within the foreground objects of a binary image. It uses connected component analysis to find and fill holes that are smaller than the main object.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| binary_img | ndarray of uint8 | Binary input image where holes need to be closed. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| out | ndarray of uint8 | Binary image with closed holes. |
Examples:
>>> binary_img = np.zeros((10, 10), dtype=np.uint8)
>>> binary_img[2:8, 2:8] = 1
>>> binary_img[4:6, 4:6] = 0 # Creating a hole
>>> result = close_holes(binary_img)
>>> print(result)
[[0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0]
[0 0 1 1 1 1 1 1 0 0]
[0 0 1 1 1 1 1 1 0 0]
[0 0 1 1 1 1 1 1 0 0]
[0 0 1 1 1 1 1 1 0 0]
[0 0 1 1 1 1 1 1 0 0]
[0 0 1 1 1 1 1 1 0 0]
[0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0]]
Source code in src/cellects/image_analysis/morphological_operations.py
create_empty_videos(image_list, cr, lose_accuracy_to_save_memory, already_greyscale, csc_dict)
Create empty video arrays based on input parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| image_list | list | List of images. | required |
| cr | list | Crop region defined by [x_start, y_start, x_end, y_end]. | required |
| lose_accuracy_to_save_memory | bool | Boolean flag to determine if memory should be saved by using the uint8 data type. | required |
| already_greyscale | bool | Boolean flag indicating if the images are already in greyscale format. | required |
| csc_dict | dict | Dictionary containing color space conversion settings, including the 'logical' key. | required |

Returns:

| Type | Description |
|---|---|
| tuple | A tuple containing three elements. |
Notes
Performance considerations:
- If lose_accuracy_to_save_memory is True, the function uses np.uint8 for memory efficiency.
- If already_greyscale is False, additional arrays are created to store RGB data.
Source code in src/cellects/utils/load_display_save.py
detect_network_dynamics(converted_video, binary, arena_label=1, starting_time=0, visu=None, origin=None, smooth_segmentation_over_time=True, edge_max_width=5, detect_pseudopods=True, save_coord_network=True, show_seg=False)
Detects and tracks dynamic features (e.g., pseudopods) in a biological network over time from video data.
Analyzes spatiotemporal dynamics of a network structure using binary masks and grayscale video data. Processes each frame to detect network components, optionally identifies pseudopods, applies temporal smoothing, and generates visualization overlays. Saves coordinate data for detected networks if enabled.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| converted_video | NDArray | Input video data array with shape (time, y, x) representing grayscale intensities. | required |
| binary | NDArray[uint8] | Binary mask array with shape (time, y, x) indicating filled regions in each frame. | required |
| arena_label | int | Unique identifier for the current processing arena/session, used to name saved output files. | 1 |
| starting_time | int | Zero-based index of the first frame from which to begin network detection and analysis. | 0 |
| visu | NDArray | Visualization video array with shape (time, y, x, channel) holding RGB data for overlay rendering. | None |
| origin | NDArray[uint8] | Binary mask defining a central region of interest to exclude from network detection. | None |
| smooth_segmentation_over_time | bool | Whether to apply temporal smoothing using adjacent frame data. | True |
| edge_max_width | int | Maximal width of network edges. | 5 |
| detect_pseudopods | bool | Whether pseudopod regions should be detected and merged with the network. | True |
| save_coord_network | bool | Whether to save detected network/pseudopod coordinates as NumPy arrays. | True |
| show_seg | bool | Whether to display real-time visualization during processing. | False |

Returns:

| Type | Description |
|---|---|
| NDArray[uint8] | 3D array containing detected network structures with shape (time, y, x). |
Notes
- Memory-intensive operations on large arrays may require system resources.
- Temporal smoothing effectiveness depends on network dynamics consistency between frames.
- Pseudopod detection requires sufficient contrast with the background in grayscale images.
Source code in src/cellects/image_analysis/network_functions.py
display_boxes(binary_image, box_diameter, show=True)
Display grid lines on a binary image at specified box diameter intervals.
This function displays the given binary image with vertical and horizontal
grid lines drawn at regular intervals defined by box_diameter. The function
returns the total number of grid lines drawn.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| binary_image | ndarray | Binary image on which to draw the grid lines. | required |
| box_diameter | int | Diameter of each box in pixels. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| line_nb | int | Number of grid lines drawn, both vertical and horizontal. |
Examples:
>>> import numpy as np
>>> binary_image = np.random.randint(0, 2, (100, 100), dtype=np.uint8)
>>> display_boxes(binary_image, box_diameter=25)
Source code in src/cellects/utils/load_display_save.py
display_network_methods(network_detection, save_path=None)
Display segmentation results from a network detection object.
Plots the binary segmentation results for various methods stored in network_detection.all_results.
Highlights the best result based on quality metrics and allows for saving the figure to a file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| network_detection | object | An object containing segmentation results and quality metrics. | required |
| save_path | str | Path to save the figure. If None, the figure is not saved. | None |
Source code in src/cellects/utils/load_display_save.py
eudist(v1, v2)
Calculate the Euclidean distance between two points in n-dimensional space.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| v1 | iterable of float | The coordinates of the first point. | required |
| v2 | iterable of float | The coordinates of the second point. | required |

Returns:

| Type | Description |
|---|---|
| float | The Euclidean distance between v1 and v2. |

Raises:

| Type | Description |
|---|---|
| ValueError | If v1 and v2 do not have the same number of coordinates. |
Notes
The Euclidean distance is calculated using the standard formula: √((x2 − x1)^2 + (y2 − y1)^2 + ...).
Source code in src/cellects/utils/formulas.py
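The formula in the Notes translates directly to code. `eudist_sketch` is a hypothetical equivalent, shown for illustration only:

```python
import math

def eudist_sketch(v1, v2):
    """Euclidean distance between two n-dimensional points (sketch)."""
    if len(v1) != len(v2):
        raise ValueError("v1 and v2 must have the same number of coordinates")
    # sqrt of the sum of squared coordinate differences
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

distance = eudist_sketch([0.0, 0.0], [3.0, 4.0])  # the classic 3-4-5 triangle
```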
extract_graph_dynamics(converted_video, coord_network, arena_label, starting_time=0, origin=None, coord_pseudopods=None)
Extracts dynamic graph data from video frames based on network dynamics.
This function processes time-series binary network structures to extract evolving vertices and edges over time. It computes spatial relationships between networks and an origin point through image processing steps including contour detection, padding for alignment, skeleton extraction, and morphological analysis. Vertex and edge attributes like position, connectivity, width, intensity, and betweenness are compiled into tables saved as CSV files.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| converted_video | NDArray | 3D video data array (t, y, x) containing pixel intensities used for calculating edge intensity attributes during table generation. | required |
| coord_network | NDArray[uint8] | 3D binary network mask array (t, y, x) representing connectivity structures across time points. | required |
| arena_label | int | Unique identifier used to prefix output filenames corresponding to specific experimental arenas. | required |
| starting_time | int | Time index at which to begin extraction. | 0 |
| origin | NDArray[uint8] | Binary mask identifying the region of interest's central origin for spatial reference during network comparison. | None |

Returns:

| Type | Description |
|---|---|
| None | Saves two CSV files in the working directory: 1. `vertex_table{arena_label}_t{T}_y{Y}_x{X}.csv`, a vertex table with time, coordinates, and connectivity information; 2. `edge_table{arena_label}_t{T}_y{Y}_x{X}.csv`, an edge table containing attributes like length, width, intensity, and betweenness. |
Notes
Output CSVs use NumPy arrays converted to pandas DataFrames:
- The vertex table includes timestamps (t), coordinates (y, x), and connectivity flags.
- The edge table contains betweenness centrality calculated during skeleton processing.
Origin contours are spatially aligned through padding operations to maintain coordinate consistency across time points.
Source code in src/cellects/image_analysis/network_functions.py
extract_time(pathway='', image_list=None, raw_images=False)
Extract timestamps from a list of images.
This function extracts the DateTimeOriginal or datetime values from the EXIF data of a list of image files, and computes the total time in seconds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| pathway | str | Path to the directory containing the images. | `''` |
| image_list | list of str | List of image file names. | None |
| raw_images | bool | If True, use the exifread library; otherwise, use the exif library. | False |

Returns:

| Name | Type | Description |
|---|---|---|
| time | ndarray of int64 | Array containing the total time in seconds for each image. |
Examples:
>>> pathway = Path(__name__).resolve().parents[0] / "data" / "single_experiment"
>>> image_list = ['image1.tif', 'image2.tif']
>>> time = extract_time(pathway, image_list)
>>> print(time)
array([0, 0])
Source code in src/cellects/utils/load_display_save.py
find_common_coord(array1, array2)
Find common coordinates between two arrays.
This function compares the given 2D array1 and array2
to determine if there are any common coordinates.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| array1 | ndarray of int | A 2D numpy ndarray. | required |
| array2 | ndarray of int | Another 2D numpy ndarray. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| out | ndarray of bool | A boolean numpy ndarray where True indicates common coordinates. |
Examples:
>>> array1 = np.array([[1, 2], [3, 4]])
>>> array2 = np.array([[5, 6], [1, 2]])
>>> result = find_common_coord(array1, array2)
>>> print(result)
array([ True, False])
Source code in src/cellects/utils/formulas.py
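The row-wise comparison can be sketched in plain NumPy. `find_common_coord_sketch` is a hypothetical illustration that reproduces the documented example, not the library's implementation:

```python
import numpy as np

def find_common_coord_sketch(array1, array2):
    """True wherever a row of array1 also appears as a row of array2 (sketch)."""
    return np.array([any(np.array_equal(row, other) for other in array2)
                     for row in array1])

array1 = np.array([[1, 2], [3, 4]])
array2 = np.array([[5, 6], [1, 2]])
mask = find_common_coord_sketch(array1, array2)  # [1, 2] is shared, [3, 4] is not
```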
find_duplicates_coord(array1)
Find duplicate rows in a 2D array and return their coordinate indices.
Given a 2D NumPy array, this function identifies rows that are duplicated (i.e., appear more than once) and returns a boolean array indicating their positions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| array1 | ndarray of int | Input 2D array of shape (n_rows, n_columns) from which to find duplicate rows. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| duplicates | ndarray of bool | Boolean array of shape (n_rows,), where True marks rows that appear more than once. |
Examples:
>>> import numpy as np
>>> array1 = np.array([[1, 2], [3, 4], [1, 2], [5, 6]])
>>> find_duplicates_coord(array1)
array([ True, False, True, False])
Source code in src/cellects/utils/formulas.py
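One way to obtain the same mask is with `np.unique` and row counts. `find_duplicates_sketch` is an illustrative sketch, not the library's implementation:

```python
import numpy as np

def find_duplicates_sketch(array1):
    """True for every row that appears more than once (sketch using np.unique)."""
    # Count each distinct row, then broadcast the counts back to the original rows.
    uniq, inverse, counts = np.unique(array1, axis=0,
                                      return_inverse=True, return_counts=True)
    return counts[inverse.ravel()] > 1

rows = np.array([[1, 2], [3, 4], [1, 2], [5, 6]])
dup_mask = find_duplicates_sketch(rows)
```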
find_threshold_given_mask(greyscale, mask, min_threshold=0)
Find the optimal threshold value for a greyscale image given a mask.
This function performs a binary search to find the optimal threshold that maximizes the separation between two regions defined by the mask. The search is bounded by a minimum threshold value.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| greyscale | ndarray of uint8 | The greyscale image array. | required |
| mask | ndarray of uint8 | The binary mask array where positive values define region A and zero values define region B. | required |
| min_threshold | uint8 | The minimum threshold value for the search. | 0 |

Returns:

| Name | Type | Description |
|---|---|---|
| out | uint8 | The optimal threshold value found. |
Examples:
>>> greyscale = np.array([[255, 128, 54], [0, 64, 20]], dtype=np.uint8)
>>> mask = np.array([[1, 1, 0], [0, 0, 0]], dtype=np.uint8)
>>> find_threshold_given_mask(greyscale, mask)
54
Source code in src/cellects/image_analysis/image_segmentation.py
generate_color_space_combination(bgr_image, c_spaces, first_dict, second_dict=Dict(), background=None, background2=None, convert_to_uint8=False, all_c_spaces={})
Generate color space combinations for an input image.
This function generates a grayscale image by combining multiple color spaces from an input BGR image and provided dictionaries. Optionally, it can also generate a second grayscale image using another dictionary.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bgr_image | ndarray of uint8 | The input image in BGR color space. | required |
| c_spaces | list | List of color spaces to consider for combination. | required |
| first_dict | Dict | Dictionary containing color space and transformation details for the first grayscale image. | required |
| second_dict | Dict | Dictionary containing color space and transformation details for the second grayscale image. | Dict() |
| background | ndarray | Background image to be used. | None |
| background2 | ndarray | Second background image to be used for the second grayscale image. | None |
| convert_to_uint8 | bool | Flag indicating whether to convert the output images to uint8. | False |

Returns:

| Name | Type | Description |
|---|---|---|
| out | tuple of ndarray of uint8 | A tuple containing the first and second grayscale images. |
Examples:
>>> bgr_image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
>>> c_spaces = ['bgr', 'hsv']
>>> first_dict = Dict()
>>> first_dict['bgr'] = [0, 1, 1]
>>> second_dict = Dict()
>>> second_dict['hsv'] = [0, 0, 1]
>>> greyscale_image1, greyscale_image2 = generate_color_space_combination(bgr_image, c_spaces, first_dict, second_dict)
>>> print(greyscale_image1.shape)
(100, 100)
Source code in src/cellects/image_analysis/image_segmentation.py
get_all_line_coordinates(start_point, end_points)
Get all line coordinates between start point and end points.
This function computes the coordinates of lines connecting a start point to multiple end points, converting input arrays to float if necessary before processing.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| start_point | NDArray[float] | Starting coordinate point for the lines. Can be of any numeric type; converted to float if needed. | required |
| end_points | NDArray[float] | Array of end coordinate points for the lines. Can be of any numeric type; converted to float if needed. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| out | List[NDArray[int]] | A list of numpy arrays containing the coordinates of each line as integer values. |
Examples:
>>> start_point = np.array([0, 0])
>>> end_points = np.array([[1, 2], [3, 4]])
>>> get_all_line_coordinates(start_point, end_points)
[array([[0, 0],
[0, 1],
[1, 2]], dtype=uint64), array([[0, 0],
[1, 1],
[1, 2],
[2, 3],
[3, 4]], dtype=uint64)]
Source code in src/cellects/image_analysis/morphological_operations.py
get_branches_and_tips_coord(pad_vertices, pad_tips)
Extracts the coordinates of branches and tips from vertices and tips binary images.
This function calculates branch coordinates by subtracting tips from vertices, then finds and outputs the non-zero indices of branches and tips separately.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| pad_vertices | ndarray | Padded binary image of the vertices. | required |
| pad_tips | ndarray | Padded binary image of the tips. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| branch_v_coord | ndarray | Coordinates of branches, derived by subtracting tips from vertices. |
| tips_coord | ndarray | Coordinates of the tips. |
Source code in src/cellects/image_analysis/network_functions.py
get_contour_width_from_im_shape(im_shape)
Calculate the contour width based on image shape.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| im_shape | tuple of int | The dimensions of the image (two items). | required |

Returns:

| Type | Description |
|---|---|
| int | The calculated contour width. |
Source code in src/cellects/utils/formulas.py
get_contours(binary_image)
Find and return the contours of a binary image.
This function erodes the input binary image using a 3x3 cross-shaped structuring element and then subtracts the eroded image from the original to obtain the contours.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| binary_image | ndarray of uint8 | Input binary image from which to extract contours. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| out | ndarray of uint8 | Image containing only the contours extracted from binary_image. |
Examples:
>>> binary_image = np.zeros((10, 10), dtype=np.uint8)
>>> binary_image[2:8, 2:8] = 1
>>> result = get_contours(binary_image)
>>> print(result)
[[0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0]
[0 0 1 1 1 1 1 1 0 0]
[0 0 1 0 0 0 0 1 0 0]
[0 0 1 0 0 0 0 1 0 0]
[0 0 1 0 0 0 0 1 0 0]
[0 0 1 0 0 0 0 1 0 0]
[0 0 1 1 1 1 1 1 0 0]
[0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0]]
Source code in src/cellects/image_analysis/morphological_operations.py
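The erode-and-subtract idea can be sketched without OpenCV. `get_contours_sketch` is a hypothetical pure-NumPy illustration of the documented 3x3 cross erosion, reproducing the example above:

```python
import numpy as np

def get_contours_sketch(binary_image):
    """Contours = image minus its erosion by a 3x3 cross (pure-NumPy sketch)."""
    p = np.pad(binary_image, 1)
    # A pixel survives erosion only if it and its 4 direct neighbours are all 1.
    eroded = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
              & p[1:-1, :-2] & p[1:-1, 2:])
    return (binary_image - eroded).astype(np.uint8)

square = np.zeros((10, 10), dtype=np.uint8)
square[2:8, 2:8] = 1
contour = get_contours_sketch(square)  # one-pixel-wide outline of the square
```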
get_h5_keys(file_name)
Retrieve all keys from a given HDF5 file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| file_name | str | The path to the HDF5 file from which keys are to be retrieved. | required |

Returns:

| Type | Description |
|---|---|
| list of str | A list containing all the keys present in the specified HDF5 file. |

Raises:

| Type | Description |
|---|---|
| FileNotFoundError | If the specified HDF5 file does not exist. |
Source code in src/cellects/utils/load_display_save.py
get_inertia_axes(mo)
Calculate the inertia axes of a moment object.
This function computes the barycenters, central moments, and the lengths of the major and minor axes, as well as their orientation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| mo | dict | Dictionary containing moments, which should include keys: 'm00', 'm10', 'm01', 'm20', and 'm11'. | required |

Returns:

| Type | Description |
|---|---|
| tuple | A tuple containing: cx (float), the x-coordinate of the barycenter; cy (float), the y-coordinate of the barycenter; major_axis_len (float), the length of the major axis; minor_axis_len (float), the length of the minor axis; axes_orientation (float), the orientation of the axes in radians. |
Notes
This function uses Numba's @njit decorator for performance. The moments in the input dictionary should be computed from the same image region.
Examples:
>>> mo = {'m00': 1.0, 'm10': 2.0, 'm01': 3.0, 'm20': 4.0, 'm11': 5.0}
>>> get_inertia_axes(mo)
(2.0, 3.0, 9.165151389911677, 0.8421875803239, 0.7853981633974483)
Source code in src/cellects/utils/formulas.py
get_inner_vertices(pad_skeleton, potential_tips, cnv4, cnv8)
Get inner vertices from skeleton image.
This function identifies and returns the inner vertices of a skeletonized image. It processes potential tips to determine which pixels should be considered as vertices based on their neighbor count and connectivity.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| pad_skeleton | ndarray of uint8 | The padded skeleton image. | required |
| potential_tips | ndarray of uint8 | Potential tip points in the skeleton. | required |
| cnv4 | object | Object handling 4-connectivity neighbor comparisons. | required |
| cnv8 | object | Object handling 8-connectivity neighbor comparisons. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| out | tuple of (ndarray of uint8, ndarray of uint8) | A tuple containing the final vertices matrix and the updated potential tips. |
Examples:
>>> pad_vertices, potential_tips = get_inner_vertices(pad_skeleton, potential_tips)
>>> print(pad_vertices)
Source code in src/cellects/image_analysis/network_functions.py
get_kurtosis(mo, binary_image, cx, cy, sx, sy)
Calculate the kurtosis of a binary image.
The function calculates the fourth moment (kurtosis) of the given binary image around the specified center coordinates with an option to specify the size of the square window.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| mo | dict | Dictionary containing moments of the binary image. | required |
| binary_image | ndarray | A 2D numpy ndarray representing a binary image. | required |
| cx | int or float | The x-coordinate of the center point of the square window. | required |
| cy | int or float | The y-coordinate of the center point of the square window. | required |
| sx | int or float | The x-length (width) of the square window. | required |
| sy | int or float | The y-length (height) of the square window. | required |

Returns:

| Type | Description |
|---|---|
| float | The kurtosis value calculated from the moments. |
Examples:
>>> binary_image = np.array([[1, 0], [0, 1]], dtype=np.uint8)
>>> mo = cv2.moments(binary_image)  # moments dictionary
>>> result = get_kurtosis(mo, binary_image, cx=0.5, cy=0.5, sx=2, sy=2)
Source code in src/cellects/utils/formulas.py
get_min_or_max_euclidean_pair(coords, min_or_max='max')
Find the pair of points in a given set with the minimum or maximum Euclidean distance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| coords | Union[ndarray, Tuple] | An Nx2 numpy array, or a tuple of two arrays each containing the x and y coordinates of points. | required |
| min_or_max | str | Whether to find the 'min' or 'max' distance pair. | 'max' |

Returns:

| Type | Description |
|---|---|
| Tuple[ndarray, ndarray] | A tuple containing the coordinates of the two points that form the minimum or maximum distance pair. |

Raises:

| Type | Description |
|---|---|
| ValueError | If min_or_max is neither 'min' nor 'max'. |
Notes
- The function first computes all pairwise distances in condensed form using
pdist. - Then, it finds the index of the minimum or maximum distance.
- Finally, it maps this index to the actual point indices using a binary search method.
Examples:
>>> coords = np.array([[0, 1], [2, 3], [4, 5]])
>>> point1, point2 = get_min_or_max_euclidean_pair(coords, min_or_max="max")
>>> print(point1)
[0 1]
>>> print(point2)
[4 5]
>>> coords = (np.array([0, 2, 4, 8, 1, 5]), np.array([0, 2, 4, 8, 0, 5]))
>>> point1, point2 = get_min_or_max_euclidean_pair(coords, min_or_max="min")
>>> print(point1)
[0 0]
>>> print(point2)
[1 0]
Source code in src/cellects/image_analysis/morphological_operations.py
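The Notes describe a condensed-distance (pdist) approach; the same result can be sketched with a brute-force distance matrix, simpler to read though O(n^2) in memory. `extreme_pair_sketch` is a hypothetical illustration:

```python
import numpy as np

def extreme_pair_sketch(coords, min_or_max="max"):
    """Pair of points with the min or max Euclidean distance (brute-force sketch)."""
    # Full pairwise distance matrix via broadcasting.
    diffs = coords[:, None, :] - coords[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    if min_or_max == "min":
        np.fill_diagonal(dists, np.inf)  # ignore zero self-distances
        i, j = np.unravel_index(np.argmin(dists), dists.shape)
    else:
        i, j = np.unravel_index(np.argmax(dists), dists.shape)
    return coords[i], coords[j]

coords = np.array([[0, 1], [2, 3], [4, 5]])
p1, p2 = extreme_pair_sketch(coords, "max")
```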
935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 | |
get_mpl_colormap(cmap_name)
Returns a linear color range array for the given matplotlib colormap.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| cmap_name | str | The name of the colormap to get. | required |

Returns:

| Type | Description |
|---|---|
| ndarray | A 256x1x3 array of bytes representing the linear color range. |
Source code in src/cellects/utils/load_display_save.py
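A hedged sketch of building such a 256x1x3 byte array from a matplotlib colormap, assuming matplotlib >= 3.5 (for the `matplotlib.colormaps` registry). `get_mpl_colormap_sketch` is illustrative; the real function may order channels differently (e.g., BGR for use with OpenCV's applyColorMap):

```python
import numpy as np
import matplotlib

def get_mpl_colormap_sketch(cmap_name):
    """Sample a matplotlib colormap into a 256x1x3 uint8 lookup table (sketch)."""
    cmap = matplotlib.colormaps[cmap_name]
    rgba = cmap(np.linspace(0.0, 1.0, 256))      # 256 x 4 floats in [0, 1]
    rgb = (rgba[:, :3] * 255).astype(np.uint8)   # drop alpha, scale to bytes
    return rgb.reshape(256, 1, 3)

lut = get_mpl_colormap_sketch("viridis")
```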
get_neighbor_comparisons(pad_skeleton)
Get neighbor comparisons for a padded skeleton.
This function creates two CompareNeighborsWithValue objects with different
neighborhood sizes (4 and 8) and checks if the neighbors are equal to 1. It
returns both comparison objects.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| pad_skeleton | ndarray of uint8 | The input padded skeleton array. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| out | tuple of (CompareNeighborsWithValue, CompareNeighborsWithValue) | Two comparison objects, for 4- and 8-connectivity respectively. |
Source code in src/cellects/image_analysis/network_functions.py
get_newly_explored_area(binary_vid)
Get newly explored area in a binary video.
Calculate, for each frame of a binary video, the number of pixels that have become active (==1) since the previous frame.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| binary_vid | ndarray | The binary video as a 3D array (time, y, x). | required |

Returns:

| Type | Description |
|---|---|
| ndarray | An array containing the number of newly active pixels for each frame. |
Notes
This function uses Numba's @njit decorator for performance.
Examples:
>>> binary_vid=np.zeros((4, 5, 5), dtype=np.uint8)
>>> binary_vid[:2, 3, 3] = 1
>>> binary_vid[1, 4, 3] = 1
>>> binary_vid[2, 3, 4] = 1
>>> binary_vid[3, 2, 3] = 1
>>> get_newly_explored_area(binary_vid)
array([0, 1, 1, 1])
>>> binary_vid=np.zeros((5, 5), dtype=np.uint8)[None, :, :]
>>> get_newly_explored_area(binary_vid)
array([0])
Source code in src/cellects/utils/formulas.py
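The frame-to-frame logic can be sketched in plain NumPy (the documented function is a Numba-compiled equivalent). `newly_explored_sketch` is a hypothetical illustration that reproduces the first documented example:

```python
import numpy as np

def newly_explored_sketch(binary_vid):
    """Per-frame count of pixels switching 0 -> 1 (sketch; frame 0 counts as 0)."""
    # A pixel is newly explored if it is on now and was off in the previous frame.
    gained = binary_vid[1:].astype(bool) & ~binary_vid[:-1].astype(bool)
    return np.concatenate(([0], gained.sum(axis=(1, 2))))

vid = np.zeros((4, 5, 5), dtype=np.uint8)
vid[:2, 3, 3] = 1
vid[1, 4, 3] = 1
vid[2, 3, 4] = 1
vid[3, 2, 3] = 1
counts = newly_explored_sketch(vid)
```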
get_power_dists(binary_image, cx, cy, n)
Calculate the power distributions based on the given center coordinates and exponent.
This function computes the nth powers of x and y distances from
a given center point (cx, cy) for each pixel in the binary image.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| binary_image | ndarray | A 2D array (binary image) over which the power distributions are calculated. | required |
| cx | float | The x-coordinate of the center point. | required |
| cy | float | The y-coordinate of the center point. | required |
| n | int | The exponent for the power distribution calculation. | required |

Returns:

| Type | Description |
|---|---|
| tuple[ndarray, ndarray] | A tuple of two arrays: the x-distances and the y-distances from the center, each raised to the nth power. |
Notes
This function uses Numba's @njit decorator for performance optimization.
Ensure that binary_image is a NumPy ndarray to avoid type issues.
Examples:
>>> binary_image = np.zeros((10, 10))
>>> xn, yn = get_power_dists(binary_image, 5.0, 5.0, 2)
>>> print(xn.shape, yn.shape)
(10,) (10,)
Source code in src/cellects/utils/formulas.py
get_skeleton_and_widths(pad_network, pad_origin=None, pad_origin_centroid=None)
Get skeleton and widths from a network.
This function computes the morphological skeleton of a network and calculates the distances to the closest zero pixel for each non-zero pixel using medial_axis. If pad_origin is provided, it adds a central contour. Finally, the function removes small loops and keeps only one connected component.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| pad_network | ndarray of uint8 | The binary padded network image. | required |
| pad_origin | ndarray of uint8 | An array indicating the origin, used to add a central contour. | None |
| pad_origin_centroid | ndarray | The centroid of the pad origin. | None |

Returns:

| Name | Type | Description |
|---|---|---|
| out | tuple of (ndarray of uint8, ndarray of uint8, ndarray of uint8) | A tuple containing: pad_skeleton, the skeletonized image; pad_distances, the distances to the closest zero pixel; pad_origin_contours, the contours of the central origin, or None if not used. |
Examples:
>>> pad_network = np.array([[0, 1], [1, 0]])
>>> skeleton, distances, contours = get_skeleton_and_widths(pad_network)
>>> print(skeleton)
Source code in src/cellects/image_analysis/network_functions.py
get_skewness(mo, binary_image, cx, cy, sx, sy)
Calculate skewness of the given moment.
This function computes the skewness based on the third moments and the central moments of a binary image.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
mo
|
dict
|
Dictionary containing moments of binary image. |
required |
binary_image
|
ndarray
|
Binary image as a 2D numpy array. |
required |
cx
|
float
|
X-coordinate of the center of mass. |
required |
cy
|
float
|
Y-coordinate of the center of mass. |
required |
sx
|
float
|
Standard deviation along the x axis. |
required |
sy
|
float
|
Standard deviation along the y axis. |
required |
Returns:
| Type | Description |
|---|---|
Tuple[float, float]
|
Tuple containing skewness values. |
Examples:
>>> result = get_skewness(mo=example_mo, binary_image=binary_img,
... cx=0.5, cy=0.5, sx=1.0, sy=1.0)
>>> print(result)
(skewness_x, skewness_y) # Example output
Source code in src/cellects/utils/formulas.py
get_skewness_kurtosis(mnx, mny, sx, sy, n)
Calculates skewness and kurtosis of a distribution.
This function computes the skewness and kurtosis from given statistical moments, standard deviations, and order of moments.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
mnx
|
float
|
The nth-order moment about the mean for x. |
required |
mny
|
float
|
The nth-order moment about the mean for y. |
required |
sx
|
float
|
The standard deviation of x. |
required |
sy
|
float
|
The standard deviation of y. |
required |
n
|
int
|
Order of the moment (3 for skewness, 4 for kurtosis). |
required |
Returns:
| Name | Type | Description |
|---|---|---|
skewness |
float
|
The computed skewness. |
kurtosis |
float
|
The computed kurtosis. |
Notes
This function uses Numba's @njit decorator for performance.
Ensure that the values of mnx, mny, sx, and sy are non-zero to avoid division by zero.
If n = 3, the function calculates skewness. If n = 4, it calculates kurtosis.
Examples:
>>> skewness, kurtosis = get_skewness_kurtosis(1.5, 2.0, 0.5, 0.75, 3)
>>> print("Skewness:", skewness)
Skewness: 8.0
>>> print("Kurtosis:", kurtosis)
Kurtosis: nan
Source code in src/cellects/utils/formulas.py
get_standard_deviations(mo, binary_image, cx, cy)
Return spatial standard deviations for a given moment and binary image.
This function computes the square root of variances along x (horizontal)
and y (vertical) axes for the given binary image and moment.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
mo
|
dict
|
Dictionary containing moments of binary image. |
required |
binary_image
|
ndarray of bool or int8
|
The binary input image where the moments are computed. |
required |
cx
|
float64
|
X-coordinate of center of mass (horizontal position). |
required |
cy
|
float64
|
Y-coordinate of center of mass (vertical position). |
required |
Returns:
| Type | Description |
|---|---|
tuple[ndarray of float64, ndarray of float64]
|
Tuple containing the standard deviations along the x and y axes. |
Raises:
| Type | Description |
|---|---|
ValueError
|
If |
Notes
This function uses the get_power_dists and get_var functions to compute
the distributed variances, which are then transformed into standard deviations.
Examples:
>>> import numpy as np
>>> binary_image = np.array([[0, 1], [1, 0]], dtype=np.int8)
>>> mo = np.array([[2.0], [3.0]])
>>> cx, cy = 1.5, 1.5
>>> stdx, stdy = get_standard_deviations(mo, binary_image, cx, cy)
>>> print(stdx)
[1.1]
>>> print(stdy)
[0.8366600265...]
Source code in src/cellects/utils/formulas.py
get_terminations_and_their_connected_nodes(pad_skeleton, cnv4, cnv8)
Get terminations in a skeleton and their connected nodes.
This function identifies termination points in a padded skeleton array based on pixel connectivity, marking them and their connected nodes.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
pad_skeleton
|
ndarray of uint8
|
The padded skeleton array where terminations are to be identified. |
required |
cnv4
|
object
|
Convolution object with 4-connectivity for neighbor comparison. |
required |
cnv8
|
object
|
Convolution object with 8-connectivity for neighbor comparison. |
required |
Returns:
| Name | Type | Description |
|---|---|---|
out |
ndarray of uint8
|
Array containing marked terminations and their connected nodes. |
Examples:
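For illustration, a termination can be characterized as a skeleton pixel with exactly one 8-connected neighbor. The sketch below is a minimal NumPy-only version of that idea; the find_terminations helper is hypothetical and does not use the cnv4/cnv8 convolution objects of the real implementation:

```python
import numpy as np

def find_terminations(skeleton: np.ndarray) -> np.ndarray:
    """Mark skeleton endpoints: pixels with exactly one 8-connected neighbor."""
    padded = np.pad(skeleton.astype(np.uint8), 1)
    # Sum the 8 neighbors of every pixel of the original frame.
    neighbor_sum = sum(
        padded[1 + dy:padded.shape[0] - 1 + dy, 1 + dx:padded.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    return ((skeleton > 0) & (neighbor_sum == 1)).astype(np.uint8)

# A straight 5-pixel skeleton segment has exactly two terminations.
skel = np.zeros((7, 7), dtype=np.uint8)
skel[3, 1:6] = 1
tips = find_terminations(skel)
```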
Source code in src/cellects/image_analysis/network_functions.py
get_var(mo, binary_image, Xn, Yn)
Compute the spatial variances of a binary image.
This function computes the weighted variances along the x and y axes of a binary image from the supplied power distances and moments.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
mo
|
dict
|
Dictionary containing moments of binary image. |
required |
binary_image
|
ndarray
|
2D binary image where non-zero pixels are considered. |
required |
Xn
|
ndarray
|
Array of x-coordinate powers for each pixel in binary_image. |
required |
Yn
|
ndarray
|
Array of y-coordinate powers for each pixel in binary_image. |
required |
Returns:
| Type | Description |
|---|---|
tuple
|
A tuple of two floats |
Raises:
| Type | Description |
|---|---|
ZeroDivisionError
|
If |
Notes
Performance considerations: This function uses Numba's @njit decorator for performance.
Source code in src/cellects/utils/formulas.py
get_vertices_and_tips_from_skeleton(pad_skeleton)
Get vertices and tips from a padded skeleton.
This function identifies the vertices and tips of a skeletonized image. Tips are endpoints of the skeleton while vertices include tips and points where three or more edges meet.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
pad_skeleton
|
ndarray of uint8
|
Input skeleton image that has been padded. |
required |
Returns:
| Name | Type | Description |
|---|---|---|
out |
tuple (ndarray of uint8, ndarray of uint8)
|
Tuple containing arrays of vertex points and tip points. |
Source code in src/cellects/image_analysis/network_functions.py
image_borders(dimensions, shape='rectangular')
Create an image with borders, either rectangular or circular.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
dimensions
|
tuple
|
The dimensions of the image (height, width). |
required |
shape
|
str
|
The shape of the borders. Options are "rectangular" or "circular". Defaults to "rectangular". |
'rectangular'
|
Returns:
| Name | Type | Description |
|---|---|---|
out |
ndarray of uint8
|
The image with borders. If the shape is "circular", an ellipse border; if "rectangular", a rectangular border. |
Examples:
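As a rough illustration of the rectangular case, the sketch below marks the outermost rows and columns of an image. The rectangular_border_mask helper is hypothetical, and the pixel convention (which value marks the border) is an assumption; the actual behavior is defined in morphological_operations.py:

```python
import numpy as np

def rectangular_border_mask(dimensions):
    """Hypothetical sketch: set the outermost rows and columns to 1."""
    mask = np.zeros(dimensions, dtype=np.uint8)
    mask[0, :] = mask[-1, :] = 1
    mask[:, 0] = mask[:, -1] = 1
    return mask

mask = rectangular_border_mask((5, 5))  # 16 border pixels, 9 interior zeros
```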
Source code in src/cellects/image_analysis/morphological_operations.py
insensitive_glob(pattern)
Generates a glob pattern that matches both lowercase and uppercase letters.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
pattern
|
str
|
The glob pattern to be made case-insensitive. |
required |
Returns:
| Type | Description |
|---|---|
str
|
A new glob pattern that will match both lowercase and uppercase letters. |
Examples:
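The classic idiom for making a glob pattern case-insensitive is to replace each letter with a two-character class. A self-contained sketch (the real implementation lives in utilitarian.py and may differ in details):

```python
def insensitive_glob_pattern(pattern):
    """Replace each letter with a [lowerUPPER] character class."""
    def either(c):
        return f"[{c.lower()}{c.upper()}]" if c.isalpha() else c
    return "".join(either(c) for c in pattern)

print(insensitive_glob_pattern("*.jpg"))  # *.[jJ][pP][gG]
```

The resulting pattern can then be passed to glob.glob to match files regardless of extension casing.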
Source code in src/cellects/utils/utilitarian.py
is_raw_image(image_path)
Determine if the image path corresponds to a raw image.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
image_path
|
str
|
The file path of the image. |
required |
Returns:
| Type | Description |
|---|---|
bool
|
True if the image is considered raw, False otherwise. |
Examples:
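A plausible sketch of such a check compares the file extension against a list of known raw formats. Both the helper name and the extension list below are illustrative assumptions, not the actual criteria used in load_display_save.py:

```python
from pathlib import Path

# Hypothetical raw-format extension list; the real criteria may differ.
RAW_EXTENSIONS = {".cr2", ".nef", ".arw", ".dng", ".orf", ".raf"}

def is_raw_image_sketch(image_path: str) -> bool:
    """Sketch: classify an image as raw by its file extension."""
    return Path(image_path).suffix.lower() in RAW_EXTENSIONS

print(is_raw_image_sketch("photo.NEF"))  # True
print(is_raw_image_sketch("photo.png"))  # False
```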
Source code in src/cellects/utils/load_display_save.py
keep_one_connected_component(binary_image)
Keep only one connected component in a binary image.
This function filters out all but the largest connected component in a binary image, effectively isolating it from other noise or objects. The function ensures the input is in uint8 format before processing.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
binary_image
|
ndarray of uint8
|
Binary image containing one or more connected components. |
required |
Returns:
| Type | Description |
|---|---|
ndarray of uint8
|
Image with only the largest connected component retained. |
Examples:
>>> all_shapes = np.zeros((5, 5), dtype=np.uint8)
>>> all_shapes[0:2, 0:2] = 1
>>> all_shapes[3:4, 3:4] = 1
>>> res = keep_one_connected_component(all_shapes)
>>> print(res)
[[1 1 0 0 0]
[1 1 0 0 0]
[0 0 0 0 0]
[0 0 0 0 0]
[0 0 0 0 0]]
Source code in src/cellects/image_analysis/morphological_operations.py
linear_model(x, a, b)
Perform a linear transformation on input data using slope and intercept.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
x
|
array_like
|
Input data. |
required |
a
|
float
|
Slope coefficient. |
required |
b
|
float
|
Intercept. |
required |
Returns:
| Type | Description |
|---|---|
float
|
Resulting value from the linear transformation a * x + b. |
Examples:
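The transformation itself is just y = a * x + b, applied elementwise; a minimal NumPy equivalent (without the @njit decorator):

```python
import numpy as np

def linear_model(x, a, b):
    # y = a * x + b, applied elementwise on arrays.
    return a * x + b

y = linear_model(np.array([0.0, 1.0, 2.0]), 2.0, 1.0)  # [1.0, 3.0, 5.0]
```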
Notes
This function uses Numba's @njit decorator for performance.
Source code in src/cellects/utils/formulas.py
list_image_dir(path_to_images='', img_extension='', img_radical='')
List files in an image directory based on optional naming patterns (extension and/or radical).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
path_to_images
|
optional
|
The path to the directory containing images. Default is an empty string. |
''
|
img_extension
|
str
|
The file extension of the images to be listed. Default is an empty string. When left empty, the extension of the most common image file type in the folder is used. |
''
|
img_radical
|
str
|
The radical part of the filenames to be listed. Default is an empty string. |
''
|
Returns:
| Type | Description |
|---|---|
list
|
A list of image filenames that match the specified criteria, sorted in a natural order. |
Notes
This function uses the natsorted and insensitive_glob utilities to ensure
that filenames are sorted in a human-readable order.
Examples:
>>> pathway = Path(__name__).resolve().parents[0] / "data" / "single_experiment"
>>> image_list = list_image_dir(pathway)
>>> print(image_list)
Source code in src/cellects/utils/load_display_save.py
movie(video, increase_contrast=True)
Processes a video to display each frame with optional contrast increase and resizing.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
video
|
ndarray
|
The input video represented as a 3D NumPy array. |
required |
increase_contrast
|
bool
|
Flag to increase the contrast of each frame (default is True). |
True
|
Other Parameters:
| Name | Type | Description |
|---|---|---|
keyboard |
int
|
Key to wait for during the display of each frame. |
increase_contrast |
bool
|
Whether to increase contrast for the displayed frames. |
Returns:
| Type | Description |
|---|---|
None
|
|
Raises:
| Type | Description |
|---|---|
ValueError
|
If |
Notes
This function uses OpenCV's imshow to display each frame. Ensure that the required
OpenCV dependencies are met.
Examples:
>>> movie(video)
Processes and displays a video with default settings.
>>> movie(video, keyboard=0)
Processes and displays a video waiting for the SPACE key between frames.
>>> movie(video, increase_contrast=False)
Processes and displays a video without increasing contrast.
Source code in src/cellects/utils/load_display_save.py
moving_average(vector, step)
Calculate the moving average of a given vector with specified step size.
Computes the moving average of the input vector using the specified step size. NaN values are treated as zeros in the calculation to allow for continuous averaging.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
vector
|
ndarray
|
Input vector for which to calculate the moving average. |
required |
step
|
int
|
Size of the window for computing the moving average. |
required |
Returns:
| Type | Description |
|---|---|
ndarray
|
Vector containing the moving averages of the input vector. |
Raises:
| Type | Description |
|---|---|
ValueError
|
If |
ValueError
|
If the input vector has no valid (non-NaN) elements. |
Notes
- The function considers NaN values as zeros during the averaging process.
- If step is greater than or equal to the length of the vector, a warning will be raised.
Examples:
>>> import numpy as np
>>> vector = np.array([1.0, 2.0, np.nan, 4.0, 5.0])
>>> step = 3
>>> result = moving_average(vector, step)
>>> print(result)
[1.5 2.33333333 3.66666667 4. nan]
Source code in src/cellects/utils/formulas.py
njit(*args, **kwargs)
numba.njit decorator that can be disabled. Useful for testing.
read_and_rotate(image_name, prev_img=None, raw_images=False, is_landscape=True, crop_coord=None)
Read and rotate an image based on specified parameters.
This function reads an image from the given file name, optionally rotates
it by 90 degrees clockwise or counterclockwise based on its dimensions and
the is_landscape flag, and applies cropping if specified. It also compares
rotated images against a previous image to choose the best rotation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
image_name
|
str
|
Name of the image file to read. |
required |
prev_img
|
ndarray
|
Previous image for comparison. Default is None. |
None
|
raw_images
|
bool
|
Flag to read raw images. Default is False. |
False
|
is_landscape
|
bool
|
Flag to determine if the image should be considered in landscape mode.
Default is True. |
True
|
crop_coord
|
ndarray
|
Coordinates for cropping the image. Default is None. |
None
|
Returns:
| Type | Description |
|---|---|
ndarray
|
Rotated and optionally cropped image. |
Raises:
| Type | Description |
|---|---|
FileNotFoundError
|
If the specified image file does not exist. |
Examples:
>>> pathway = Path(__name__).resolve().parents[0] / "data" / "single_experiment"
>>> image_name = 'image1.tif'
>>> image = read_and_rotate(pathway / image_name)
>>> print(image.shape)
(245, 300, 3)
Source code in src/cellects/utils/load_display_save.py
read_h5(file_name, key='data')
Read data array from an HDF5 file.
This function reads a specific dataset from an HDF5 file using the provided key.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
file_name
|
str
|
The path to the HDF5 file. |
required |
key
|
str
|
The dataset name within the HDF5 file. |
'data'
|
Returns:
| Type | Description |
|---|---|
ndarray
|
The data array from the specified dataset in the HDF5 file. |
Source code in src/cellects/utils/load_display_save.py
read_one_arena(arena_label, already_greyscale, csc_dict, videos_already_in_ram=None, true_frame_width=None, vid_name=None, background=None, background2=None)
Read a single arena's video data, potentially converting it from color to greyscale.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
arena_label
|
int
|
The label of the arena. |
required |
already_greyscale
|
bool
|
Whether the video is already in greyscale format. |
required |
csc_dict
|
dict
|
Dictionary containing color space conversion settings. |
required |
videos_already_in_ram
|
ndarray
|
Pre-loaded video frames in memory. Default is None. |
None
|
true_frame_width
|
int
|
The true width of the video frames. Default is None. |
None
|
vid_name
|
str
|
Name of the video file. Default is None. |
None
|
background
|
ndarray
|
Background image for subtractions. Default is None. |
None
|
background2
|
ndarray
|
Second background image for subtractions. Default is None. |
None
|
Returns:
| Type | Description |
|---|---|
tuple
|
A tuple containing: - visu: np.ndarray or None, the visual frame. - converted_video: np.ndarray or None, the video data converted as needed. - converted_video2: np.ndarray or None, additional video data if necessary. |
Raises:
| Type | Description |
|---|---|
FileNotFoundError
|
If the specified video file does not exist. |
ValueError
|
If the video data shape is invalid. |
Notes
This function assumes that video2numpy is a helper function available in the scope.
For optimal performance, ensure all video data fits in RAM.
Source code in src/cellects/utils/load_display_save.py
read_rotate_crop_and_reduce_image(image_name, prev_img=None, crop_coord=None, cr=None, raw_images=False, is_landscape=True, reduce_image_dim=False)
Reads, rotates, crops (if specified), and reduces image dimensionality if required.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
image_name
|
str
|
Name of the image file to read. |
required |
prev_img
|
NDArray
|
Previous image array used for rotation reference or state tracking. |
None
|
crop_coord
|
list
|
List of four integers [x_start, x_end, y_start, y_end] specifying cropping region. If None, no initial crop is applied. |
None
|
cr
|
list
|
List of four integers [x_start, x_end, y_start, y_end] for final cropping after rotation. |
None
|
raw_images
|
bool
|
Flag indicating whether to process raw image data (True) or processed image (False). |
False
|
is_landscape
|
bool
|
Boolean determining if the image is landscape-oriented and requires specific rotation handling. |
True
|
reduce_image_dim
|
bool
|
Whether to reduce the cropped image to a single channel (e.g., grayscale from RGB). |
False
|
Returns:
| Name | Type | Description |
|---|---|---|
img |
NDArray
|
Processed image after rotation, cropping, and optional dimensionality reduction. |
prev_img |
NDArray
|
Copy of the image immediately after rotation but before any cropping operations. |
Examples:
>>> import numpy as np
>>> img = np.random.rand(200, 300, 3)
>>> new_img, prev = read_rotate_crop_and_reduce_image("example.jpg", img, [50, 150, 75, 225], [20, 180, 40, 250], False, True, True)
>>> new_img.shape == (160, 210)
True
>>> prev.shape == (200, 300, 3)
True
Source code in src/cellects/utils/load_display_save.py
read_tif_stack(vid_name, expected_channels=1)
Read video array from a tif file.
This function reads a specific dataset from a tif file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
vid_name
|
str
|
The path to the tif stack file. |
required |
expected_channels
|
int
|
The number of channels. |
1
|
Returns:
| Type | Description |
|---|---|
ndarray
|
The data array from the tif file. |
Source code in src/cellects/utils/load_display_save.py
readim(image_path, raw_image=False)
Read an image from a file and optionally process it.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
image_path
|
str
|
Path to the image file. |
required |
raw_image
|
bool
|
If True, logs an error message indicating that the raw image format cannot be processed. Default is False. |
False
|
Returns:
| Type | Description |
|---|---|
ndarray
|
The decoded image represented as a NumPy array of shape (height, width, channels). |
Raises:
| Type | Description |
|---|---|
RuntimeError
|
If |
Notes
Although raw_image defaults to False, the function does not currently perform any raw-image processing even when it is set to True.
Examples:
Source code in src/cellects/utils/load_display_save.py
remove_coordinates(arr1, arr2)
Remove coordinates from arr1 that are present in arr2.
Given two arrays of coordinates, remove rows from the first array that match any row in the second array.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
arr1
|
ndarray of shape (n, 2)
|
Array containing coordinates to filter. |
required |
arr2
|
ndarray of shape (m, 2)
|
Array containing coordinates to match for removal. |
required |
Returns:
| Type | Description |
|---|---|
ndarray of shape (k, 2)
|
Array with coordinates from arr1 that are not present in arr2. |
Examples:
>>> arr1 = np.array([[1, 2], [3, 4]])
>>> arr2 = np.array([[3, 4]])
>>> remove_coordinates(arr1, arr2)
array([[1, 2]])
>>> arr1 = np.array([[1, 2], [3, 4]])
>>> arr2 = np.array([[3, 2], [1, 4]])
>>> remove_coordinates(arr1, arr2)
array([[1, 2],
[3, 4]])
>>> arr1 = np.array([[1, 2], [3, 4]])
>>> arr2 = np.array([[3, 2], [1, 2]])
>>> remove_coordinates(arr1, arr2)
array([[3, 4]])
>>> arr1 = np.arange(200).reshape(100, 2)
>>> arr2 = np.array([[196, 197], [198, 199]])
>>> new_arr1 = remove_coordinates(arr1, arr2)
>>> new_arr1.shape
(98, 2)
Source code in src/cellects/utils/utilitarian.py
remove_h5_key(file_name, key='data')
Remove a specified key from an HDF5 file.
This function opens an HDF5 file in append mode and deletes the specified key if it exists. It handles exceptions related to file not found and other runtime errors.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
file_name
|
str
|
The path to the HDF5 file from which the key should be removed. |
required |
key
|
str
|
The name of the dataset or group to delete from the HDF5 file. Default is "data". |
'data'
|
Returns:
| Type | Description |
|---|---|
None
|
|
Raises:
| Type | Description |
|---|---|
FileNotFoundError
|
If the specified file does not exist. |
RuntimeError
|
If any other error occurs during file operations. |
Notes
This function modifies the HDF5 file in place. Ensure you have a backup if necessary.
Source code in src/cellects/utils/load_display_save.py
remove_padding(array_list)
Remove padding from a list of 2D arrays.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
array_list
|
list of ndarrays
|
List of 2D NumPy arrays to be processed. |
required |
Returns:
| Name | Type | Description |
|---|---|---|
out |
list of ndarrays
|
List of 2D NumPy arrays with the padding removed. |
Examples:
>>> arr1 = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
>>> arr2 = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
>>> remove_padding([arr1, arr2])
[array([[1]]), array([[0]])]
Source code in src/cellects/image_analysis/network_functions.py
remove_small_loops(pad_skeleton, pad_distances=None)
Remove small loops from a skeletonized image.
This function identifies and removes small loops in a skeletonized image, returning the modified skeleton. If distance information is provided, it updates that as well.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
pad_skeleton
|
ndarray of uint8
|
The skeletonized image with potential small loops. |
required |
pad_distances
|
ndarray of float64
|
The distance map corresponding to the skeleton image. Default is |
None
|
Returns:
| Name | Type | Description |
|---|---|---|
out |
ndarray of uint8 or tuple(ndarray of uint8, ndarray of float64)
|
If pad_distances is None, the modified skeleton only; otherwise a tuple of the modified skeleton and the updated distance map. |
Source code in src/cellects/image_analysis/network_functions.py
rolling_window_segmentation(greyscale_image, possibly_filled_pixels, patch_size=(10, 10))
Perform rolling window segmentation on a greyscale image, using potentially filled pixels and a specified patch size.
The function divides the input greyscale image into overlapping patches defined by patch_size,
and applies Otsu's thresholding method to each patch. The thresholds can be optionally
refined using a minimization algorithm.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
greyscale_image
|
ndarray of uint8
|
The input greyscale image to segment. |
required |
possibly_filled_pixels
|
ndarray of uint8
|
An array indicating which pixels are possibly filled. |
required |
patch_size
|
tuple
|
The dimensions of the patches to segment. Default is (10, 10). Must be greater than (1, 1). |
(10, 10)
|
Returns:
| Name | Type | Description |
|---|---|---|
output |
ndarray of uint8
|
The segmented binary image where the network is marked as True. |
Examples:
>>> greyscale_image = np.array([[1, 2, 1, 1], [1, 3, 4, 1], [2, 4, 3, 1], [2, 1, 2, 1]])
>>> possibly_filled_pixels = greyscale_image > 1
>>> patch_size = (2, 2)
>>> result = rolling_window_segmentation(greyscale_image, possibly_filled_pixels, patch_size)
>>> print(result)
[[0 1 0 0]
[0 1 1 0]
[0 1 1 0]
[0 0 1 0]]
Source code in src/cellects/image_analysis/image_segmentation.py
save_fig(img, full_path, cmap=None)
Save an image figure to a file with specified options.
This function creates a matplotlib figure from the given image, optionally applies a colormap, displays it briefly, saves the figure to disk at high resolution, and closes the figure.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
img
|
array_like(M, N, 3)
|
Input image to be saved as a figure. Expected to be in RGB format. |
required |
full_path
|
str
|
The complete file path where the figure will be saved. Must include extension (e.g., '.png', '.jpg'). |
required |
cmap
|
str or None
|
Colormap to be applied if the image should be displayed with a specific
color map. If |
None
|
Returns:
| Type | Description |
|---|---|
None
|
This function does not return any value. It saves the figure to disk at the specified location. |
Raises:
| Type | Description |
|---|---|
FileNotFoundError
|
If the directory in full_path does not exist. |
Examples:
>>> img = np.random.rand(100, 100, 3) * 255
>>> save_fig(img, 'test.png')
Creates and saves a figure from the random image to 'test.png'.
>>> save_fig(img, 'colored_test.png', cmap='viridis')
Creates and saves a figure from the random image with 'viridis' colormap
to 'colored_test.png'.
Source code in src/cellects/utils/load_display_save.py
scale_coordinates(coord, scale, dims)
Scale coordinates based on given scale factors and dimensions.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
coord
|
ndarray
|
A 2x2 array of coordinates to be scaled. |
required |
scale
|
tuple of float
|
Scaling factors for the x and y coordinates, respectively. |
required |
dims
|
tuple of int
|
Maximum dimensions (height, width) for the scaled coordinates. |
required |
Returns:
| Type | Description |
|---|---|
ndarray
|
Scaled and rounded coordinates. |
int
|
Minimum y-coordinate. |
int
|
Maximum y-coordinate. |
int
|
Minimum x-coordinate. |
int
|
Maximum x-coordinate. |
Examples:
>>> coord = np.array(((47, 38), (59, 37)))
>>> scale = (0.92, 0.87)
>>> dims = (245, 300, 3)
>>> scaled_coord, min_y, max_y, min_x, max_x = scale_coordinates(coord, scale, dims)
>>> scaled_coord
array([[43, 33],
[54, 32]])
>>> min_y, max_y
(np.int64(43), np.int64(54))
>>> min_x, max_x
(np.int64(32), np.int64(33))
Notes
This function assumes that the input coordinates are in a specific format and will fail if not. The scaling factors should be positive.
Source code in src/cellects/utils/formulas.py
show(img, interactive=True, cmap=None, show=True)
Display an image using Matplotlib with optional interactivity and colormap.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
img
|
ndarray
|
The image data to be displayed. |
required |
interactive
|
bool
|
If |
True
|
cmap
|
str or Colormap
|
The colormap to be used. If |
None
|
Returns:
| Name | Type | Description |
|---|---|---|
fig |
Figure
|
The Matplotlib figure object containing the displayed image. |
ax |
AxesSubplot
|
The axes on which the image is plotted. |
Raises:
| Type | Description |
|---|---|
ValueError
|
If |
Notes
If interactive mode is enabled, the user can manipulate the figure window interactively.
Examples:
>>> img = np.random.rand(100, 50)
>>> fig, ax = show(img)
>>> print(fig)
<Figure size ... with ... Axes>
Source code in src/cellects/utils/load_display_save.py
smallest_memory_array(array_object, array_type='uint')
Convert input data to the smallest possible NumPy array type that can hold it.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
array_object
|
ndarray or list of lists
|
The input data to be converted. |
required |
array_type
|
str
|
The type of NumPy data type to use ('uint'). |
'uint'
|
Returns:
| Type | Description |
|---|---|
ndarray
|
A NumPy array of the smallest data type that can hold all values in |
Examples:
>>> import numpy as np
>>> array = [[1, 2], [3, 4]]
>>> smallest_memory_array(array)
array([[1, 2],
[3, 4]], dtype=uint8)
>>> array = [[1000, 2000], [3000, 4000]]
>>> smallest_memory_array(array)
array([[1000, 2000],
[3000, 4000]], dtype=uint16)
>>> array = [[2**31, 2**32], [2**33, 2**34]]
>>> smallest_memory_array(array)
array([[ 2147483648, 4294967296],
[ 8589934592, 17179869184]], dtype=uint64)
Source code in src/cellects/utils/utilitarian.py
split_dict(c_space_dict)
Split a dictionary into two dictionaries based on specific criteria and return their keys.
Split the input dictionary c_space_dict into two dictionaries: one for items not
ending with '2' and another where the key is truncated by removing its last
character if it does end with '2'. Additionally, return the keys that have been
processed.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
c_space_dict
|
dict
|
The dictionary to be split. Expected keys are strings and values can be any type. |
required |
Returns:
| Name | Type | Description |
|---|---|---|
first_dict |
dict
|
Dictionary containing items from c_space_dict whose keys do not end with '2'. |
second_dict |
dict
|
Dictionary containing items from c_space_dict whose keys end with '2', with that trailing character removed. |
c_spaces |
list
|
List of the keys that have been processed. |
Raises:
| Type | Description |
|---|---|
None
|
|
Examples:
>>> c_space_dict = {'key1': 10, 'key2': 20, 'logical': 30}
>>> first_dict, second_dict, c_spaces = split_dict(c_space_dict)
>>> print(first_dict)
{'key1': 10}
>>> print(second_dict)
{'key': 20}
>>> print(c_spaces)
['key1', 'key']
Source code in src/cellects/utils/utilitarian.py
sum_of_abs_differences(array1, array2)
Compute the sum of absolute differences between two arrays.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
array1
|
NDArray
|
The first input array. |
required |
array2
|
NDArray
|
The second input array. |
required |
Returns:
| Type | Description |
|---|---|
float
|
Sum of absolute differences between elements of array1 and array2. |
Examples:
>>> arr1 = np.array([1.2, 2.5, -3.7])
>>> arr2 = np.array([12, 25, -37])
>>> result = sum_of_abs_differences(arr1, arr2)
>>> print(result)
66.6
Source code in src/cellects/utils/formulas.py
to_uint8(an_array)
Convert an array to unsigned 8-bit integers.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
an_array
|
ndarray
|
Input array to be converted. It can be of any numeric dtype. |
required |
Returns:
| Type | Description |
|---|---|
ndarray
|
The input array rounded to the nearest integer and then cast to unsigned 8-bit integers. |
Raises:
| Type | Description |
|---|---|
TypeError
|
If |
Notes
This function uses Numba's @njit decorator for performance optimization.
Examples:
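A minimal sketch of the round-then-cast behavior described above (assuming input values already fit in the 0-255 range; out-of-range values would wrap on the cast):

```python
import numpy as np

def to_uint8_sketch(an_array):
    # Round to the nearest integer, then cast; values are assumed to
    # already fit in 0..255 (out-of-range values would wrap on the cast).
    return np.round(an_array).astype(np.uint8)

out = to_uint8_sketch(np.array([0.4, 127.6, 254.9]))  # [0, 128, 255]
```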
Source code in src/cellects/utils/formulas.py
translate_dict(old_dict)
Translate a dictionary to a typed dictionary and filter out non-string values.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
old_dict
|
dict
|
The input dictionary that may contain non-string values |
required |
Returns:
| Name | Type | Description |
|---|---|---|
numba_dict |
Dict
|
A typed dictionary containing only the items from old_dict whose values are strings. |
Examples:
Source code in src/cellects/utils/utilitarian.py
un_pad(arr)
Unpads a 2D NumPy array by removing the first and last row/column.
Reduces the size of a 2D array by removing the outermost rows and columns. Useful for trimming boundaries added during padding operations.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
arr
|
ndarray
|
Input 2D array to be unpadded. Shape (n,m) is expected. |
required |
Returns:
| Type | Description |
|---|---|
ndarray
|
Unpadded 2D array with shape (n-2, m-2). |
Examples:
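A minimal sketch of the trimming described above (illustrative, not the library code):

```python
import numpy as np

def un_pad_sketch(arr):
    # Drop the outermost row and column on every side
    return arr[1:-1, 1:-1]

padded = np.pad(np.ones((3, 3), dtype=np.uint8), 1)  # shape (5, 5)
print(un_pad_sketch(padded).shape)  # (3, 3)
```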
Source code in src/cellects/image_analysis/network_functions.py
video2numpy(vid_name, conversion_dict=None, background=None, background2=None, true_frame_width=None)
Convert a video file to a NumPy array.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
vid_name
|
str
|
The path to the video file. Can be a |
required |
conversion_dict
|
dict
|
Dictionary containing color space conversion parameters. |
None
|
background
|
NDArray
|
Background image for processing. |
None
|
background2
|
NDArray
|
Second background image for processing. |
None
|
true_frame_width
|
int
|
True width of the frame. If specified and the current width is double this value, adjusts to true_frame_width. |
None
|
Returns:
| Type | Description |
|---|---|
NDArray or tuple of NDArrays
|
If conversion_dict is None, returns the video as a NumPy array. Otherwise, returns a tuple containing the original video and converted video. |
Notes
This function uses OpenCV to read the contents of a .mp4 video file.
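The `true_frame_width` adjustment can be pictured as below; the cropping strategy shown is an assumption for illustration, and the real function may adjust the frame differently:

```python
import numpy as np

def fix_frame_width(frame, true_frame_width):
    # If the decoded frame is exactly twice the expected width,
    # crop it back to true_frame_width (assumed mechanism)
    if true_frame_width is not None and frame.shape[1] == 2 * true_frame_width:
        frame = frame[:, :true_frame_width]
    return frame

frame = np.zeros((4, 10), dtype=np.uint8)
print(fix_frame_width(frame, 5).shape)  # (4, 5)
```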
Source code in src/cellects/utils/load_display_save.py
video_writing_decision(arena_nb, im_or_vid, overwrite_unaltered_videos)
Determine whether to write videos based on existing files and user preferences.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
arena_nb
|
int
|
Number of arenas to analyze. |
required |
im_or_vid
|
int
|
Indicates whether the analysis should be performed on images or videos. |
required |
overwrite_unaltered_videos
|
bool
|
Flag indicating whether existing unaltered videos should be overwritten. |
required |
Returns:
| Type | Description |
|---|---|
bool
|
True if videos should be written, False otherwise. |
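One plausible reading of this decision, sketched as code; both the logic and the `ind_<i>.npy` file naming are assumptions for illustration:

```python
import os

def video_writing_decision_sketch(arena_nb, overwrite_unaltered_videos):
    # Always write when overwriting is allowed; otherwise write only
    # if an expected per-arena file is missing (hypothetical naming)
    if overwrite_unaltered_videos:
        return True
    return not all(os.path.isfile(f"ind_{i + 1}.npy") for i in range(arena_nb))

print(video_writing_decision_sketch(3, True))  # True
```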
Source code in src/cellects/utils/load_display_save.py
vstack_h5_array(file_name, table, key='data')
Stack tables vertically in an HDF5 file.
This function either appends the input table to an existing dataset in the specified HDF5 file or creates a new dataset if the key doesn't exist.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
file_name
|
str
|
Path to the HDF5 file. |
required |
table
|
NDArray[uint8]
|
The table to be stacked vertically with the existing data. |
required |
key
|
str
|
Key under which the dataset will be stored. Defaults to 'data'. |
'data'
|
Examples:
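The append-or-create logic can be sketched as follows; a plain dict stands in for the HDF5 file the real function writes to:

```python
import numpy as np

def vstack_sketch(store, table, key='data'):
    # Append to the existing dataset, or create it if the key is new
    if key in store:
        store[key] = np.vstack([store[key], table])
    else:
        store[key] = table

store = {}
vstack_sketch(store, np.zeros((2, 3), dtype=np.uint8))
vstack_sketch(store, np.ones((1, 3), dtype=np.uint8))
print(store['data'].shape)  # (3, 3)
```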
Source code in src/cellects/utils/load_display_save.py
write_h5(file_name, table, key='data')
Write a file using the h5 format.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
file_name
|
str
|
Name of the file to write. |
required |
table
|
NDArray
|
An array. |
required |
key
|
str
|
The identifier of the data in this h5 file. |
'data'
|
Source code in src/cellects/utils/load_display_save.py
write_video(np_array, vid_name, is_color=True, fps=40)
Write video from numpy array.
Save a numpy array as a video file. Supports .h5 format for saving raw numpy arrays and various video formats (mp4, avi, mkv) using OpenCV. For video formats, automatically selects a suitable codec and handles file extensions.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
np_array
|
ndarray of uint8
|
Input array containing video frames. |
required |
vid_name
|
str
|
Filename for the output video. Can include extension or not (defaults to .mp4). |
required |
is_color
|
bool
|
Whether the video should be written in color. Defaults to True. |
True
|
fps
|
int
|
Frame rate for the video in frames per second. Defaults to 40. |
40
|
Examples:
>>> video_array = np.random.randint(0, 255, size=(10, 100, 100, 3), dtype=np.uint8)
>>> write_video(video_array, 'output.mp4', True, 30)
Saves `video_array` as a color video 'output.mp4' with FPS 30.
>>> video_array = np.random.randint(0, 255, size=(10, 100, 100), dtype=np.uint8)
>>> write_video(video_array, 'raw_data.h5')
Saves `video_array` as a raw numpy array file without frame rate.
Source code in src/cellects/utils/load_display_save.py
write_video_from_images(path_to_images='', vid_name='timelapse.mp4', fps=20, img_extension='', img_radical='', crop_coord=None)
Write a video file from a sequence of images.
Extended Description
This function creates a video from a list of image files in the specified directory. To prevent the most common issues:
- The image list is sorted
- mp4 files are removed from the list
- Images that do not share the same orientation are rotated accordingly
- Images are cropped
- Color vs. greyscale is determined automatically
After processing, images are compiled into a video file.
Parameters
path_to_images : str
    The directory where the images are located.
vid_name : str, optional
    The name of the output video file. Default is 'timelapse.mp4'.
fps : int, optional
    The frames per second for the video. Default is 20.
img_extension : str, optional
    The file extension of the images. Default is an empty string.
img_radical : str, optional
    The common prefix of the image filenames. Default is an empty string.
crop_coord : list, optional
    List containing four crop coordinates: [top, bot, left, right]. Default is None, which keeps the whole image.
Examples
>>> write_video_from_images('path/to/images', vid_name='timelapse.mp4')
This creates a video file named 'timelapse.mp4' from the images in the specified directory.
Source code in src/cellects/utils/load_display_save.py
write_video_sets(img_list, sizes, vid_names, crop_coord, bounding_boxes, bunch_nb, video_nb_per_bunch, remaining, raw_images, is_landscape, use_list_of_vid, in_colors=False, reduce_image_dim=False, pathway='')
Write video sets from a list of images, applying cropping and optional rotation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
img_list
|
list
|
List of image file names. |
required |
sizes
|
NDArray
|
Array containing the dimensions of each video frame. |
required |
vid_names
|
list
|
List of video file names to be saved. |
required |
crop_coord
|
dict or tuple
|
Coordinates for cropping regions of interest in images/videos. |
required |
bounding_boxes
|
tuple
|
Bounding box coordinates to extract sub-images from the original images. |
required |
bunch_nb
|
int
|
Number of bunches to divide the videos into. |
required |
video_nb_per_bunch
|
int
|
Number of videos per bunch. |
required |
remaining
|
int
|
Number of videos remaining after the last full bunch. |
required |
raw_images
|
bool
|
Whether the images are in raw format. |
required |
is_landscape
|
bool
|
If true, rotate the images to landscape orientation before processing. |
required |
use_list_of_vid
|
bool
|
Flag indicating if the output should be a list of videos. |
required |
in_colors
|
bool
|
If true, process images with color information. Default is False. |
False
|
reduce_image_dim
|
bool
|
If true, reduce image dimensions. Default is False. |
False
|
pathway
|
str
|
Path where the videos should be saved. Default is an empty string. |
''
|
Source code in src/cellects/utils/load_display_save.py
zoom_on_nonzero(binary_image, padding=2, return_coord=True)
Crops a binary image around non-zero elements with optional padding and returns either coordinates or cropped region.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
binary_image
|
NDArray
|
2D NumPy array containing binary values (0/1) |
required |
padding
|
int
|
Amount of zero-padding to add around the minimum bounding box |
2
|
return_coord
|
bool
|
If True, return slice coordinates instead of cropped image |
True
|
Returns:
| Type | Description |
|---|---|
If `return_coord` is True: [y_min, y_max, x_min, x_max] as 4-element Tuple.
|
If False: 2D binary array representing the cropped region defined by non-zero elements plus padding. |
Examples:
>>> img = np.zeros((10,10))
>>> img[3:7,4:6] = 1
>>> result = zoom_on_nonzero(img)
>>> print(result)
[1 8 2 7]
>>> cropped = zoom_on_nonzero(img, return_coord=False)
>>> print(cropped.shape)
(6, 5)
Notes
- Returns empty slice coordinates if input contains no non-zero elements.
- Coordinate indices are 0-based and compatible with NumPy array slicing syntax.