Evaluation Metric [code]
- Metric: Following common practice in 2D instance segmentation, we use AP-75, which requires at least 75% intersection over union (IoU) with the ground truth for a detection to count as a true positive. We implement an efficient 3D version based on the MS COCO evaluation API.
- Leaderboard Score: The ranking score for the challenge is the average of the AP-75 on both volumes (AVR_AP@75 in the leaderboard).
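To make the matching criterion concrete, here is a minimal sketch of the 3D IoU check behind AP-75 (the function name and toy volumes are illustrative, not the challenge's actual evaluation code):

```python
import numpy as np

def instance_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """IoU between two boolean 3D instance masks."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(inter) / union if union > 0 else 0.0

# A predicted instance counts as a true positive at AP-75 only if its
# IoU with a matched ground-truth instance is at least 0.75.
pred = np.zeros((4, 4, 4), dtype=bool); pred[:, :2, :] = True  # 32 voxels
gt   = np.zeros((4, 4, 4), dtype=bool); gt[:, :3, :] = True    # 48 voxels
print(instance_iou(pred, gt))  # 32/48 ~= 0.667, below the 0.75 threshold
```

The full evaluation additionally matches predicted instances to ground-truth instances and sweeps over confidence ranks, as in the MS COCO protocol.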
Submission Format [code]
The challenge accepts HDF5 files for submission. (1) A valid submission consists of two separate HDF5 files, each containing the mitochondria instance segmentation result for the test split (slices 500-999) of one volume. (2) Optionally, two additional HDF5 files can be submitted with the confidence scores for each volume, of size (N, 2), where the two columns are the segment ids and the prediction confidences. Otherwise, we use the segment size as the confidence score to rank the predictions.
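The (N, 2) fallback described above (segment size used as confidence) can be sketched as follows; the function name is illustrative and the dataset layout inside the HDF5 files is not specified here:

```python
import numpy as np

def default_scores(labels: np.ndarray) -> np.ndarray:
    """Build an (N, 2) score array from an instance label volume:
    column 0 = segment id, column 1 = confidence (here: segment size,
    the default used when no score file is submitted)."""
    ids, sizes = np.unique(labels[labels > 0], return_counts=True)
    return np.stack([ids, sizes], axis=1)

# Toy 2D label map with two instances (id 1: 4 voxels, id 2: 3 voxels).
labels = np.array([[0, 1, 1],
                   [2, 2, 2],
                   [0, 1, 1]])
print(default_scores(labels))  # [[1 4] [2 3]]
```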
Please name your files as follows:
MUST: instance segmentation result for both volumes (500x4096x4096 each):
- 0_human_instance_seg_pred.h5: H5 file containing the instance segmentation results on Human volume
- 1_rat_instance_seg_pred.h5: H5 file containing the instance segmentation results on Rat volume
Optional: confidence scores for both volumes:
- 2_human_pred_score.h5: H5 file containing the confidence score on Human volume
- 3_rat_pred_score.h5: H5 file containing the confidence score on Rat volume
Note: since only one file can be submitted, zip all .h5 files into a single .zip file.
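Packaging the files can be done with the standard-library `zipfile` module; a minimal sketch (the placeholder files below stand in for the real HDF5 outputs of your pipeline):

```python
import zipfile

# Expected submission file names: segmentation files are mandatory,
# score files are optional.
files = [
    "0_human_instance_seg_pred.h5",
    "1_rat_instance_seg_pred.h5",
    "2_human_pred_score.h5",
    "3_rat_pred_score.h5",
]

# Placeholder files for illustration only -- in practice these are
# the HDF5 files produced by your model.
for name in files:
    open(name, "wb").close()

with zipfile.ZipFile("submission.zip", "w") as zf:
    for name in files:
        zf.write(name)

print(zipfile.ZipFile("submission.zip").namelist())
```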