Submissions and Evaluation

Submissions

Phase 1: Upload the output file for a model targeting all 6 countries. The submission link can be accessed after you log in, and the task is described in the Overview section. The submission file must follow these guidelines: for each image in the test dataset, your algorithm needs to predict a list of labels and the corresponding bounding boxes. The output is expected to contain the following two columns:

  1. ImageId: the ID of the test image, for example, India_00001
  2. PredictionString: a space-delimited string in which each prediction consists of 5 integers: label x_min y_min x_max y_max. For example, 2 240 170 260 240 means label 2 with a bounding box whose corners are (x_min, y_min) = (240, 170) and (x_max, y_max) = (260, 240). We accept up to 5 predictions per image; for example, if you submit 3 42 24 170 186 1 292 28 430 198 4 168 24 292 190 5 299 238 443 374 2 160 195 294 357 6 1 224 135 356, which contains 6 bounding boxes, we will only take the first 5 into consideration (see the sketch after this list).
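
Below is a minimal sketch of writing such a submission file, assuming it is saved as a CSV with the two columns described above; the image IDs and predictions are illustrative placeholders, not real model output.

    import csv

    # Each prediction is (label, x_min, y_min, x_max, y_max); at most 5
    # predictions per image are scored. These values are placeholders.
    predictions = {
        "India_00001": [(2, 240, 170, 260, 240)],
        "India_00002": [(3, 42, 24, 170, 186), (1, 292, 28, 430, 198)],
    }

    with open("submission.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["ImageId", "PredictionString"])
        for image_id, preds in predictions.items():
            kept = preds[:5]  # only the first 5 predictions count
            pred_string = " ".join(str(v) for box in kept for v in box)
            writer.writerow([image_id, pred_string])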

Phase 2: Tentative format (details to be finalized in due course). The following files need to be submitted:

  1. Saved model file (.pt or any other format; see the saving sketch after this list),
  2. Please use this form to provide the details of the libraries required to run your model (these will be pre-installed on the organizers' system). You may also contact the organizers directly with any suggestions for this phase.
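
As a minimal sketch, assuming a PyTorch model, a .pt file can be produced as follows; the tiny network below is a placeholder for a participant's actual detector.

    import torch
    import torch.nn as nn

    # Placeholder network standing in for a participant's detector.
    model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())

    # Save the trained weights as a .pt file for submission.
    torch.save(model.state_dict(), "model.pt")

    # The organizers can restore it given the same architecture definition.
    restored = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
    restored.load_state_dict(torch.load("model.pt"))
    restored.eval()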

Phase 3: Report and Source Code submission (details to be shared in due course). All participants need to submit a detailed report and the source code for their proposed model. The submitted report will serve as the main criterion for the following:

  1. Finalizing the winners of the competition.
    1. Those who have top scores on the leaderboard but do not submit the detailed report will be disqualified from the competition.
  2. Invitation to submit papers.
    1. Generally, the top 10 participants are invited, but based on the content of the submitted report, up to 20 participants may be considered.
  3. Invitation for collaboration with organizers for the paper writing.
    1. Based on the report content, selected teams will be invited to write a joint paper coauthored with the challenge organizers.
Participants may refer to the guidelines for winning model documentation (https://www.kaggle.com/WinningModelDocumentationGuidelines).

Paper submission
  • After the competition phase is completed, a link for submitting the accompanying academic paper will be provided to the top 10 participants (the number may change based on the quality of submissions) as ranked by the leaderboard described above.
  • Peer reviewers will review the academic papers.
  • The papers are expected to conform to the format set by the conference, which can be found at IEEE BigData CFP.
Contents in the technical paper and report (Required):
  • Explanation of your method and the tools used.
  • Evaluation of your method (you may use results obtained on the road damage detection challenge site).
  • Detailed evaluation of your results based on factors such as inference speed, model size, and training time.
  • Code and trained model links.
  • Error analysis: examples of failed attempts and efforts that did not go well.

Your Code
Source code must also be submitted, through a publicly available repository on a Git-based version control hosting service such as GitHub, for the final evaluation. All source code is expected to be released as open-source software under a generally accepted license such as the Apache License 2.0, the GNU General Public License, the MIT License, or another license recognized by the Open Source Initiative.

Evaluation

Abstract: Participants will be evaluated in the following three phases:

  • Phase 1: Qualifying Round: all teams with an F1-score above 70% qualify for Phase 2.
  • Phase 2: Ranking Round: teams are ranked according to the inference speed of their models.
  • Phase 3: Evaluation of the submitted Report and Source Code.

Details:

Phase 1: The results of the models proposed by the participants are evaluated by the F-measure (F1-score), the harmonic mean of precision and recall. A prediction is correct only when the IoU (Intersection over Union, see Fig. 1) is 0.5 or more and the predicted label matches the ground-truth label. Dividing the area of overlap by the area of the union yields the Intersection over Union (Fig. 2).

Figure 1: Example of Intersection over Union
Figure 2: The definition of Intersection over Union (IoU)
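
As a minimal sketch, the IoU defined in Fig. 2 can be computed from two boxes in the (x_min, y_min, x_max, y_max) format used in the submission file; the example boxes below are illustrative.

    def iou(box_a, box_b):
        """IoU of two boxes given as (x_min, y_min, x_max, y_max)."""
        # Intersection rectangle; empty if the boxes do not overlap.
        x1 = max(box_a[0], box_b[0])
        y1 = max(box_a[1], box_b[1])
        x2 = min(box_a[2], box_b[2])
        y2 = min(box_a[3], box_b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)

        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    # A prediction counts as correct only if IoU >= 0.5 and labels match.
    print(iou((240, 170, 260, 240), (245, 175, 265, 245)))  # ~0.53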

Phase 2: The inference speed of the models proposed by the participants is the evaluation criterion. Participants need to submit their proposed model file (saved checkpoint, .pb file, or another format), along with the inference script (.py file) and the corresponding implementation requirements (.txt); a timing sketch is shown below.
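
Below is a minimal sketch of how inference time might be measured, using a placeholder PyTorch model and a dummy input; the organizers' exact benchmarking procedure may differ.

    import time
    import torch
    import torch.nn as nn

    # Placeholder detector and a dummy input standing in for a test image.
    model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).eval()
    batch = torch.randn(1, 3, 600, 600)

    with torch.no_grad():
        model(batch)  # warm-up run, excluded from timing
        start = time.perf_counter()
        for _ in range(10):
            model(batch)
        per_image = (time.perf_counter() - start) / 10

    print(f"average inference time: {per_image * 1000:.1f} ms per image")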

Phase 3: A subjective evaluation of the submitted Report and Source Code will be carried out to finalize the winners of the competition. In addition, based on the report content, selected teams will be invited to collaborate with the organizers on paper writing.