Lesson 6: First detector - finding cracks on the road

Approx. reading time: 25-40 minutes

Congratulations! You have reached the point where we can dig into the AI part of ATLAS.

In this lesson, we will use the dataset from Lesson 4, learn how to make annotations, and train your first detector.

The typical AI training workflow is as follows:

  1. Manually annotate part of the data to show the AI what is an object of interest and what is background
  2. Train a detector and evaluate the results
  3. If the results are good, STOP and enjoy the power of AI
  4. If the results are not good, improve the annotations and re-train the existing detector
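
The loop above can also be sketched in code. This is a toy simulation, not an ATLAS API: `train_and_evaluate` is a made-up stand-in that simply pretends detector quality grows as more images are annotated.

```python
# Toy simulation of the annotate -> train -> evaluate loop.
# train_and_evaluate is a hypothetical stand-in, not an ATLAS call:
# it pretends detector quality improves with the annotated fraction.

def train_and_evaluate(annotated_fraction):
    return min(1.0, 0.5 + 5 * annotated_fraction)

def training_workflow(target_score=0.9, max_rounds=10):
    annotated_fraction = 0.0
    score = 0.0
    for _ in range(max_rounds):
        annotated_fraction += 0.02      # step 1: annotate ~2% more images
        score = train_and_evaluate(annotated_fraction)  # steps 2 and 4
        if score >= target_score:       # step 3: good enough, stop
            break
    return annotated_fraction, score

fraction, score = training_workflow()
print(f"annotated {fraction:.0%}, score {score:.2f}")
```

The point of the sketch is the loop shape: you rarely get a good detector after the first round, so annotation and re-training alternate until the evaluation looks good.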

Let’s implement it in practice:

  1. Open the dataset and check that you are on the Images tab (left vertical bar)
  2. Open image DJI_0609. For fast navigation, you can use the search field above the image list
  3. Open the annotation toolbar on the right side of the image viewer and enable "Edit annotations"


  1. Now we need to label large damaged areas on the asphalt. To do that, select the rectangle tool and use it to draw the annotation (red shape). After drawing, a dialogue appears where you need to define an object type name, for example "big_crack". Just type it in the box.

  2. Then define a background with this tool (red dashed rectangle). After drawing, a dialogue box appears; select "big_crack" from the list of object types. This tells the AI that this background is tied to that object type.

  3. The final annotation for this image should look like this:

Figure: defining the "big_crack" object

  1. Now, let's move to image DJI_0628 (hint: to do this fast, use search or fast scrolling with SHIFT + mouse wheel) and add an annotation like in the picture below. For the crack annotation, use the polygonal tool this time. Do not forget to specify a background, and assign the "big_crack" object type to both the object annotation and the background.
  1. Very important concept #1! The background defines the area that the AI will use to learn to distinguish useful objects from background. This leads to a very important conclusion: within the background area (red dashed rectangle), all objects of interest of the specified object type must be properly and fully annotated.
  2. Very important concept #2! Try to keep a balance between annotation speed and accuracy. There is always a tradeoff between simple, quick shapes, like a rectangle or circle, and accurate but slow polygons. The general rule of thumb is: keep NOT MORE than 20% background inside your object annotation shape.
  3. Very important concept #3! You may have different object types annotated on the same image, for example "big_crack", "grass", or "road_sign". What is important to know is that each object type has its own set of manual annotations and background areas.
  4. Now that you know all these important concepts, annotate several more images. Typically, it is recommended to annotate 2-10% of your dataset; the exact number depends on the dataset size. The bigger the dataset, the smaller the percentage you need to annotate.
  5. You did a great job, and now let's train. Open the annotation toolbar (see step 3 in this lesson) and press the "Train" button. A dialogue will appear.
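
Before pressing "Train", it helps to size the annotation effort. The 2-10% guideline from step 4 can be turned into a quick back-of-the-envelope calculation; the dataset-size breakpoints below are illustrative assumptions, not an official ATLAS rule.

```python
def suggested_annotation_count(dataset_size):
    """Rough annotation budget: the bigger the dataset, the smaller the
    fraction we annotate. Breakpoints here are illustrative assumptions."""
    if dataset_size <= 100:
        fraction = 0.10   # small dataset: annotate ~10%
    elif dataset_size <= 1000:
        fraction = 0.05   # medium dataset: ~5%
    else:
        fraction = 0.02   # large dataset: ~2%
    return max(1, round(dataset_size * fraction))

budget_50 = suggested_annotation_count(50)      # small set: 5 images
budget_5000 = suggested_annotation_count(5000)  # large set: 100 images
```

In other words, a 50-image dataset might need around 5 annotated images, while a 5000-image dataset still only needs on the order of 100.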


  1. There are 2 options:
    a. "Create new" - trains a new detector from scratch
    b. "Select existing" - lets you pick an existing detector and train it again with improved annotations to achieve better detection performance
  2. Our choice for now is "Create new". Select the object type for the detector, then name the detector.
  3. Choose the object complexity: if your objects have a clear, repeating form, select "Simple objects"; if your objects vary in form, size, and pattern, select "Complex objects"
  4. Select "Detect on entire image set" to run detection across all images, or select "Detect on selected images" if detection should be performed only on the selected images


  1. Press “Train”. Training may take around 30 minutes.
  2. When the process completes, you'll get an email notification.
  3. The detection result may look like this:


  1. Notice that images in the list may have a blue or green circle in the bottom left corner of the card. Blue means a manual annotation exists for the image; green means an AI-generated annotation exists.
  2. Notice the filter button above the list. It allows you to filter images by object type. When the filter is active, it is green.


Also keep in mind that if you just need to run detection, without training, on this or another dataset, simply press "Detect" and select the proper detector name.

The last important topic in this lesson is a self-check: look at the examples below and decide which annotations are good and which are bad.

Bad - Not all big cracks inside the background area are annotated.

Bad - All object annotations are in place. But the background is not defined.

Bad - The crack annotation includes too much background (good asphalt).

Good - All crack areas are annotated properly, the separation from the normal road is good, and the background is defined.

Possible - Because the object annotation is outside of the background, it will not be used during AI training. But the background is properly defined, so the AI will still use it to learn what the background looks like.

The main takeaway from this case is that when we need to improve background detection, we can simply add an empty background area and not waste time on additional object annotation.
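
The rules behind these examples (concept #1 and the "Possible" case) can be modelled with simple rectangle geometry: an object annotation counts for training only if it lies inside a background area of the same object type. The sketch below illustrates the idea only; it is not ATLAS internals. Rectangles are (x1, y1, x2, y2).

```python
# Minimal model of per-object-type annotations, illustrating two rules
# from this lesson: each object type keeps its own annotations and
# backgrounds, and object annotations outside every background area
# are ignored during training. Not ATLAS internals, just the concept.

def inside(inner, outer):
    """True if rectangle `inner` lies fully within rectangle `outer`."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

def usable_annotations(objects, backgrounds):
    """Object annotations that fall inside at least one background area."""
    return [o for o in objects if any(inside(o, b) for b in backgrounds)]

big_crack = {
    "objects": [(10, 10, 30, 30), (200, 200, 220, 220)],
    "backgrounds": [(0, 0, 100, 100)],   # covers only the first crack
}
used = usable_annotations(big_crack["objects"], big_crack["backgrounds"])
print(used)  # only the crack inside the background is used for training
```

Note that the background rectangle itself remains useful even when it contains no object annotations at all, which is exactly the "add just empty background" trick described above.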