Configurable image annotation on the Web

Authors: Matthieu Pizenberg, Axel Carlier, Emmanuel Faure, Vincent Charvillat, Marco Bertini, Mathias Lux
Affiliation: University of Toulouse, CNRS - IRIT, University of Klagenfurt, University of Florence
Editors: Mathias Lux and Marco Bertini


Image annotations are required in a wide range of applications, including image classification (which requires textual labels), object detection (bounding boxes), and image segmentation (pixel-wise classification). The application we present here was part of the ACM Multimedia 2018 Open Source Software Competition track and provides a number of configurable manual image annotation tools for detection and segmentation. It is available online, but if you wish to run it offline, the simplest way is to use the prepacked Docker container.

docker run -d -p 80:8003 mpizenberg/annotation-app:app

Getting started with bounding boxes

The steps to draw a bounding box in an image and retrieve its coordinates are:

  • Open a new tab at the annotation application address:
  • Click on the top right image icon in the toolbar to load your image.
  • Click on the config icon (next to the image icon) in the toolbar and open a JSON file containing { "annotations": [ "bbox" ], "classes": [] } like this one: config-simple.json
  • Click on the bounding box icon in the toolbar.
  • Draw your bounding box by click-dragging.
  • Click on the save icon to retrieve the bounding box coordinates.

Saving generates an annotations.json file containing your annotations. You can copy and paste its content into JSONLint for a more human-readable version. For the walkthrough above, the saved file will look like the example below (reformatted for readability).

{ "config": { "classes": [], "annotations": ["bbox"] },
  "images": [ { "image": "chill-boss-cat.jpg",
                "annotations": [ { "type": "bbox",
                                   "annotations": [ {"minX":675,"maxX":839,"minY":868,"maxY":1103} ]
                                 } ]
              } ]
}

As you can see, the bounding box coordinates are given as minX, maxX, minY, and maxY values, and the config used during annotation is prepended to the actual annotation data. Here we have only one image (“chill-boss-cat.jpg”), one annotation type (“bbox”), and one annotation, but the syntax supports many of each.
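As a sketch, the saved file can be consumed with a few lines of Python. The field names below are taken from the example above; the helper `bboxes` is our own illustration, not part of the application.

```python
import json

# The saved file from the walkthrough above, embedded as a JSON string
# (in practice you would json.load the annotations.json file instead).
saved = """
{ "config": { "classes": [], "annotations": ["bbox"] },
  "images": [ { "image": "chill-boss-cat.jpg",
                "annotations": [ { "type": "bbox",
                                   "annotations": [ {"minX":675,"maxX":839,"minY":868,"maxY":1103} ] } ] } ] }
"""

data = json.loads(saved)

def bboxes(data):
    """Collect (image name, box dict) pairs for every bbox annotation."""
    result = []
    for image in data["images"]:
        for group in image["annotations"]:
            if group["type"] == "bbox":
                result.extend((image["image"], box) for box in group["annotations"])
    return result

for name, box in bboxes(data):
    width = box["maxX"] - box["minX"]   # 839 - 675 = 164
    height = box["maxY"] - box["minY"]  # 1103 - 868 = 235
    print(name, width, height)
```

The same loop generalizes to the other annotation types by branching on the "type" field.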

Annotating multiple images

To annotate many images, simply click again on the top right image icon and import one or multiple images. All loaded images become selectable in the right sidebar. You can switch images at any time by clicking on another one; annotations are preserved for each image independently. If you wish to clear the image list, simply reload the Web page.

  • Beware that the browser cannot access the disk without your consent, so images are loaded into memory: this application is purely client side (no server interaction). Consequently, you should avoid loading gigabytes' worth of images at once. Work in batches instead, and reload the page between batches.

The annotation tools available

The annotation tools available in this application, with their configuration names, are:

  • Points: “point”
  • Bounding boxes (rectangles): “bbox”
  • Strokes (lines): “stroke”
  • Outlines (free draw closed shapes): “outline”
  • Polygons: “polygon”

We can choose which annotation tools to provide in the JSON config file (config-all-tools.json).

{ "classes": [],
  "annotations": [ "point", "bbox", "stroke", "outline", "polygon" ]
}
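Such a config file can also be generated programmatically instead of written by hand. A minimal Python sketch (the output file name config-all-tools.json matches the link above; the tool names are the configuration names from the list above):

```python
import json

# All tool configuration names recognised by the application, per the list above.
TOOLS = ["point", "bbox", "stroke", "outline", "polygon"]

# A config enabling every tool, with no classes.
config = {"classes": [], "annotations": TOOLS}

# Write the file, then load it through the toolbar's config icon.
with open("config-all-tools.json", "w") as f:
    json.dump(config, f, indent=2)
```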

With a config file such as the one above, we get the following interface:

Click on any tool of your choosing to start annotating with it. The usage of each tool should be intuitive. However, this interface does not fully support undo/redo history, and annotations are not editable once finished. The only way to fix the last annotation is to remove it with the delete icon and redo it.

Using classes (or “labels”)

For most annotation tasks, we also need to differentiate objects in the images. Typically, each annotated area is attributed a “class”, sometimes also called a “label”. The PASCAL VOC dataset, for example, is composed of 20 classes, grouped by category. Those classes are:

  • Person: person
  • Animal: bird, cat, cow, dog, horse, sheep
  • Vehicle: aeroplane, bicycle, boat, bus, car, motorbike, train
  • Indoor: bottle, chair, dining table, potted plant, sofa, tv/monitor

In our application, classes are specified in the JSON config file. A one-to-one config for PASCAL VOC classes is represented by this config-pascal.json file:

{ "classes":
  [ { "category": "Person", "classes": [ "person" ] },
    { "category": "Animal", "classes": [ "bird", "cat", "cow", "dog", "horse", "sheep" ] },
    { "category": "Vehicle", "classes": [ "aeroplane", "bicycle", "boat", "bus", "car", "motorbike", "train" ] },
    { "category": "Indoor", "classes": [ "bottle", "chair", "dining table", "potted plant", "sofa", "tv/monitor" ] }
  ],
  "annotations": [ "point", "bbox", "stroke", "outline", "polygon" ]
}
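A config this size is easier to build from a small script than by hand. A Python sketch under the same grouping as the PASCAL VOC list above (the file name config-pascal.json matches the link above):

```python
import json

# PASCAL VOC's 20 classes grouped by category, as listed above.
CATEGORIES = {
    "Person": ["person"],
    "Animal": ["bird", "cat", "cow", "dog", "horse", "sheep"],
    "Vehicle": ["aeroplane", "bicycle", "boat", "bus", "car", "motorbike", "train"],
    "Indoor": ["bottle", "chair", "dining table", "potted plant", "sofa", "tv/monitor"],
}

# Assemble the config in the application's format: one {"category", "classes"}
# object per category, plus the list of enabled tools.
config = {
    "classes": [{"category": c, "classes": cls} for c, cls in CATEGORIES.items()],
    "annotations": ["point", "bbox", "stroke", "outline", "polygon"],
}

with open("config-pascal.json", "w") as f:
    json.dump(config, f, indent=2)
```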

To attribute a class to an annotation, first select the class in the left sidebar, then use your tool to create an annotation. Repeat the process for all the objects you want to annotate in the image. Selecting a class in the left sidebar also highlights the annotations corresponding to that class. See for example in the image below how annotations of the class “person” are highlighted even though they are of different types (bounding box and free-hand outline).


Colors are not used to differentiate classes, for multiple reasons. With 20 classes, as in the PASCAL VOC case, the color differences would be small and might not be color-blind friendly. Letting the user customize colors would also add a burden to the configuration setup and keep users from their goal of simple annotation. Instead, the authors have strived for the best possible defaults, relying more on contrast than on hue.

Classes can have any number of hierarchical levels in the configuration. Simply replace a class by an object with two keys, “category” and “classes”, like in the example below (config-subclasses.json).

{ "classes": [ "Class of level 1",
               { "category": "Category of level 1",
                 "classes": [ "Class of level 2",
                              { "category": "Category of level 2",
                                "classes": [ "Class of level 3" ] }
                            ] }
             ],
  "annotations": [ "point", "bbox", "stroke", "outline", "polygon" ]
}
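Because entries can nest to any depth, consuming such a config takes a small recursive walk: each entry in a "classes" list is either a plain string (a class) or a {"category", "classes"} object. The `flatten` helper below is our own illustration, not part of the application.

```python
def flatten(entries, prefix=()):
    """Yield (category path, class name) pairs from a nested "classes" list."""
    for entry in entries:
        if isinstance(entry, str):
            yield prefix, entry
        else:
            yield from flatten(entry["classes"], prefix + (entry["category"],))

# The "classes" value from config-subclasses.json above.
config_classes = [
    "Class of level 1",
    {"category": "Category of level 1",
     "classes": ["Class of level 2",
                 {"category": "Category of level 2",
                  "classes": ["Class of level 3"]}]},
]

for path, name in flatten(config_classes):
    print("/".join(path + (name,)))
# Class of level 1
# Category of level 1/Class of level 2
# Category of level 1/Category of level 2/Class of level 3
```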


This annotation application has been designed with three objectives in mind: 1) easy to access, 2) easy to use, and 3) easy to customize. For more information on what is possible, such as how to integrate with crowdsourcing platforms like Amazon Mechanical Turk, please visit the GitHub repository of the project and its guide. Many UI improvements and additional features are on the roadmap. If you are having trouble with the application, or wish to contribute in any way, simply open an issue in the repository's issue tracker; the authors are available there.
