COCO Annotator – Web-based Image Segmentation Tool for Object Detection, Localization, and Keypoints

Authors: Daniela Stefanics, Markus Fox
Affiliation: University of Klagenfurt
Editors: Mathias Lux and Marco Bertini



COCO Annotator is an image annotation tool for labelling images to create training data for object detection and localization. It provides many features, including the ability to label an image segment by drawing, to label objects with disconnected visible parts, to efficiently store and export annotations in the well-known COCO format, and to import existing publicly available datasets in COCO format. Once installed, or started with Docker, the interface is web-based and customizable, and provides different tools for creating datasets. The exported annotations can be used to train modern deep learning algorithms (Mask R-CNN, YOLO, etc.), and the COCO format is supported as an import format by several machine learning frameworks. COCO Annotator is developed by Justin Brooks and is supported by a lively community on GitHub [1]. 

Getting Started

  • Make sure Docker is installed; otherwise install it [2]
  • Clone the project from the repository [3]
  • Navigate to the directory and run:
cd path/coco-annotator
docker-compose up

This starts the application on localhost (default port: 5000).

After the initial registration, you can immediately create your first dataset with categories and start annotating the images.

COCO Annotator dataset page


To import your own images, after creating the dataset on the web page, simply copy the image files into the corresponding dataset folder.
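This step can be sketched in Python. The folder layout below is an assumption based on the default docker-compose setup, which mounts the `datasets` directory of the cloned repository into the container; the dataset name `fruits` and the source path are placeholders, so adjust them to your installation:

```python
from pathlib import Path
import shutil

coco_dir = Path("coco-annotator")               # where the repository was cloned
dataset_dir = coco_dir / "datasets" / "fruits"  # dataset name chosen in the web UI
dataset_dir.mkdir(parents=True, exist_ok=True)  # normally created by the app itself

source = Path.home() / "Pictures" / "fruits"    # wherever your images live
for image in source.glob("*.jpg"):
    shutil.copy(image, dataset_dir)             # copied files are auto-imported
```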


COCO Annotator will automatically import your images and you can start with annotations. In the example below, we have already annotated one instance of an apple, two strawberries, two pears, and one lemon.

COCO Annotator image annotation

In the annotation view, the right navigation bar shows all selectable categories; on the left side, you find all tools that can be used for annotating, such as bounding box selection, polygon, eraser, etc. For a quick way of annotating we recommend using the bounding box tool, as many of the current object detection algorithms (e.g. YOLO, Faster R-CNN) are based on bounding box detection. For more fine-grained annotation (instance segmentation) the polygon tool is the easiest to use.

COCO Annotator offers easy and fast navigation between the images, as well as an export function when you are done annotating all images.

Export Format: COCO

The image annotations with their associated categories are exported as JSON in the widely known COCO format. The format originated with Microsoft's COCO: Common Objects in Context dataset [4]. According to the official website:

“COCO is a large-scale object detection, segmentation, and captioning dataset. COCO has several features: Object segmentation, Recognition in context, Superpixel stuff segmentation, 330K images (>200K labelled), 1.5 million object instances, 80 object categories, 91 stuff categories, 5 captions per image, 250,000 people with keypoints”
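To illustrate the structure, a COCO-format file is a single JSON object whose core sections are `images`, `categories`, and `annotations` (real exports additionally carry `info` and `licenses` sections). The example below is hand-made for illustration, not an actual COCO Annotator export:

```python
import json

# Minimal COCO-format document: one image, one category,
# and one bounding-box annotation linking the two by id.
coco = {
    "images": [
        {"id": 1, "file_name": "fruit_bowl.jpg", "width": 640, "height": 480}
    ],
    "categories": [
        {"id": 1, "name": "apple", "supercategory": "fruit"}
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,                      # refers to images[].id
            "category_id": 1,                   # refers to categories[].id
            "bbox": [120.0, 80.0, 60.0, 55.0],  # [x, y, width, height] in pixels
            "area": 60.0 * 55.0,
            "segmentation": [],                 # polygon points, if any were drawn
            "iscrowd": 0,
        }
    ],
}

print(json.dumps(coco, indent=2))
```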

The great benefit we have experienced in our work with COCO Annotator and using the COCO format is that there are many pre-trained state-of-the-art deep learning models available online that support (or are based on) the COCO format. Therefore, adapting the original files to take your custom dataset as input for training/fine-tuning the model requires minimal work.
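As one concrete example of such an adaptation, detectors in the YOLO family expect normalized center coordinates rather than COCO's absolute `[x, y, width, height]` pixel boxes; the conversion takes only a few lines. This is a sketch using a hand-made annotation, not tied to any particular training script:

```python
def coco_bbox_to_yolo(bbox, img_w, img_h):
    """Convert a COCO [x, y, w, h] pixel box to YOLO's normalized
    [cx, cy, w, h] (box center and size, all values in 0..1)."""
    x, y, w, h = bbox
    return [(x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h]

# Hand-made example box on a 640x480 image:
print(coco_bbox_to_yolo([120.0, 80.0, 60.0, 55.0], 640, 480))
```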


The software is MIT [5] licensed, which means users are allowed to do anything with the code as long as they include the original copyright and license notice in any copy of the software/source.


Good reasons to use the COCO Annotator are:

  • Free and easy to download and use,
  • quick upload of the images,
  • everything is running on your local machine,
  • few but efficient tools for precise annotations, and
  • the possibility to import and export data easily.

COCO Annotator is an excellent application for annotating images quickly and thoroughly with the associated classifications. Due to its ease of use and the wide acceptance of the COCO format, it significantly lowers the barrier to annotating training data for a deep learning model.


[1] Brooks, J. (2019). COCO Annotator. Last accessed 2021-09-08.

[2] Install Docker Engine. Last accessed 2021-09-08.

[3] jsbroks/coco-annotator. Last accessed 2021-09-08.

[4] Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P. and Zitnick, C.L. (2014). Microsoft COCO: Common Objects in Context. European Conference on Computer Vision, 740-755.

[5] MIT License (Expat) Explained in Plain English. Last accessed 2021-09-08.
