We list face databases widely used in facial landmark studies and summarize their specifications below.
1. Caltech Occluded Faces in the Wild (COFW).
o Source: The COFW dataset was built by Burgos-Artizzu, Perona, and Dollár at the California Institute of Technology (Caltech).
o Purpose: The COFW dataset contains face images with severe facial occlusion, collected from the internet.
o Properties:
Properties | Descriptions
# of subjects | -
# of images/videos | 1,345 training images and 507 testing images
Static/Videos | Static images
Single/Multiple faces | Single
Gray/Color | Color
Resolution | -
Face pose | Various poses
Facial expression | Various expressions
Illumination | Various illuminations
3D data | -
Ground truth | 29 facial landmarks with per-landmark occlusion annotations
o Reference: refer to the paper: X. P. Burgos-Artizzu, P. Perona, and P. Dollár, "Robust face landmark estimation under occlusion," in Proc. IEEE International Conference on Computer Vision (ICCV), Sydney, Australia, 2013.
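Since COFW pairs each of its 29 landmarks with an occlusion flag, a common evaluation choice is to score only the visible points. The sketch below assumes landmarks are plain (x, y) tuples and occlusion flags are 0/1 lists; the function name and data layout are illustrative, not part of the COFW release.

```python
import math

def visible_only_error(pred, gt, occluded):
    """Mean Euclidean error over landmarks whose ground truth
    is marked visible (occlusion flag == 0).

    pred, gt: lists of (x, y) tuples, same length (29 in COFW).
    occluded: list of 0/1 flags from the annotation.
    """
    dists = [
        math.hypot(px - gx, py - gy)
        for (px, py), (gx, gy), occ in zip(pred, gt, occluded)
        if occ == 0
    ]
    if not dists:
        raise ValueError("all landmarks are occluded")
    return sum(dists) / len(dists)

# Toy example with 3 of the 29 points; the last one is occluded
# and therefore excluded from the average.
gt = [(0.0, 0.0), (10.0, 0.0), (5.0, 5.0)]
pred = [(3.0, 4.0), (10.0, 0.0), (100.0, 100.0)]
print(visible_only_error(pred, gt, [0, 0, 1]))  # (5.0 + 0.0) / 2 = 2.5
```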
2. iBUG 300 Faces In-the-Wild (300W) Challenge database.
o Source: The 300W dataset was built by the Intelligent Behaviour Understanding Group (iBUG) at Imperial College London.
o Purpose: The 300W dataset contains "in-the-wild" face images collected from the internet.
o Properties:
Properties | Descriptions
# of subjects | -
# of images/videos | Around 4,000 images
Static/Videos | Static images
Single/Multiple faces | Single
Gray/Color | Color
Resolution | -
Face pose | Various poses
Facial expression | Various expressions
Illumination | Various illuminations
3D data | -
Ground truth | 68 facial landmark annotations
o Reference: refer to the paper: C. Sagonas, E. Antonakos, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic, "300 Faces In-the-Wild Challenge: Database and results," Image and Vision Computing (IMAVIS), 2016.
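The iBUG annotations are distributed as plain-text .pts files: a short header (version, n_points) followed by one "x y" pair per line between curly braces. A minimal parser might look like this (real 300W files carry n_points: 68; the 3-point sample here is only for illustration):

```python
def parse_pts(text):
    """Parse an ibug-style .pts annotation into a list of (x, y) floats."""
    lines = [ln.strip() for ln in text.strip().splitlines()]
    n_points = None
    points = []
    in_body = False
    for ln in lines:
        if ln.startswith("n_points:"):
            n_points = int(ln.split(":")[1])
        elif ln == "{":
            in_body = True
        elif ln == "}":
            in_body = False
        elif in_body and ln:
            x, y = ln.split()
            points.append((float(x), float(y)))
    if n_points is not None and len(points) != n_points:
        raise ValueError("point count does not match header")
    return points

sample = """version: 1
n_points: 3
{
10.5 20.0
30.0 40.5
50.0 60.0
}"""
print(parse_pts(sample))  # [(10.5, 20.0), (30.0, 40.5), (50.0, 60.0)]
```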
3. iBUG 300 Videos in the Wild (300-VW) Challenge dataset.
o Source: The 300-VW dataset was built by the Intelligent Behaviour Understanding Group (iBUG) at Imperial College London.
o Purpose: The 300-VW dataset contains "in-the-wild" face videos collected from the internet.
o Properties:
Properties | Descriptions
# of subjects | -
# of images/videos | 114 videos
Static/Videos | Videos
Single/Multiple faces | Single
Gray/Color | Color
Resolution | -
Face pose | Various poses
Facial expression | Various expressions
Illumination | Various illuminations
3D data | -
Ground truth | 68 facial landmark annotations
o Reference: refer to the paper: J. Shen, S. Zafeiriou, G. G. Chrysos, J. Kossaifi, G. Tzimiropoulos, and M. Pantic, "The first facial landmark tracking in-the-wild challenge: Benchmark and results," in Proc. IEEE International Conference on Computer Vision Workshops (ICCVW), 2015.
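Because 300-VW is a tracking benchmark, per-frame detections are often post-processed to reduce jitter. One simple, generic option (not part of the challenge protocol) is a moving average over each landmark's trajectory:

```python
def smooth_trajectory(frames, window=3):
    """Moving-average smoothing of one landmark's (x, y) trajectory.

    frames: list of (x, y) positions, one per video frame.
    window: odd window size; frames near the ends use a truncated window.
    """
    half = window // 2
    smoothed = []
    for i in range(len(frames)):
        lo = max(0, i - half)
        hi = min(len(frames), i + half + 1)
        xs = [frames[j][0] for j in range(lo, hi)]
        ys = [frames[j][1] for j in range(lo, hi)]
        smoothed.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return smoothed

traj = [(0.0, 0.0), (2.0, 2.0), (4.0, 4.0), (6.0, 6.0)]
print(smooth_trajectory(traj))  # [(1.0, 1.0), (2.0, 2.0), (4.0, 4.0), (5.0, 5.0)]
```

Larger windows trade responsiveness for stability; in practice the window size is tuned per application.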
4. 3D Face Alignment in the Wild (3DFAW) Challenge dataset.
o Source: The 3DFAW dataset was built by the organizers of the 3D Face Alignment in the Wild (3DFAW) Challenge.
o Purpose: The 3DFAW dataset contains real and synthetic face images with 3D facial landmark annotations.
o Properties:
Properties | Descriptions
# of subjects | -
# of images/videos | Over 10,000 images
Static/Videos | Static images
Single/Multiple faces | Single
Gray/Color | Color
Resolution | -
Face pose | Various poses
Facial expression | Various expressions
Illumination | Various illuminations
3D data | 3D facial landmark annotations
Ground truth | 66 3D facial landmark annotations
o Reference: refer to the website: http://mhug.disi.unitn.it/workshop/3dfaw/.
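With 3D annotations, point-to-point evaluation extends naturally to Euclidean distance in 3D. The sketch below computes an unnormalized mean 3D error; the challenge's official metrics apply additional normalization, so treat this only as an illustration:

```python
import math

def mean_3d_error(pred, gt):
    """Mean Euclidean distance between corresponding 3D landmarks.

    pred, gt: lists of (x, y, z) tuples of equal length
    (66 points in the 3DFAW markup).
    """
    assert len(pred) == len(gt)
    total = 0.0
    for (px, py, pz), (gx, gy, gz) in zip(pred, gt):
        total += math.sqrt((px - gx) ** 2 + (py - gy) ** 2 + (pz - gz) ** 2)
    return total / len(pred)

gt = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
pred = [(1.0, 2.0, 2.0), (1.0, 1.0, 1.0)]
print(mean_3d_error(pred, gt))  # (3.0 + 0.0) / 2 = 1.5
```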
5. Annotated Facial Landmarks in the Wild (AFLW) database
o Source: The AFLW database was built by Koestinger, Wohlhart, Roth, and Bischof at Graz University of Technology.
o Purpose: Annotated Facial Landmarks in the Wild (AFLW) provides a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions.
o Properties:
Properties | Descriptions
# of subjects | -
# of images/videos | 25,993 images
Static/Videos | Static images
Single/Multiple faces | Multiple
Gray/Color | Color
Resolution | High resolution
Face pose | Various poses
Facial expression | Various expressions
Illumination | Various illuminations
3D data | Coarse head pose annotations
Ground truth | 21-point markup (only visible landmarks are annotated)
o Reference: refer to the paper: Martin Koestinger, Paul Wohlhart, Peter M. Roth, and Horst Bischof, "Annotated Facial Landmarks in the Wild: A Large-scale, Real-world Database for Facial Landmark Localization", In First IEEE International Workshop on Benchmarking Facial Image Analysis Technologies, 2011.
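Since AFLW images can contain multiple faces, each face carries its own landmark set, and a rough per-face bounding box can be derived from its annotated points. This is an illustrative helper, not part of the AFLW toolkit; boxes derived from sparse landmarks are only approximate:

```python
def landmarks_to_bbox(points, margin=0.0):
    """Axis-aligned bounding box around a face's annotated landmarks.

    points: list of (x, y) for one face (up to 21 in AFLW, since only
    visible landmarks are annotated).
    margin: fractional padding added on every side.
    Returns (x_min, y_min, x_max, y_max).
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return (min(xs) - margin * w, min(ys) - margin * h,
            max(xs) + margin * w, max(ys) + margin * h)

face = [(10.0, 20.0), (50.0, 25.0), (30.0, 60.0)]
print(landmarks_to_bbox(face))  # (10.0, 20.0, 50.0, 60.0)
```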
6. Labeled Face Parts in the Wild (LFPW) Dataset
o Source: The LFPW dataset was built by Belhumeur, Jacobs, Kriegman, and Kumar.
o Purpose: LFPW was used to evaluate a face part (facial fiducial point) detection method. Release 1 of LFPW consists of 1,432 face images downloaded from the web using simple text queries on sites such as google.com, flickr.com, and yahoo.com. Each image was labeled by three Amazon Mechanical Turk (MTurk) workers, and 29 fiducial points are included in the dataset.
o Properties:
Properties | Descriptions
# of subjects | -
# of images/videos | 1,432 images
Static/Videos | Static images
Single/Multiple faces | Single
Gray/Color | Color
Resolution | -
Face pose | Various poses
Facial expression | Various expressions
Illumination | Various illuminations
3D data | N/A
Ground truth | 29 annotated fiducial points
o Reference: refer to the paper: P. N. Belhumeur, D. W. Jacobs, D. J. Kriegman, and N. Kumar, "Localizing Parts of Faces Using a Consensus of Exemplars," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011. Additional annotations can be found at http://ibug.doc.ic.ac.uk/resources.
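With three MTurk workers labeling each image, the individual annotations must be combined into a single ground truth. A coordinate-wise median per point is one simple, outlier-resistant way to do this; the released LFPW ground truth may use a different aggregation:

```python
from statistics import median

def consensus_landmarks(annotations):
    """Combine several workers' annotations of the same face by taking
    the coordinate-wise median for each landmark.

    annotations: list of landmark lists, one per worker; each is a list
    of (x, y) tuples (29 fiducial points per face in LFPW).
    """
    n_points = len(annotations[0])
    out = []
    for i in range(n_points):
        xs = [a[i][0] for a in annotations]
        ys = [a[i][1] for a in annotations]
        out.append((median(xs), median(ys)))
    return out

workers = [
    [(10.0, 10.0), (20.0, 20.0)],
    [(11.0, 10.0), (21.0, 19.0)],
    [(50.0, 10.0), (20.0, 20.0)],  # one worker misclicked point 0
]
print(consensus_landmarks(workers))  # [(11.0, 10.0), (20.0, 20.0)]
```

The median shrugs off the misclick on point 0, where a plain mean would be pulled toward it.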
7. Helen Dataset
o Source: The Helen database was built by Le, Brandt, Lin, Bourdev, and Huang.
o Purpose: The Helen database provides a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions.
o Properties:
Properties | Descriptions
# of subjects | -
# of images/videos | 2,000 training images and 330 testing images
Static/Videos | Static images
Single/Multiple faces | Single
Gray/Color | Color
Resolution | High resolution
Face pose | Various poses
Facial expression | Various expressions
Illumination | Various illuminations
3D data | N/A
Ground truth | 194 annotated facial landmarks
o Reference: refer to the paper: V. Le, J. Brandt, Z. Lin, L. Bourdev, and T. S. Huang, "Interactive Facial Feature Localization," in Proc. European Conference on Computer Vision (ECCV), 2012. Additional annotations can be found at http://ibug.doc.ic.ac.uk/resources.
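Dense markups such as Helen's 194 points are typically compared with a normalized mean error: the average point-to-point distance divided by a face-size proxy such as the inter-ocular distance. The eye-point indices below are passed in by the caller because they depend on the markup, and the toy coordinates are purely illustrative:

```python
import math

def normalized_mean_error(pred, gt, left_eye_idx, right_eye_idx):
    """Mean point-to-point error normalized by inter-ocular distance.

    pred, gt: lists of (x, y) tuples (194 points in Helen's markup).
    left_eye_idx, right_eye_idx: indices of the two eye reference
    points in gt; these depend on the markup, so they are arguments
    rather than hard-coded constants.
    """
    inter_ocular = math.hypot(gt[left_eye_idx][0] - gt[right_eye_idx][0],
                              gt[left_eye_idx][1] - gt[right_eye_idx][1])
    errs = [math.hypot(px - gx, py - gy)
            for (px, py), (gx, gy) in zip(pred, gt)]
    return (sum(errs) / len(errs)) / inter_ocular

gt = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
pred = [(0.0, 1.0), (10.0, 1.0), (5.0, 9.0)]
print(normalized_mean_error(pred, gt, 0, 1))  # 1.0 / 10.0 = 0.1
```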