Due to the growing popularity of low-cost scanners, such as the Microsoft Kinect, several RGB-D object datasets have emerged in the research community. The recent advent of Virtual Reality technologies and applications has increased the demand for 3D objects, which such sensors can now capture easily and efficiently, further enabling their exploitation in real-time recognition scenarios. For these scenarios, it is essential to identify which 3D shape descriptors provide the best matching and retrieval of such digitized objects [1].
One such dataset is the “RGB-D ISCTE-IUL Dataset” (http://dataset.mldc.pt/) [2], whose objects were captured with the Microsoft Kinect One sensor. It is freely available to the scientific community for experiments in different scenarios and applications, ranging from computer vision and computer graphics to image/object query and retrieval, machine learning, and other domains.
The “RGB-D ISCTE-IUL Dataset” provides, for each object, 90 frame pairs of RGB and depth images, a segmented and registered point cloud, and a polygon mesh model. Each captured object also has a matching “high-quality” synthetic 3D model, taken from another 3D dataset with rights for R&D use, such as the Princeton Shape Benchmark [3] or the sketch-based 3D dataset introduced by Li et al. [4]. See http://dataset.mldc.pt/index.html#overview for more details on the dataset. In this track, which follows on the previous SHREC 2015 competition [5], we aim to objectively evaluate the performance of 3D shape retrieval techniques on the “RGB-D ISCTE-IUL Dataset”, which is now populated with more than 200 objects.
Considering the SHREC track requirements, participants will be able to describe their object queries using the raw data of the captured object, the segmented point clouds of each camera view, the registered point cloud of the full object, or the triangle mesh of the full object. With these options, participants can use the format most appropriate for their retrieval algorithm.
The following table describes the raw data in the dataset, provided for each frame captured by the Kinect One:
| Data | Description | Filetype | Reason |
|---|---|---|---|
| Color image | RGB format | .png | Lossless and community standard |
| Depth image | In millimeters | .png | Lossless and community standard |
| Bounding box | The two bounding-box corners in color-image coordinates | .txt | Simple to read/write |
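Since the depth images store range in millimeters, a view's 3D points can be recovered by back-projecting each valid depth pixel through the camera intrinsics. The sketch below assumes illustrative Kinect One (v2) depth-stream parameters (512×424 resolution, placeholder focal lengths and principal point); the dataset's own calibration should be used in practice.

```python
import numpy as np

# Illustrative intrinsics for the Kinect One 512x424 depth stream (assumed
# placeholders, not the dataset's calibration).
FX, FY = 365.0, 365.0   # focal lengths in pixels
CX, CY = 256.0, 212.0   # principal point

def depth_to_points(depth_mm):
    """Back-project a depth image (uint16, millimeters) into an Nx3 array
    of 3D points in meters, skipping invalid (zero-depth) pixels."""
    v, u = np.nonzero(depth_mm)                       # rows/cols with valid depth
    z = depth_mm[v, u].astype(np.float64) / 1000.0    # mm -> m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.column_stack((x, y, z))

# Tiny synthetic depth map standing in for a decoded .png frame:
# a single valid pixel, 1.5 m away, at the principal point.
depth = np.zeros((424, 512), dtype=np.uint16)
depth[212, 256] = 1500
pts = depth_to_points(depth)
```

The same projection, restricted to the pixels inside the provided bounding box, yields the points belonging to the captured object only.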
The following table describes the processed data in the dataset (10 object instances per class), provided for each object instance:
| Data | Description | Filetype | Reason |
|---|---|---|---|
| Segmented point clouds | PCD format, in millimeters | .pcd | Community standard |
| Camera pose | 4×4 transformation matrix with float/double precision | .txt | Simple to read/write |
| Registered point cloud | PCD and PLY formats, in millimeters | .pcd, .ply | Community standard |
| 3D mesh (Poisson reconstruction) | PLY and OFF formats | .ply, .off | Contains vertex color; simple to read/write |
| 3D mesh (basic triangulation) | PLY and OFF formats | .ply, .off | Contains vertex color; simple to read/write |
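The per-view segmented clouds and the 4×4 camera poses are, in principle, all that is needed to rebuild a registered cloud: each view's points are mapped into a common frame by its pose matrix. A minimal sketch, assuming each pose is stored as a view-to-world homogeneous transform applied to row-vector points:

```python
import numpy as np

def transform_points(points, pose):
    """Apply a 4x4 homogeneous transform to an Nx3 point array."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])  # Nx4
    return (homog @ pose.T)[:, :3]

def register_views(views):
    """Merge (points, pose) pairs from several camera views into one cloud."""
    return np.vstack([transform_points(p, T) for p, T in views])

# Toy example: a single view whose pose translates it by +1 unit along x.
pose = np.eye(4)
pose[0, 3] = 1.0
cloud = register_views([(np.array([[0.0, 0.0, 2.0]]), pose)])
```

Whether the stored matrices map view-to-world or world-to-view is a dataset convention; if the merged views do not line up, the inverse of each pose should be applied instead.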
Example ranked list for a query, as an object ID and similarity score per line, in decreasing order of similarity:

| Object ID | Similarity |
|---|---|
| 144 | 1.00000 |
| 24 | 0.87221 |
| 45 | 0.79915 |
| 201 | 0.59102 |
| 203 | 0.54902 |
| 32 | 0.51241 |
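Producing such a ranked list from a retrieval method's similarity scores is a one-line sort. The sketch below uses hypothetical scores and an `id | score` line format mirroring the example above; the organizers' submission instructions define the exact required format.

```python
def ranked_list(scores):
    """Sort (object_id, similarity) pairs by decreasing similarity and
    format them as 'id | score' lines with five decimal places."""
    ordered = sorted(scores.items(), key=lambda kv: -kv[1])
    return ["{} | {:.5f}".format(oid, s) for oid, s in ordered]

# Hypothetical similarity scores for a single query.
lines = ranked_list({144: 1.0, 45: 0.79915, 24: 0.87221})
```

One ranked list per query object, concatenated in query order, would then form a participant's submitted results.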
The track schedule is as follows:

| Date | Milestone |
|---|---|
| February 01 | A subset of the final test set will be available online. |
| February 08 | Registration deadline: please register before this date. |
| February 08 | Distribution of the final test sets; participants can start the retrieval contest. |
| March 04 | Submission of results (ranked lists) and a one-page description of each method. |
| March 07 | Release of evaluation results. |
| March 07 | Track finished; results ready for the track paper. |
| March 07-11 | Review of the track paper by the participants. |
| March 15 | Submission of track papers for review. |
| March 22 | All reviews due; feedback and notifications. |
| April 01 | Submission of camera-ready track papers. |
| May 07-08 | EG 3DOR Workshop, including SHREC'2016. |
[1] P. F. Proença, F. Gaspar, and M. S. Dias, “Good Appearance and 3D Shape Descriptors for Object Category Recognition,” International Journal on Artificial Intelligence Tools, vol. 24, no. 4, 2015. DOI: 10.1142/S0218213015400175.
[2] “RGB-D ISCTE-IUL Dataset,” http://dataset.mldc.pt/index.html#overview
[3] P. Shilane, P. Min, M. Kazhdan, and T. Funkhouser, “The Princeton Shape Benchmark,” in Proc. Shape Modeling International, 2004, pp. 167–178.
[4] B. Li, Y. Lu, C. Li, A. Godil, T. Schreck, M. Aono, M. Burtscher, H. Fu, T. Furuya, H. Johan, J. Liu, R. Ohbuchi, A. Tatsuma, and C. Zou, “Extended Large Scale Sketch-Based 3D Shape Retrieval,” pp. 121–130.
[5] P. B. Pascoal, P. Proença, F. Gaspar, M. S. Dias, F. Teixeira, A. Ferreira, V. Seib, N. Link, D. Paulus, A. Tatsuma, and M. Aono, “Retrieval of Objects Captured with Kinect One Camera,” in Eurographics Workshop on 3D Object Retrieval, 2015.