Manual for the 3D Classification Survey

Abstract

We compiled a set of publicly available neural networks for the classification of 3D models. The code works with the ModelNet40 and ShapeNetCore datasets, which are also available online. This manual explains how to convert the datasets and how to train and test the networks.

Requirements

To run the code you will need a computer running a Linux operating system with an NVIDIA GPU.

You will need to install:

- Docker
- the NVIDIA GPU driver and the NVIDIA Docker runtime, so that containers can access the GPU

Each neural network is an independent Docker image, and all of its dependencies are installed when the image is built. All code is written in Python.
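To verify that the GPU is visible from inside a container, you can run a quick check like the one below (the CUDA image tag is only an example; on older Docker versions you may need the nvidia-docker wrapper instead of the --gpus flag):

    docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi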

Datasets Setup

The code is made to work with the ModelNet40 and ShapeNetCore datasets. The easiest way to run it on a custom dataset is to restructure your data so that it mirrors the structure of one of these datasets.
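For reference, ModelNet40 is organised as one directory per category, each split into train and test subdirectories containing OFF mesh files (the file numbering below is illustrative):

    ModelNet40/
        airplane/
            train/
                airplane_0001.off
                ...
            test/
                airplane_0627.off
                ...
        bathtub/
        ...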

General Setup

You can download all the code from the paper’s webpage.

Each network is implemented as a separate Docker image. To learn more about Docker, images and containers, see the official Docker documentation.

Each neural network is contained in its own directory under /dockers. None of the networks accepts mesh files directly as input, so some data conversion is required. All data conversion is implemented in a Docker image with the same structure as the neural network images themselves; the code for data conversion is located in /dockers/data_conversion.

Each directory contains two important files, config.ini and run.sh, which you will need to open and edit. Another important file is the Dockerfile, which contains the definition of the Docker image. The remaining files are those that differ from the original network implementation; the original network code is downloaded automatically when the image is built.

run.sh is a runnable script which builds the Docker image, runs the Docker container, and executes the neural network (training and evaluation) or the data conversion. You will need to set up a couple of variables at the top of this script, most importantly the host paths mapped into the container; an illustrative sketch follows.
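A run.sh of this shape could look as follows (the variable names, image name, and container paths are hypothetical; only the build-run-execute structure is taken from this manual):

    #!/bin/bash
    # Hypothetical example; adjust names and paths to the actual run.sh.
    name="mvcnn"                          # name of the Docker image to build
    dataset_path="/home/user/ModelNet40"  # dataset location on the host
    out_path="/home/user/logs"            # logging directory on the host

    # Build the image from the Dockerfile in this directory.
    docker build -t "$name" .

    # Run the container; -v maps host paths to paths inside the container
    # (see the note on paths in the Data conversion section).
    docker run --rm --gpus all \
        -v "$dataset_path":/data \
        -v "$out_path":/logs \
        "$name"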

config.ini contains most of the relevant parameters of the network or data conversion. The file is split into sections, each started by a [SECTION] header; each following line then sets one parameter in the format key = value. You can find an explanation of the network parameters in later sections.
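For example, a section could look like the following (the section name and keys are illustrative; see the later sections for the actual parameters):

    [DATASET]
    # Illustrative keys; see the later sections for the actual parameters.
    data = /data
    log_dir = /logs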

Data conversion

To convert your dataset, set the parameters described above and then run the run.sh script in your console. This will convert the dataset to various formats directly readable by the neural networks.

Parameters for data conversion in the config.ini file:

Note that the paths are as seen from the running Docker container. The real paths on the host system are determined by the volume mapping (-v host_path:path_in_container in the run.sh file).
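For instance, with a hypothetical mapping like the one below, a dataset stored at /home/user/ModelNet40 on the host would be referred to as /data in config.ini:

    # In run.sh (illustrative paths and image name):
    docker run -v /home/user/ModelNet40:/data my_conversion_image

    # In config.ini, the dataset path is then the container-side path:
    # data = /data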

For more detail about individual data conversion scripts, continue here.

Neural Networks

Each of the neural networks is implemented in Python, but each uses a different framework; this is why we use the Docker infrastructure. We provide a unified framework to easily train and test the networks without changing their code. This section briefly introduces the networks used and some of their most important parameters.

Parameters common to all neural networks:
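As an illustration, a network section of config.ini might contain entries like the following (only test is confirmed elsewhere in this manual; the remaining keys are hypothetical):

    [NETWORK]
    # Hypothetical keys except test, which the Logging and Evaluation
    # section references; adjust to the keys in the actual config.ini.
    data = /data        # dataset path as seen inside the container
    log_dir = /logs     # logging directory for .csv files and graphs
    test = False        # True = only evaluate an already trained network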

For more details about individual networks, continue here.

Logging and Evaluation

Our framework offers some basic logging options. It saves several .csv files to the logging directory. The logger keeps track of the training time, the training epochs, and several other values.

By default, four values are tracked: training loss, training accuracy, test loss, and test accuracy. Evaluation on the test set is performed after each epoch of training. Some basic graphs are also created with the matplotlib library and saved during training.

When testing an already trained network (either by setting test = True in config.ini, or automatically after training ends), an evaluation text file [network name].txt is saved, containing the true and predicted category for each model, together with a simple confusion matrix visualisation ([network name].html). Additional evaluation statistics, such as per-category accuracies, can be computed from the evaluation text file by manually running the dockers/_common/Evaluation_tools.py script.
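If you prefer to post-process the evaluation file yourself, the sketch below computes per-category accuracy, assuming each line holds a true and a predicted category (the exact layout may differ; check Evaluation_tools.py for the authoritative parser):

    # per_category_accuracy.py -- illustrative sketch, not part of the framework.
    # Assumes each non-empty line of the evaluation file contains
    # "<true_category> <predicted_category>"; adjust the parsing if the
    # actual layout produced by the framework differs.
    import sys
    from collections import defaultdict

    correct = defaultdict(int)
    total = defaultdict(int)

    with open(sys.argv[1]) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 2:
                continue
            true_cat, predicted_cat = parts[0], parts[1]
            total[true_cat] += 1
            if predicted_cat == true_cat:
                correct[true_cat] += 1

    for cat in sorted(total):
        print(f"{cat}: {correct[cat] / total[cat]:.3f}")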