# Installation guide
To reproduce our results, please install the 3DTrans repository as detailed in its installation guide [here](https://github.com/PJLab-ADG/3DTrans/blob/master/docs/INSTALL.md).
## Configuration
We ran the repository and its models with the following configuration. We strongly advise using a Python virtual environment, as the 3DTrans repo has some quirky dependency requirements; the configuration below was built around numpy 1.19.4:
- GCC 10.2.0
- Python 3.8.6
- CUDA 11.1
- cuDNN 8.0.4.30
- torch 1.8.1+cu111
- torchaudio 0.8.1
- torchvision 0.9.1+cu111
- waymo-open-dataset-tf-2-4-0 (Needed for dataset evaluation and automatically installs tensorflow)
- tensorflow 2.4.0
- tensorboardX 2.6
- spconv-cu111
- SharedArray 3.1.0
- scikit-image 0.19.3
- PyYAML 6.0.1 (`yaml.load` calls need to be swapped to `yaml.full_load` or `yaml.safe_load`; see the PyYAML documentation and the sketch after this list)
- protobuf 3.19.6
- Pillow 9.2.0
- opencv-python 4.8.1.78
- h5py 2.10.0
- numba 0.56.4
- numpy 1.19.4
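As noted in the PyYAML item above, PyYAML 6 removed the default loader, so one-argument `yaml.load(f)` calls in the 3DTrans code fail until they are swapped. A minimal sketch of the swap (the config path is hypothetical):

```python
import yaml

# PyYAML 6 makes the Loader argument mandatory, so the old call raises a TypeError:
#   cfg = yaml.load(f)                      # breaks on PyYAML 6.0.1
# Swap it for safe_load (or full_load) wherever it appears:
with open("cfgs/some_config.yaml") as f:    # hypothetical path
    cfg = yaml.safe_load(f)                 # or: cfg = yaml.full_load(f)
print(cfg)
```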
To train the models we used 2 [NVIDIA A100 GPUs](https://www.nvidia.com/en-us/data-center/a100/) with 160 GB of video RAM in total, 4 CPU cores per GPU, and 60 GB of system memory.
After installation, please configure the ONCE and Waymo datasets as detailed in the repository's dedicated dataset [README](https://github.com/PJLab-ADG/3DTrans/blob/master/docs/GETTING_STARTED_DB.md). We used a small subset of the data and defined the split with the txt files in the ImageSets folder inside each dataset's dedicated folder.
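As a rough illustration of how such a reduced split can be produced (the helper below is ours, not part of 3DTrans; the file paths and entry count are hypothetical, and we simply assume one frame/sequence ID per line, as in the provided txt files):

```python
# Hypothetical helper: write a reduced ImageSets split that keeps only the
# first n entries of an existing split file.
from pathlib import Path

def make_small_split(src: str, dst: str, n: int) -> None:
    ids = [line for line in Path(src).read_text().splitlines() if line.strip()]
    Path(dst).write_text("\n".join(ids[:n]) + "\n")

# e.g. make_small_split("data/once/ImageSets/train.txt",
#                       "data/once/ImageSets/train_small.txt", 20)
```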
In order to run the PointContrast pre-training model you need to merge labels; we have provided a script for this, mergeClasses.py, in the preProcessScripts folder.
Please see the file's comments for usage. The script must be run on once_infos_train.pkl, once_dbinfos_train.pkl, and once_infos_val.pkl, which are generated by the generate_infos command described in the [README](https://github.com/PJLab-ADG/3DTrans/blob/master/docs/GETTING_STARTED_DB.md).
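Use the provided mergeClasses.py for the actual preprocessing; purely as an illustration of what merging labels in these pkl files involves, here is a minimal sketch that assumes the per-frame annotations sit under an `annos` key with a `name` array (the class mapping and file layout are assumptions, not the script's real behaviour):

```python
# Illustrative sketch only -- use preProcessScripts/mergeClasses.py for the real
# preprocessing. CLASS_MAP and the pkl layout are assumptions; once_dbinfos_train.pkl
# (class name -> sample list) needs analogous handling.
import pickle
import numpy as np

CLASS_MAP = {"Bus": "Car", "Truck": "Car"}   # hypothetical merge mapping

def merge_info_classes(path: str) -> None:
    with open(path, "rb") as f:
        infos = pickle.load(f)
    for info in infos:
        if "annos" in info:
            names = info["annos"]["name"]
            info["annos"]["name"] = np.array([CLASS_MAP.get(n, n) for n in names])
    with open(path, "wb") as f:
        pickle.dump(infos, f)

# merge_info_classes("data/once/once_infos_train.pkl")
```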
Additionally, in the once_dataset.py file located [here](https://github.com/PJLab-ADG/3DTrans/blob/master/pcdet/datasets/once/once_dataset.py), the list of ignored sets on line 416 needs to be changed so that the raw_small split is not ignored.
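We cannot quote the exact code here, but the change amounts to removing `raw_small` from that list of ignored splits; schematically (the variable name is illustrative and may differ in the actual file):

```python
# Around line 416 of once_dataset.py -- names are illustrative.
# Originally something like:
#   ignored_sets = ['raw_small', 'raw_medium', 'raw_large']
# Change it so the raw_small split is no longer skipped:
ignored_sets = ['raw_medium', 'raw_large']
```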
# Debugging and troubleshooting
If errors are encountered when running the paradigm, we would advise either posting an issue in the 3DTrans repo or searching for/posting the error on the OpenPCDet repository located [here](https://github.com/open-mmlab/OpenPCDet), since 3DTrans builds on that repository.
Moreover, debugging from the source installation and properly configuring CUDA and TensorFlow before starting any testing is crucial. Make use of the TensorFlow installation guide located [here](https://www.tensorflow.org/install/pip) and ensure that the following command detects both GPUs:
`python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"`
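In the same spirit (our suggestion, not from the 3DTrans docs), it is worth confirming that PyTorch also sees both GPUs and matches the CUDA 11.1 build before launching training:

```python
import torch

print(torch.__version__)           # expect 1.8.1+cu111
print(torch.version.cuda)          # expect 11.1
print(torch.cuda.is_available())   # expect True
print(torch.cuda.device_count())   # expect 2 (both A100s)
```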