Code accompanying the report for CS2309: Research Methodology
DuckieNet: Integrating Planning with Semantic Segmentation for Goal-Directed Autonomous Navigation in Crowded Environments, by Chen Yuan Bo and Larry Law
DuckieNet requires Keras v2.2.4, TensorFlow 1.15, Python 3.6, and ROS Kinetic.
Install the prerequisite packages via
conda env create -f environment.yaml
conda activate duckienet
conda env list  # verify that the environment was installed correctly
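To double-check that the pinned versions above resolved correctly, a quick sanity check from inside the environment (a minimal sketch, not part of the repository) can help:

```python
# Sanity check: confirm the versions listed above are what the
# `duckienet` environment actually resolved. Not part of the repo.
import sys
import tensorflow as tf
import keras

assert sys.version_info[:2] == (3, 6), f"unexpected Python {sys.version.split()[0]}"
assert tf.__version__.startswith("1.15"), tf.__version__
assert keras.__version__ == "2.2.4", keras.__version__
print("Environment looks good.")
```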
Important: Remember to swap out `path_to_repo` with your own repository path! Do a find-and-replace of all occurrences across the whole repository.
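If you would rather script the substitution, a sketch along these lines works (the helper below is hypothetical, not part of the repository; adjust the repository path and file extensions to your setup):

```python
# Hypothetical helper (not part of the repo): replace the `path_to_repo`
# placeholder with your actual repository path in all text files.
from pathlib import Path

REPO = Path("/home/you/duckienet")  # <-- your repository path (assumption)
PLACEHOLDER = "path_to_repo"

for f in REPO.rglob("*"):
    if not f.is_file() or f.suffix not in {".py", ".yaml", ".ipynb", ".sh", ".txt", ".md"}:
        continue
    text = f.read_text(errors="ignore")
    if PLACEHOLDER in text:
        f.write_text(text.replace(PLACEHOLDER, str(REPO)))
        print(f"updated {f}")
```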
- Straight Road Experiments on DuckieNet

  ./gym-duckietown/duckienet.py --seg --map-name straight_road

- Straight Road Dynamic Experiments on DuckieNet

  ./gym-duckietown/duckienet.py --seg --map-name straight_road_moving

- DuckieTown Experiment on DuckieNet

  ./gym-duckietown/duckienet.py --seg --map-name udem1
You can vary the difficulty by choosing the starting tile in `./gym-duckietown/gym_duckietown/maps/udem1.yaml`.
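To see which tiles are available before editing the map, the snippet below lists the drivable tile coordinates. It assumes the standard gym-duckietown map schema, in which `tiles` is a row-major 2D list of tile-type strings; verify against your copy of the file:

```python
# Sketch: print drivable tile coordinates in udem1.yaml so you can pick a
# starting tile. Assumes the standard gym-duckietown map schema.
import yaml

NON_DRIVABLE = {"asphalt", "grass", "floor"}

with open("./gym-duckietown/gym_duckietown/maps/udem1.yaml") as f:
    map_data = yaml.safe_load(f)

for j, row in enumerate(map_data["tiles"]):
    for i, kind in enumerate(row):
        if kind not in NON_DRIVABLE:
            print(f"({i}, {j}): {kind}")
```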
To test the experiments on I-Net, simply use the I-Net model instead of the DuckieNet model and remove the `--seg` flag.
The data for the semantic segmentation model comprises RGB images as inputs and segmentation labels as outputs.
- Use `generate_valid_positions.ipynb` inside the `./gym-duckietown` folder to generate valid spawn positions within DuckieTown.
- Generate segmentation labels:
  a. Swap out the `textures/` and `meshes/` folders inside `./gym-duckietown/gym_duckietown/` with the `textures/` and `meshes/` folders inside `./segmentation_textures/`.
  b. Use `generate_segmented_images.ipynb` to generate the corresponding segmentation labels.
- Generate RGB images:
  a. Swap the `textures/` and `meshes/` folders back to the original folders.
  b. Use `generate_segmented_images.ipynb` to generate the corresponding RGB images.
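Because the RGB images and their segmentation labels come from two separate passes, it is easy to end up with mismatched sets. A small check along these lines catches that early (the `data/rgb` and `data/seg` paths are placeholders; point them at wherever the notebooks saved their outputs):

```python
# Sanity check: every RGB image should have a matching segmentation label.
# Directory names are placeholders for the notebooks' actual output paths.
from pathlib import Path

rgb = {p.name for p in Path("data/rgb").glob("*.png")}
seg = {p.name for p in Path("data/seg").glob("*.png")}

print(f"{len(rgb)} RGB images, {len(seg)} segmentation labels")
print("RGB images without labels:", sorted(rgb - seg)[:10])
print("labels without RGB images:", sorted(seg - rgb)[:10])
```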
The data for DuckieNet is prepared by following the steps in section 4.1.1 of the report with the command
./gym-duckietown/generate_data.py --map-name udem1 --save --counter-start 0
Data is saved in `./gym-duckietown/data`.
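A quick way to confirm the run produced what you expect is to count what landed in the data folder; the sketch below assumes nothing about the file format beyond files on disk:

```python
# Count the files generate_data.py wrote, grouped by extension.
from collections import Counter
from pathlib import Path

counts = Counter(p.suffix for p in Path("./gym-duckietown/data").rglob("*") if p.is_file())
for ext, n in counts.most_common():
    print(f"{ext or '(no extension)'}: {n}")
```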
The training of DuckieNet comprises two phases.
- Training of BiSeNet. For more information, refer to the README file in the `bisenet` folder.
$ cd bisenet
$ export CUDA_VISIBLE_DEVICES=0,1
$ python -m torch.distributed.launch --nproc_per_node=2 tools/train_amp.py --model bisenetv2 # or bisenetv1
- Training of DuckieNet
$ cd intention_net/intention_net
$ python main.py --ds DUCKIETOWN --mode DLM --input_frame NORMAL --seg
To train Intention-Net (which does not use segmented images), remove the `--seg` flag.
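Conceptually, the `--seg` flag only changes which frames are fed to the network: DuckieNet trains on the segmented frames, while Intention-Net trains on the raw RGB frames. A toy sketch of that switch (the names and directory layout below are illustrative, not the repository's actual API):

```python
# Illustrative only: names and layout are hypothetical, not the repo's API.
# DuckieNet trains on segmented frames; Intention-Net trains on raw RGB.
from pathlib import Path

def training_frames(data_dir, use_seg):
    """Return the frame paths a run would train on, given the --seg choice."""
    subdir = "seg" if use_seg else "rgb"  # hypothetical layout
    return sorted(Path(data_dir, subdir).glob("*.png"))
```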
Simply run the commands in the 'Running Experiments' section, or run the model on any of the DuckieTown maps in `./gym-duckietown/gym_duckietown/maps/`.
Metrics (as stated in section 4.1 of the report) will be logged in `log.txt`.
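For a quick summary instead of scrolling the raw log, something like the following can help. It assumes each metric is logged as a `name: value` line, which may not match the actual format; adjust the parsing accordingly:

```python
# Rough log summarizer. Assumes `name: value` lines; adjust the parsing
# if the actual log format differs.
from collections import defaultdict

metrics = defaultdict(list)
with open("log.txt") as f:
    for line in f:
        name, sep, value = line.partition(":")
        if not sep:
            continue
        try:
            metrics[name.strip()].append(float(value))
        except ValueError:
            pass  # not a numeric metric line

for name, values in metrics.items():
    print(f"{name}: mean={sum(values) / len(values):.4f} over {len(values)} entries")
```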
We used materials from the following code repositories.
Wei Gao, David Hsu, Wee Sun Lee, Shengmei Shen, & Karthikk Subramanian. (2017). Intention-Net: Integrating Planning and Deep Learning for Goal-Directed Autonomous Navigation. [https://github.com/AdaCompNUS/intention_net/tree/master/intention_net]
Atsushi Sakai, Daniel Ingram, Joseph Dinius, Karan Chawla, Antonin Raffin, & Alexis Paques. (2018). PythonRobotics: a Python code collection of robotics algorithms. [https://github.com/AtsushiSakai/PythonRobotics]
Changqian Yu, Jingbo Wang, Chao Peng, Changxin Gao, Gang Yu, & Nong Sang. (2018). BiSeNet: Bilateral Segmentation Network for Real-time Semantic Segmentation. [https://github.com/CoinCheung/BiSeNet]
L. Paull, J. Tani, H. Ahn, J. Alonso-Mora, L. Carlone, M. Cap, Y. F. Chen, C. Choi, J. Dusek, Y. Fang, D. Hoehener, S. Liu, M. Novitzky, I. F. Okuyama, J. Pazis, G. Rosman, V. Varricchio, H. Wang, D. Yershov, H. Zhao, M. Benjamin, C. Carr, M. Zuber, S. Karaman, E. Frazzoli, D. Del Vecchio, D. Rus, J. How, J. Leonard, & A. Censi (2017). Duckietown: An open, inexpensive and flexible platform for autonomy education and research. In 2017 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1497-1504). [https://github.com/duckietown/gym-duckietown]