Scientific Publications
As an academic scholar, disseminating my research findings is an important part of my work. To this end, I focus on top-tier peer-reviewed venues. Below is a list of all my peer-reviewed scientific publications, including conference proceedings, journal articles, and books.
2022
Routray, Prasanna Kumar; Kanade, Aditya Sanjiv; Tiwari, Kshitij; Pounds, Pauline; Muniyandi, Manivannan
Towards multidimensional textural perception and classification through whisker (Proceedings Article)
In: 2022 IEEE International Symposium on Robotic and Sensors Environments (ROSE), pp. 1–7, IEEE 2022.
@inproceedings{routray2022towards,
title = {Towards multidimensional textural perception and classification through whisker},
author = {Prasanna Kumar Routray and Aditya Sanjiv Kanade and Kshitij Tiwari and Pauline Pounds and Manivannan Muniyandi},
url = {https://ieeexplore.ieee.org/abstract/document/9977409},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {2022 IEEE International Symposium on Robotic and Sensors Environments (ROSE)},
pages = {1--7},
organization = {IEEE},
keywords = {Bioinspired, Touch Sensing},
pubstate = {published},
tppubtype = {inproceedings}
}
Tiwari, Kshitij; Sakcak, Basak; Routray, Prasanna; Manivannan, M; LaValle, Steven M
Visibility-inspired models of touch sensors for navigation (Proceedings Article)
In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 13151–13158, IEEE 2022.
@inproceedings{tiwari2022visibilityb,
title = {Visibility-inspired models of touch sensors for navigation},
author = {Kshitij Tiwari and Basak Sakcak and Prasanna Routray and M Manivannan and Steven M LaValle},
url = {https://ieeexplore.ieee.org/abstract/document/9981084},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
pages = {13151--13158},
organization = {IEEE},
keywords = {Bioinspired, Touch Sensing},
pubstate = {published},
tppubtype = {inproceedings}
}
2021
Pearson, Martin J; Dora, Shirin; Struckmeier, Oliver; Knowles, Thomas C; Mitchinson, Ben; Tiwari, Kshitij; Kyrki, Ville; Bohte, Sander; Pennartz, Cyriel MA
Multimodal Representation Learning for Place Recognition Using Deep Hebbian Predictive Coding (Journal Article)
In: Frontiers in Robotics and AI, vol. 8, 2021.
@article{pearson2021multimodal,
title = {Multimodal Representation Learning for Place Recognition Using Deep Hebbian Predictive Coding},
author = {Martin J Pearson and Shirin Dora and Oliver Struckmeier and Thomas C Knowles and Ben Mitchinson and Kshitij Tiwari and Ville Kyrki and Sander Bohte and Cyriel MA Pennartz},
url = {https://www.frontiersin.org/articles/10.3389/frobt.2021.732023/full},
doi = {10.3389/frobt.2021.732023},
year = {2021},
date = {2021-01-01},
urldate = {2021-01-01},
journal = {Frontiers in Robotics and AI},
volume = {8},
publisher = {Frontiers Media SA},
abstract = {Recognising familiar places is a competence required in many engineering applications that interact with the real world such as robot navigation. Combining information from different sensory sources promotes robustness and accuracy of place recognition. However, mismatch in data registration, dimensionality, and timing between modalities remain challenging problems in multisensory place recognition. Spurious data generated by sensor drop-out in multisensory environments is particularly problematic and often resolved through ad hoc and brittle solutions. An effective approach to these problems is demonstrated by animals as they gracefully move through the world. Therefore, we take a neuro-ethological approach by adopting self-supervised representation learning based on a neuroscientific model of visual cortex known as predictive coding. We demonstrate how this parsimonious network algorithm which is trained using a local learning rule can be extended to combine visual and tactile sensory cues from a biomimetic robot as it naturally explores a visually aliased environment. The place recognition performance obtained using joint latent representations generated by the network is significantly better than contemporary representation learning techniques. Further, we see evidence of improved robustness at place recognition in the face of unimodal sensor drop-out. The proposed multimodal deep predictive coding algorithm presented is also linearly extensible to accommodate more than two sensory modalities, thereby providing an intriguing example of the value of neuro-biologically plausible representation learning for multimodal navigation.},
keywords = {Bioinspired, Touch Sensing},
pubstate = {published},
tppubtype = {article}
}