Deep Learning-based human activity recognition using RGB images in Human-robot collaboration

Babatunde Keshinro, Younho Seong, Sun Yi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In human-robot interaction, robots must accurately predict human intentions to ensure safety and effectiveness. Hidden Markov Models, Bayesian filtering, and deep learning methods have been used to predict human intentions, but few studies have explored deep learning methods for recognizing varied human intentions. Our study aims to evaluate the performance of a human intent recognition inference algorithm and its impact on the human-robot team in collaborative tasks. Two deep learning algorithms, ConvLSTM and LRCN, were used to predict human intention from a dataset of 10 participants performing Pick, Throw, Wave, and Carry actions. The ConvLSTM method achieved a prediction accuracy of 74%, whereas the LRCN method achieved a lower accuracy of 25%. These results show that deep learning methods using RGB images can predict human intent with high accuracy, and the proposed method successfully infers the intents underlying human behavior.
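
As a rough illustration of the ConvLSTM approach named in the abstract, the sketch below builds a small ConvLSTM2D classifier over short RGB clips using TensorFlow/Keras. The clip length, frame resolution, layer sizes, and training settings are illustrative assumptions, not the authors' configuration; only the four action labels come from the abstract.

```python
# Minimal sketch (not the authors' code) of a ConvLSTM action classifier
# over short RGB clips, assuming TensorFlow/Keras is available.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES = 20              # frames sampled per clip (assumption)
FRAME_H, FRAME_W = 64, 64    # resized frame resolution (assumption)
CLASSES = ["Pick", "Throw", "Wave", "Carry"]  # actions listed in the abstract

model = models.Sequential([
    layers.Input(shape=(NUM_FRAMES, FRAME_H, FRAME_W, 3)),
    # ConvLSTM2D applies convolutions inside the recurrent cell, so spatial
    # structure is preserved while temporal dependencies across frames are learned.
    layers.ConvLSTM2D(filters=16, kernel_size=(3, 3),
                      activation="tanh", return_sequences=True),
    layers.MaxPooling3D(pool_size=(1, 2, 2)),
    layers.ConvLSTM2D(filters=32, kernel_size=(3, 3),
                      activation="tanh", return_sequences=False),
    layers.GlobalAveragePooling2D(),
    layers.Dense(len(CLASSES), activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

By contrast, an LRCN-style model would extract per-frame features with a CNN wrapped in TimeDistributed layers and feed the resulting feature sequence to an LSTM before the softmax classifier.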
Original language: English
Title of host publication: Unknown book
Pages: 1548-1553
Volume: 66
Edition: 1
State: Published - 2022
