TY - GEN
T1 - Drone-based face recognition using deep learning
AU - Deeb, Adam
AU - Roy, Kaushik
AU - Edoh, Kossi D.
PY - 2021
Y1 - 2021
N2 - The next phase of facial recognition research continues to seek ways of improving model accuracy when input images are taken from less-than-ideal angles and distances, are of lower quality, or show little of the subject's facial information. In this paper, we used convolutional neural network (CNN) models to address these challenges and attain an improved top accuracy. We compared three deep learning models, VGG16, VGG19, and InceptionResNetV2, on several facial recognition tasks using the DroneFace dataset. These three CNNs are among the most accurate as benchmarked on the ImageNet database, and we sought to show that they can achieve high drone-based face recognition accuracy. After applying the three CNNs to the image dataset used in the study, we compared the accuracy of each model to determine which best handled and interpreted images taken from a drone. Specifically, we tested how the heights at which the drone captured the images affected each model's accuracy in identifying the photographed subjects. We did so by training at large heights and testing at low heights, by training at low heights and testing at large heights, and by training on a random 80% of the photographs of all subjects and testing on the remaining 20%.
UR - https://dx.doi.org/10.1007/978-981-15-3383-9_18
U2 - 10.1007/978-981-15-3383-9_18
DO - 10.1007/978-981-15-3383-9_18
M3 - Conference contribution
BT - Unknown book
PB - Springer
ER -