Human Activity Recognition (HAR, for short) is an important component of surveillance and security programs. HAR can be done easily, directly and effectively through surveillance cameras, whether indoors or outdoors. However, cameras are helpless when the space or room to be monitored is dark, dimly lit, hazy or smoke-filled. What is the alternative? That is the question Guo et al. asked themselves, and sure enough they hit upon an answer.
They suggest using ultrasonic sensors. Their recent paper in Applied Physics Letters describes how this can be done. Sound waves propagate unhindered even when the surroundings are dark, dimly lit, hazy or smoke-filled. Guo's team used a two-dimensional array of acoustic receivers and an algorithm based on a convolutional neural network (CNN) to process the signals. CNNs have long been used to recognise gestures and the body movements of people engaged in routine activities. They can extract specific features from the raw signals of complex body movements and classify those features into activities such as standing, sitting, falling and walking.
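The pipeline just described, acoustic frames in, activity labels out, can be sketched in a few lines. The sketch below is purely illustrative: the layer sizes, filter counts and random weights are hypothetical and do not reproduce the network in Guo et al.'s paper; it only shows the conv → feature → classify pattern the text refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

ACTIVITIES = ["standing", "sitting", "walking", "falling"]

def conv2d_valid(frame, kernel):
    """Naive 2-D 'valid' convolution over a single-channel frame."""
    kh, kw = kernel.shape
    h, w = frame.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    return out

def classify(frame, kernels, weights, bias):
    """Conv -> ReLU -> global average pool -> linear head -> softmax."""
    feats = np.array([np.maximum(conv2d_valid(frame, k), 0).mean()
                      for k in kernels])      # one pooled feature per filter
    logits = weights @ feats + bias           # linear classifier over features
    probs = np.exp(logits - logits.max())     # numerically stable softmax
    probs /= probs.sum()
    return ACTIVITIES[int(np.argmax(probs))], probs

# A fake 16x16 frame of echo intensities, standing in for the receiver grid.
frame = rng.normal(size=(16, 16))
kernels = rng.normal(size=(8, 3, 3))          # 8 hypothetical 3x3 filters
weights = rng.normal(size=(4, 8))             # 4 activity classes
bias = np.zeros(4)

label, probs = classify(frame, kernels, weights, bias)
print(label, probs.round(3))
```

In a trained system the kernels and weights would of course be learned from labelled echo recordings rather than drawn at random.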
Guo and his team used an acoustic grid roughly 40 × 40 cm in size, which held 256 acoustic receivers in a 16 × 16 array and four ultrasonic transmitters at the centre. The transmitters emitted high-frequency sinusoidal acoustic signals inclined at 45°, with an effective reach of 4 metres. The human volunteers selected for the experiment varied in height and weight. One at a time, they repeatedly performed activities such as standing, sitting, falling and walking at a distance of 2 metres from the device. The ultrasonic sensors collected the reflected signals, and the CNN did the rest of the work. Guo et al. found that the accuracy of HAR was 100% for simple static postures such as standing and sitting, and 97.5% for the others. They also report that the higher the number of sensors and iterations, the higher the recognition accuracy for complex activities such as walking and falling.
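The distances above imply simple round-trip timing: an echo from a volunteer 2 metres away returns after roughly 12 milliseconds. A quick back-of-the-envelope check (the speed of sound used is a generic room-temperature value, not a figure from the paper):

```python
# Round-trip echo timing for an ultrasonic ranging setup.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degC; generic, not from the paper

def echo_delay(distance_m):
    """Time for an ultrasonic pulse to reach a target and return."""
    return 2 * distance_m / SPEED_OF_SOUND  # seconds

t_volunteer = echo_delay(2.0)   # volunteer at 2 m: ~11.7 ms
t_max = echo_delay(4.0)         # reported 4 m effective reach: ~23.3 ms
print(round(t_volunteer * 1000, 2), "ms,", round(t_max * 1000, 2), "ms")
```

Delays this short are what make it practical to capture many echo frames per second and feed them to the classifier as a time series.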
Guo and his team are of the opinion that acoustic surveillance is less intrusive of privacy than the visual mode. Be that as it may, Eavesdropping 4G has arrived!
REFERENCES:
1. Deep Learning Models for Human Activity Recognition.
2. Deep learning for sensor-based activity recognition: A survey, Pattern Recognition Letters, vol. 119, pp. 3-11 (2019).
3. Convolutional neural networks for human activity recognition using body-worn sensors, Rueda et al., Informatics, 5(2), 26 (2018).
4. A single feature for human activity recognition using two-dimensional acoustic array, Guo et al., Applied Physics Letters, vol. 114, 214101 (2019).