Human Activity Recognition (HAR) falls under the umbrella of a rapidly emerging research domain, Human-Computer Interaction (HCI), which tries to integrate something as complex as human behaviour and its social context with computer systems. HAR is a powerful method with wide applications in the healthcare domain, for example, recognizing the daily activities of elderly people and detecting possible emergencies, anomalies and deviations in one's routine, since their daily activities can be tracked easily and assistance provided. In this paper, we model a system for Human Activity Recognition harnessing smartphone sensor data that is robust against noisy features. We classify the six basic human actions - standing, sitting, lying, walking, walking downstairs, and walking upstairs - based on their sensor signatures.
The dataset is procured from the UCI Machine Learning repository. It comprises raw sensor data from the smartphone-embedded sensors of 30 subjects in the age group 19-48. The subjects performed six basic activities - standing, sitting, lying, walking, walking downstairs and walking upstairs. The accelerometer (3-axial linear acceleration) and gyroscope (3-axial angular velocity) readings for these activities and for six postural transitions - stand-to-sit, sit-to-stand, sit-to-lie, lie-to-sit, stand-to-lie, and lie-to-stand - were recorded. Only the six basic activities were considered in our experiment. The raw dataset had 815,614 instances with six features. We de-noised the raw data, preprocessed it (steps explained in the next section) and generated 561 features for further classification.
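As a rough sketch, the 561-feature train/test split distributed with the UCI "Human Activity Recognition Using Smartphones" archive can be loaded directly with NumPy. The folder layout and file names below follow the standard UCI download; the local path is an assumption, and our own preprocessing of the raw signals is not reproduced here.

```python
import numpy as np

# Assumed local copy of the UCI HAR archive (hypothetical path).
DATA_DIR = "UCI HAR Dataset"

# Each row is one windowed recording described by 561 engineered features;
# labels are integers 1-6 for the six basic activities.
X_train = np.loadtxt(f"{DATA_DIR}/train/X_train.txt")
y_train = np.loadtxt(f"{DATA_DIR}/train/y_train.txt")
X_test = np.loadtxt(f"{DATA_DIR}/test/X_test.txt")
y_test = np.loadtxt(f"{DATA_DIR}/test/y_test.txt")

print(X_train.shape, X_test.shape)  # e.g. (7352, 561) (2947, 561)
```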
To arrive at the proposed classification pipeline, we carried out a set of experiments on the HAR dataset. The reference paper includes three different classifiers - Gaussian Naive Bayes, Logistic Regression and Boosting - and two feature extraction techniques: Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Each classifier was used in combination with the feature extraction techniques to perform the classification task, and we selected the model with the best performance. In our experiments, we found that Gaussian Naive Bayes with LDA feature extraction performs best, with an overall accuracy of 96.97%.
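A minimal sketch of that best-performing combination is shown below, assuming LDA here means Linear Discriminant Analysis (scikit-learn's LinearDiscriminantAnalysis) and reusing the X_train/X_test arrays loaded above. This is an illustration under those assumptions, not the exact implementation or preprocessing used in the paper.

```python
from sklearn.pipeline import make_pipeline
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# LDA projects the 561 features onto at most (n_classes - 1) = 5 discriminant
# directions; Gaussian Naive Bayes then classifies in that reduced space.
model = make_pipeline(LinearDiscriminantAnalysis(), GaussianNB())
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
```

Chaining the two steps in a single pipeline keeps the LDA projection fitted only on the training split, so the test accuracy is not inflated by information leakage.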