Frontiers in Robotics and AI (Front. Robot. AI), ISSN 2296-9144, Frontiers Media S.A., doi: 10.3389/frobt.2020.00071. Review: Elderly Fall Detection Systems: A Literature Survey. Xueyi Wang1*, Joshua Ellul2, George Azzopardi1. 1Department of Computer Science, Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, University of Groningen, Groningen, Netherlands; 2Computer Science, Faculty of Information & Communication Technology, University of Malta, Msida, Malta

Edited by: Soumik Sarkar, Iowa State University, United States

Reviewed by: Sambuddha Ghosal, Massachusetts Institute of Technology, United States; Carl K. Chang, Iowa State University, United States

*Correspondence: Xueyi Wang xueyi.wang@rug.nl

This article was submitted to Sensor Fusion and Machine Perception, a section of the journal Frontiers in Robotics and AI

Received: 17 December 2019; Accepted: 30 April 2020; Published: 23 June 2020. Front. Robot. AI 7:71. Copyright © 2020 Wang, Ellul and Azzopardi.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Falling is among the most damaging events elderly people may experience. With the ever-growing aging population, there is an urgent need for the development of fall detection systems. Thanks to the rapid development of sensor networks and the Internet of Things (IoT), human-computer interaction using sensor fusion has been regarded as an effective method to address the problem of fall detection. In this paper, we provide a literature survey of work conducted on elderly fall detection using sensor networks and IoT. Although various existing studies focus on fall detection with individual sensors, such as wearable devices and depth cameras, the performance of these systems is still not satisfactory, as they mostly suffer from high false alarm rates. The literature shows that fusing the signals of different sensors can result in higher accuracy and fewer false alarms, while improving the robustness of such systems. We approach this survey from different perspectives, including data collection, data transmission, sensor fusion, data analysis, security, and privacy. We also review the available benchmark data sets that have been used to quantify the performance of the proposed methods. The survey is meant to provide researchers in the field of elderly fall detection using sensor networks with a summary of progress achieved up to date and to identify areas where further effort would be beneficial.

Keywords: fall detection, Internet of Things (IoT), information system, wearable device, ambient device, sensor fusion



      1. Introduction

      More than nine percent of the population of China was aged 65 or older in 2015, and within 20 years (2017–2037) this share is expected to reach 20%1. According to the World Health Organization (WHO), around 646,000 fatal falls occur each year worldwide, the majority of which are suffered by adults older than 65 years (WHO, 2018). This makes falls the second leading cause of unintentional injury death, after road traffic injuries. Globally, falls are a major public health problem for the elderly. Needless to say, the injuries caused by falls that elderly people experience have many consequences not only for their families, but also for healthcare systems and society at large.

      As illustrated in Figure 1, Google Trends2 shows that fall detection has drawn increasing attention from both academia and industry, especially in the last couple of years, where a sudden increase can be observed. Along the same lines, the topic of fall-likelihood prediction is also gaining significance, coupled with applications focused on prevention and protection.

      Interest in fall detection over time, from January 2004 to December 2019. The data is taken from Google Trends with the search topic “fall detection.” The values are normalized by the maximum interest, such that the highest interest has a value of 100.

      El-Bendary et al. (2013) reviewed the trends and challenges of elderly fall detection and prediction. Detection techniques are concerned with recognizing falls after they occur and trigger an alarm to emergency caregivers, while predictive methods aim to forecast fall incidents before or during their occurrence, and therefore allow immediate actions, such as the activation of airbags.

      During the past decades, much effort has been put into these fields to improve the accuracy of fall detection and prediction systems as well as to decrease false alarm rates. Figure 2 shows the top 25 countries in terms of the number of publications about fall detection from 1945 to 2020. Most of the publications originate from the United States, followed by England, China, and Germany, among others. The data indicates that developed countries invest more in research in this field than others. Due to higher living standards and better medical resources, people in developed countries are more likely to have longer life expectancy, which results in a larger aging population in such countries (Bloom et al., 2011).

      (A) A map and (B) a histogram of publications on fall detection by countries and regions from 1945 to 2020.

      In this survey paper, we provide a holistic overview of fall detection systems, aimed at a broad readership wishing to become abreast of the literature in this field. Besides fall detection modeling techniques, this review covers other topics, including issues pertaining to data transmission, data storage and analysis, and security and privacy, which are equally important in the development and deployment of such systems.

      The remainder of the paper is organized as follows. In section 2, we start by introducing the types of falls and reviewing other survey papers to illustrate the research trends and challenges to date, followed by a description of our literature search strategy. Next, in section 3 we introduce hardware and software components typically used in fall detection systems. Sections 4 and 5 give an overview of fall detection methods that rely on individual sensors and on collections of sensors, respectively. In section 6, we address issues of security and privacy. Section 7 introduces projects and applications of fall detection. In section 8, we discuss current trends, challenges, open issues, and future directions. Finally, we provide a summary of the survey and draw conclusions in section 9.

      2. Types of Falls and Previous Reviews on Elderly Fall Detection

      2.1. Types of Falls

      The impact and consequences of a fall can vary drastically depending on various factors. For instance, falls that occur whilst walking, standing, sleeping, or sitting on a chair share some characteristics in common but also differ significantly from one another.

      In El-Bendary et al. (2013), the authors group the types of falls into three basic categories, namely forward, lateral, and backward. Putra et al. (2017) divided falls into a broader set of categories, namely forward, backward, left-side, right-side, blinded-forward, and blinded-backward, and in the study by Chen et al. (2018) falls are grouped into more specific categories, including fall lateral left and lie on the floor, fall lateral left and sit up from the floor, fall lateral right and lie on the floor, fall lateral right and sit up from the floor, fall forward and lie on the floor, and fall backward and lie on the floor.

      Besides the direction one takes whilst falling, another important aspect is the duration of the fall, which may be influenced by age, health and physical condition, along with the activities that the individual was undertaking. Elderly people may experience falls of longer duration because they move at lower speeds during activities of daily living. For instance, in fainting or chest pain related episodes, an elderly person might try to rest against a wall before lying on the floor. In other situations, such as injuries due to obstacles or dangerous settings (e.g., slanting or uneven pavements or surfaces), an elderly person might fall abruptly. The age and gender of the subject also play a role in the kinematics of falls.

      The characteristics of different types of falls are not taken into consideration in most of the work on fall detection surveyed here. In most of the papers to date, data sets typically contain falls that are simulated by young and healthy volunteers and do not cover all types of falls mentioned above. Such studies, therefore, do not lead to models that generalize well enough in practical settings.

      2.2. Review of Previous Survey Papers

      There are various review papers that give an account of the development of fall detection from different aspects. Due to the rapid development of smart sensors and related analytical approaches, it is necessary to revisit the trends and developments regularly. We chose the most highly cited review papers, from 2014 to 2020, based on Google Scholar and Web of Science, and discuss them below. These selected review papers demonstrate the trends, challenges, and developments in this field. Other significant review papers from before 2014 are also covered in order to give sufficient background on earlier work.

      Chaudhuri et al. (2014) conducted a systematic review of fall detection devices for people of different ages (excluding children) from several perspectives, including background, objectives, data sources, eligibility criteria, and intervention methods. More than 100 papers were selected and reviewed. The selected papers were divided into several groups based on different criteria, such as the age of subjects, method of evaluation, and devices used in the detection systems. They noted that most of the studies were based on synthetic data. Although simulated data may share common features with real falls, a system trained on such data cannot reach the same reliability as one that uses real data.

      In another survey, Zhang et al. (2015) focused on vision-based fall detection systems and their related benchmark data sets, which had not been discussed in other reviews. Vision-based approaches to fall detection were divided into four categories, namely individual single RGB cameras, infrared cameras, depth cameras, and 3D-based methods using camera arrays. Since the advent of depth cameras, such as the Microsoft Kinect, fall detection with RGB-D cameras has been extensively and thoroughly studied due to their inexpensive price and easy installation. Systems that use calibrated camera arrays also saw prominent uptake. Because such systems rely on many cameras positioned at different viewpoints, challenges related to occlusion are typically reduced substantially, resulting in lower false alarm rates. Depth cameras have gained particular popularity because, unlike RGB camera arrays, they do not require complicated calibration and are also less intrusive of privacy. Zhang et al. (2015) also reviewed different types of fall detection methods that rely on the activity/inactivity of the subjects, shape (width-to-height ratio), and motion. While that review gives a thorough overview of vision-based systems, it lacks an account of fall detection systems that rely on non-vision sensors, such as wearable and ambient ones.

      Further to the particular interest in depth cameras, Cai et al. (2017) reviewed the benchmark data sets acquired by Microsoft Kinect and similar cameras. They reviewed 46 public RGB-D data sets, 20 of which are highly used and cited. They compared and highlighted the characteristics of all data sets in terms of their suitability to certain applications. Therefore, the paper is beneficial for scientists who are looking for benchmark data sets for the evaluation of new methods or new applications.

      Based on the review provided by Chen et al. (2017a), individual depth cameras and inertial sensors seem to be the most significant approaches in vision- and non-vision-based systems, respectively. In their review, the authors concluded that fusion of both types of sensor resulted in a system that is more robust than a system relying on one type of sensor.

      The ongoing and fast development of electronics has resulted in smaller and cheaper devices. For instance, the survey by Igual et al. (2013) noted that low-cost cameras and accelerometers embedded in smartphones may offer the most sensible technological choice for the investigation of fall detection. Igual et al. (2013) identified two main trends in how research is progressing in this field, namely the use of vision and smartphone-based sensors for input and the use of machine learning for data analysis. Moreover, they reported the following three main challenges: (i) real-world deployment performance, (ii) usability, and (iii) acceptance. Usability refers to how practical the elderly find the given system. Because of privacy issues and the intrusive characteristics of some sensors, elderly people are often reluctant to live in an environment monitored by sensors. They also pointed out several issues which need to be taken into account, such as smartphone limitations (e.g., people may not carry smartphones with them all the time), privacy concerns, and the lack of benchmark data sets of realistic falls.

      The survey papers mentioned above focus mostly on the different types of sensors that can be used for fall detection. To the best of our knowledge, there are no literature surveys that provide a holistic review of fall detection systems in terms of data acquisition, data analysis, data transport and storage, sensor networks and Internet of Things (IoT) platforms, as well as security and privacy, which are significant in the deployment of such systems.

      2.3. Key Results of Pioneering Papers

      In order to illustrate a timeline of fall detection development, in this section we focus on the key and pioneering papers. Through manual filtering of papers using Web of Science, one can find the trendsetting and highly cited papers in this field. By analyzing retrieved articles using CiteSpace, one can find that fall detection research first appeared in the 1990s, beginning with the work by Lord and Colvin (1991) and Williams et al. (1998). A miniature accelerometer and microcomputer chip embedded in a badge was used to detect falls (Lord and Colvin, 1991), while Williams et al. (1998) applied a piezoelectric shock sensor and a mercury tilt switch, which monitored the orientation of the body, to detect falls. At first, most studies were based on accelerometers, including the work by Bourke et al. (2007). In their work, they compared the trunk and the thigh as locations for attaching the sensor. Their results showed that a person's trunk is a better location than the thigh, and they achieved 100% specificity with a certain threshold value and a sensor located on the trunk. This method was the state-of-the-art at the time, which undoubtedly helped it become the most highly cited paper in the field.

      At the time the trend was to use individual sensors for detection, within which another key paper by Bourke and Lyons (2008) explored the problem at hand using a single gyroscope that measures three variables, namely angular velocity, angular acceleration, and the change in the subject's trunk-angle. Three thresholds were set to distinguish falls from non-falls: a fall is flagged when the angular velocity exceeds the first threshold, the angular acceleration exceeds the second, and the change in trunk-angle exceeds the third. They reported an accuracy of 100% on a data set with only four kinds of falls and 480 movements simulated by young volunteers. However, classifiers based solely on either accelerometers or gyroscopes are argued to suffer from insufficient robustness (Tsinganos and Skodras, 2018). Later, Li et al. (2009) investigated the fusion of gyroscope and accelerometer data for the classification of falls and non-falls. In their work, they demonstrated how a fusion-based approach resulted in a more robust classification. For instance, it could distinguish falls more accurately from certain fall-like activities, such as sitting down quickly and jumping, which are hard to detect using a single accelerometer. This work inspired further research on sensor fusion. These two types of sensors can nowadays be found in all smartphones (Zhang et al., 2006; Dai et al., 2010; Abbate et al., 2012).
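
      To make the cascade of conditions concrete, the following Python sketch mirrors the three-threshold rule described above. The threshold values are hypothetical placeholders, not the empirically determined values of Bourke and Lyons (2008).

      # Sketch of a three-threshold gyroscope fall detector in the spirit of
      # Bourke and Lyons (2008). The threshold values below are hypothetical
      # placeholders; the original values were determined empirically.
      ANG_VEL_TH = 3.1       # angular velocity threshold (rad/s), hypothetical
      ANG_ACC_TH = 0.05      # angular acceleration threshold (rad/s^2), hypothetical
      TRUNK_ANGLE_TH = 0.59  # change-in-trunk-angle threshold (rad), hypothetical

      def is_fall(angular_velocity, angular_acceleration, trunk_angle_change):
          # Flag a fall only when all three measurements exceed their thresholds.
          return (angular_velocity > ANG_VEL_TH
                  and angular_acceleration > ANG_ACC_TH
                  and trunk_angle_change > TRUNK_ANGLE_TH)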

      Besides the two non-vision-based types of sensors mentioned above, vision-based sensors, such as surveillance cameras, and ambience-based sensors started becoming an attractive alternative. Rougier et al. (2011b) proposed a shape matching technique to track a person's silhouette through a video sequence. The deformation of the human shape is then quantified from the silhouettes based on shape analysis methods. Finally, falls are distinguished from normal activities using a Gaussian mixture model. After surveillance cameras, depth cameras also attracted substantial attention in this field. The earliest research applying a Time-of-Flight (ToF) depth camera was conducted in 2010 by Diraco et al. (2010). They proposed a novel approach based on visual sensors, which does not require landmarks, calibration patterns, or user intervention. A ToF camera is, however, expensive and has low image resolution. Following that, the Kinect depth camera was first used in 2011 by Rougier et al. (2011a). Two features, human centroid height and body velocity, were extracted from the depth information. A simple threshold-based algorithm was applied to detect falls, and an overall success rate of 98.7% was achieved.
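
      The threshold rule on centroid height and velocity can be sketched as follows; this is an illustrative reconstruction, not the implementation of Rougier et al. (2011a), and the threshold values are hypothetical.

      # Illustrative threshold rule on human centroid height and body velocity,
      # in the spirit of Rougier et al. (2011a). Values are hypothetical.
      HEIGHT_TH = 0.4    # centroid height above the floor (m), hypothetical
      VELOCITY_TH = 1.3  # downward velocity of the centroid (m/s), hypothetical

      def detect_fall(centroid_heights, timestamps):
          # centroid_heights: per-frame height of the body centroid in meters.
          for i in range(1, len(centroid_heights)):
              dt = timestamps[i] - timestamps[i - 1]
              downward_velocity = (centroid_heights[i - 1] - centroid_heights[i]) / dt
              if centroid_heights[i] < HEIGHT_TH and downward_velocity > VELOCITY_TH:
                  return True
          return False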

      After the introduction of Kinect by Microsoft, there was a large shift in research from accelerometers to depth cameras. Accelerometers and depth cameras have become the most popular individual and combined sensors (Li et al., 2018). The combination of these two sensors achieved a substantial improvement when compared to using either sensor on its own.

      2.4. Strategy of the Literature Search

      We use two databases, namely Web of Science and Google Scholar, to search for relevant literature. Since advancements have been made at a rapid pace recently, our searches included articles published in the last 6 years (since 2014). We also consider all survey papers that were published on the topic of fall detection. Moreover, we give an account of all relevant benchmark data sets that have been used in this literature.

      For the keywords “fall detection”, 4,024 and 575,000 articles published since 2014 were found in the two databases mentioned above, respectively. In order to narrow down our search to the more relevant articles, we compiled a list of the most frequently used keywords, which we report in Table 1.

      The most frequently used keywords in the topic of fall detection.

      Wearable sensor Visual sensor Ambient sensor Sensor fusion
      Fall detection Fall detection Fall detection Fall detection
      Falls Falls Falls Falls
      Fall accident Fall accident Fall accident Fall accident
      Machine learning Machine learning Machine learning Machine learning
      Deep learning Deep learning Deep learning Deep learning
      Reinforcement learning Reinforcement learning Reinforcement learning Reinforcement learning
      Body area networks Multiple camera Ambient sensor Health monitoring
      Wearable Visual Ambient Sensor fusion
      Worn Vision-based Ambience Sensor network
      Accelerometer Kinect RF-sensing Data fusion
      Gyroscope Depth camera WiFi Multiple sensors
      Biosensor Video surveillance Radar Camera arrays
      Smart watch RGB camera Cellular Decision fusion
      Gait Infrared camera Vibration Anomaly detection
      Wearable based Health-monitoring Ambience-based IoT

      They are manually classified into four categories.

      We use the identified keywords above to generate the queries listed in Table 2 in order to make the search more specific to the three classes of sensors that we are interested in. For the retrieved articles, we assess their contributions and keep only those that are truly relevant to our survey paper. For instance, articles that focus on rehabilitation after falls, or on the causes of falls, among others, are filtered out manually. This process, which is illustrated in Figure 3, resulted in a total of 87 articles, 13 of which describe benchmark data sets.

      Search queries used in Google Scholar and Web of Science for the three types of sensor and sensor fusion.

      Sensor type Query
      Wearable-based (Topic): ((“Fall detection” OR “Fall” OR “Fall accident”) AND (“Wearable” OR “Worn” OR “Accelerometer” OR “Machine learning” OR “Deep learning” OR “Reinforcement learning”) NOT “Survey” NOT “Review” NOT “Kinect” NOT “Video” NOT “Infrared” NOT “Ambient”)
      Vision-based (Topic): ((“Fall detection” OR “Falls” OR “Fall accident”) AND (“Video” OR “Visual” OR “Vision-based” OR “Kinect” OR “Depth camera” OR “Video surveillance” OR “RGB camera” OR “Infrared camera” OR “Monocular camera” OR “Machine learning” OR “Deep learning” OR “Reinforcement learning”) NOT “Wearable” NOT “Ambient”)
      Ambient-based (Topic): ((“Fall detection” OR “Falls” OR “Fall accident”) AND (“Ambient” OR “Ambient-based” OR “Ambience-based” OR “RF-sensing” OR “WiFi” OR “Cellular” OR “vibration” OR “Ambience” OR “Radar” OR “Machine learning” OR “Deep learning” OR “Reinforcement learning”) NOT “Wearable” NOT “vision”)
      Sensor Fusion (Topic): ((“Fall detection” OR “Falls” OR “Falls accident”) AND (“Health monitoring” OR “Multiple sensors” OR “Sensor fusion” OR “Sensor network” OR “Data fusion” OR “IoT” OR “Camera arrays” OR “Decision fusion” OR “Health monitoring” OR “Fusion” OR “Multiple sensors” OR “Machine learning” OR “Deep learning” OR “Reinforcement learning”))

      Illustration of the literature search strategy. The wearable-based queries in Table 2 return 28 articles. The vision- and ambient-based queries return 31 articles, and the sensor fusion queries return 28 articles.

      3. Hardware and Software Components Involved in a Fall Detection System

      Most of the research of fall detection share a similar system architecture, which can be divided into four layers, namely Physiological Sensing Layer (PSL), Local Communication Layer (LCL), Information Processing Layer (IPL), and User application Layer (UAL), as suggested by Ray (2014) and illustrated in Figure 4.

      The main components typically present within fall detection system architectures include the illustrated sequence of four layers. Data is collected in the physiological sensing layer, transferred through the local communication layer, then it is analyzed in the information processing layer, and finally the results are presented in the user application layer.

      PSL is the fundamental layer that contains various (smart) sensors used to collect physiological and ambient data from the persons being monitored. The most commonly used sensors nowadays include accelerometers that sense acceleration, gyroscopes that detect angular velocity, and magnetometers which sense orientation. Video surveillance cameras, which provide a more traditional means of sensing human activity, are also often used but are installed in specific locations, typically with fixed fields of views. More details about PSL are discussed in sections 4.1 and 5.1.

      The next layer, namely LCL, is responsible for sending the sensor signals to the upper layers for further processing and analysis. This layer may have both wireless and wired methods of transmission, connected to local computing facilities or to cloud computing platforms. LCL typically takes the form of one (or potentially more) communication protocols, including wireless mediums like cellular, Zigbee, Bluetooth, WiFi, or even wired connections. We provide more details on LCL in sections 4.2 and 5.2.

      IPL is a key component of the system. It includes hardware and software components, such as micro-controllers, to analyze data and transfer it from the PSL to higher layers. In terms of software components, different kinds of algorithms, such as threshold-based methods, conventional machine learning, deep learning, and deep reinforcement learning, are discussed in sections 4.3, 5.3, and 8.1.

      Finally, the UAL concerns applications that assist the users. For instance, if a fall is detected in the IPL, a notification can first be sent to the user and if the user confirms the fall or does not answer, an alarm is sent to the nearest emergency caregivers who are expected to take immediate action. There are plenty of other products like Shimmer and AlertOne, which have been deployed as commercial applications to users. We also illustrate other different kinds of applications in section 7.

      4. Fall Detection Using Individual Sensors

      4.1. Physiological Sensing Layer (PSL) of Individual Sensors

      As mentioned above, fall detection research has applied either a single sensor or a fusion of multiple sensors. The methods of collecting data are typically divided into four main categories, namely individual wearable sensors, individual visual sensors, individual ambient sensors, and data fusion by sensor networks. Whilst some literature groups visual and ambient sensors together, we treat them as two different categories in this survey paper, since visual sensors have become more prominent as a detection method with the advent of depth (RGB-D) cameras, such as the Kinect.

      4.1.1. Individual Wearable Sensors

      Falls may result in key physiological variations of the human body, which provide a criterion to detect a fall. By measuring various human body related attributes using accelerometers, gyroscopes, glucometers, pressure sensors, ECG (electrocardiography), EEG (electroencephalography), or EMG (electromyography), one can detect anomalies within subjects. Due to the advantages of mobility, portability, low cost, and availability, wearable devices are regarded as one of the key types of sensors for fall detection, have been widely studied, and are considered a promising direction for studying fall detection and prediction.

      Based on our search criteria and filtering strategy (Tables 1, 2), 28 studies on fall detection by individual wearable devices, including eight papers focusing on public data sets, are selected and described to illustrate the trends and challenges of fall detection during the past 6 years. Some conclusions can be drawn from this literature in comparison to the studies before 2014. From Table 3, we note that studies applying accelerometers account for a large percentage of research in this field. To the best of our knowledge, only Xi et al. (2017) deployed electromyography to detect falls, and 19 out of 20 papers applied an accelerometer to detect falls. Although the equipment used, such as Shimmer nodes, smartphones, and smart watches, often contains other sensors like gyroscopes and magnetometers, these sensors were not used to detect falls. Bourke et al. (2007) also found that accelerometers are regarded as the most popular sensors for fall detection, mainly due to their affordable cost, easy installation, and relatively good performance.

      Fall detection using individual wearable devices from 2014 to 2020.

      References Sensor Location No. subjects (age) Data sets Algorithms Equipment Alarm
      Saleh and Jeannès (2019) Accelerometer Waist 23 (19–30), 15 (60–75) Simulated SVM N/A N
      Zitouni et al. (2019) Accelerometer Sole 6 (N/A) Simulated Threshold Smartsole N/A
      Thilo et al. (2019) Accelerometer Torso 15 (mean = 81) N/A N/A N/A Y
      Wu et al. (2019) Accelerometer Chest and Thigh 42 (N/A), 36 (N/A) Public (Simulated) Decision tree Smartwatch (Samsung watch) N/A
      Sucerquia et al. (2018) Accelerometer Waist 38 (N/A) Public data sets
      Chen et al. (2018) Accelerometer Leg (pockets) 10 (20–26) N/A ML(SVM) Smartphones Y
      Putra et al. (2017) Accelerometer Waist 38 (N/A), 42 (N/A) Public data sets ML N N/A
      Khojasteh et al. (2018) Accelerometer N/A 17 (18–55), 6 (N/A), 15 (mean = 66.4) Public (Simulated) Threshold/ML N/A N/A
      de Araújo et al. (2018) Accelerometer Wrist 1 (30) N/A Threshold Smartwatch N/A
      Djelouat et al. (2017) Accelerometer Waist N/A Collected by authors (Simulated) ML Shimmer-3 Y
      Aziz et al. (2017) Accelerometer Waist 10 (mean = 26.6) Collected by authors (Simulated) Threshold/ML Accelerometers (Opal model, APDM Inc) N
      Kao et al. (2017) Accelerometer Wrist N/A Collected by authors (Simulated) ML ZenWatch(ASUS) Y
      Islam et al. (2017) Accelerometer Chest (pocket) 7 (N/A) N/A Threshold Smartphone N/A
      Xi et al. (2017) Electro-myography (sEMG) Ankle, Leg 3 (24–26) Collected by authors (Simulated) ML EMGworks 4.0 (DelSys Inc.) N
      Chen et al. (2017b) Accelerometer Lumbar, Thigh 22 (mean = 69.5) Public data sets (Real) ML N/A N/A
      Chen et al. (2017b) Accelerometer Chest, Waist, Arm, Hand N/A Collected by authors (Simulated) Threshold N/A Y
      Medrano et al. (2017) Accelerometer N/A 10 (20–42) Public (Simulated) ML Smartphones N
      Shi et al. (2016) Accelerometer N/A 10 (mean = 25) N/A Threshold Smartphone N/A
      Wu et al. (2015) Accelerometer Waist 3 (23, 42, 60) Collected by authors (Simulated) Threshold ADXL345 Accelerometer(ADI) Y
      Mahmud and Sirat (2015) Accelerometer Waist 13 (22–23) Collected by authors (Simulated) Threshold Shimmer N/A

      ML is the abbreviation of Machine Learning.

      Although smartphones have gained attention for studying falls, the underlying sensors of systems using them are still accelerometers and gyroscopes (Shi et al., 2016; Islam et al., 2017; Medrano et al., 2017; Chen et al., 2018). Users are more likely to carry smartphones all day rather than extra wearable devices, so smartphones are useful for eventual real-world deployments (Zhang et al., 2006; Dai et al., 2010).

      4.1.2. Individual Visual Sensors

      Vision-based detection is another prominent method. Extensive effort in this direction has been demonstrated, some of which (Akagündüz et al., 2017; Ko et al., 2018; Shojaei-Hashemi et al., 2018) shows promising performance. Although most cameras are not as portable as wearable devices, they offer other advantages which make them decent options depending on the scenario. Most static RGB cameras do not encumber the subject and are wired, hence there is no need to worry about battery limitations. The viability of vision-based approaches has been demonstrated using infrared cameras (Mastorakis and Makris, 2014), RGB cameras (Charfi et al., 2012), and RGB-D depth cameras (Cai et al., 2017). One main challenge of vision-based detection is the potential violation of privacy due to the level of detail that cameras can capture, such as personal information, appearance, and visuals of the living environment.

      Further to the information that we report in Table 4, we note that RGB, depth, and infrared cameras are the three main visual sensors used. Moreover, it can be noted that the RGB-D camera (Kinect) is the most popular vision-based sensor, as 12 out of 22 studies applied it in their work. Nine out of the other 10 studies used RGB cameras, including cameras built into smartphones, web cameras, and monocular cameras, while the remaining study used the infrared camera within the Kinect to conduct their experiments.

      Fall detection using individual vision-based devices from 2014 to 2020.

      References Sensor No. subjects (age) Data sets Algorithms Real-time Alarm
      Han et al. (2020) Web camera N/A Simulated CNN N/A N/A
      Kong et al. (2019) Camera (Surveillance) N/A Public (Simulated) CNN Y N/A
      Ko et al. (2018) Camera (Smartphone) N/A Simulated Rao-Blackwellized Particle Filtering N/A N
      Shojaei-Hashemi et al. (2018) Kinect 40 (10–15) Public (Simulated) LSTM Y N
      Min et al. (2018) Kinect 4 (N/A), 11 (22–39) Public (Simulated) SVM Y N
      Ozcan et al. (2017) Web camera 10 (24–31) Simulated Relative-entropy-based N/A N/A
      Akagündüz et al. (2017) Kinect 10 (N/A) Public (Simulated) SDU (2011) Silhouette N/A N
      Adhikari et al. (2017) Kinect 5 (19–50) Simulated CNN N/A N
      Ozcan and Velipasalar (2016) Camera (Smartphone) 10 (24–31) Simulated Threshold/ML N/A N/A
      Senouci et al. (2016) Web Camera N/A Simulated SVM Y Y
      Amini et al. (2016) Kinect v2 11 (24–31) Simulated Adaptive Boosting Trigger, Heuristic Y N
      Kumar et al. (2016) Kinect 20 (N/A) Simulated SVM N/A N
      Aslan et al. (2015) Kinect 20 (N/A) Public (Simulated) SVM N/A N
      Yun et al. (2015) Kinect 12 (N/A) Simulated SVM N/A N
      Stone and Skubic (2015) Kinect 454 (N/A) Public (Simulated+Real) Decision trees N/A N
      Bian et al. (2015) Kinect 4 (24–31) Simulated SVM N/A N
      Chua et al. (2015) RGB camera N/A Simulated Human shape variation Y N
      Boulard et al. (2014) Web camera N/A Real Elliptical bounding box N/A N
      Feng et al. (2014) Monocular camera N/A Simulated Multi-class SVM Y N
      Mastorakis and Makris (2014) Infrared sensor (Kinect) N/A Simulated 3D bounding box Y N
      Gasparrini et al. (2014) Kinect N/A Simulated Depth frame analysis Y N
      Yang and Lin (2014) Kinect N/A Simulated Silhouette N/A N

      Static RGB cameras were the most widely used sensors in vision-based fall detection research conducted before 2014, although the accuracies of RGB camera-based detection systems vary drastically with environmental conditions, such as illumination changes, which often results in limitations during the night. Besides, RGB cameras are inherently likely to have a higher false alarm rate because some deliberate actions, like lying on the floor, sleeping, or sitting down abruptly, are not easily distinguished in frames captured by RGB cameras. The launch of the Microsoft Kinect, which consists of an RGB camera, a depth sensor, and a multi-array microphone, stimulated a trend in 3D data collection and analysis, causing a shift from RGB to RGB-D cameras. Kinect depth cameras took the place of traditional RGB cameras and became the second most popular sensor in the field of fall detection after 2014 (Xu et al., 2018).
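
      The shape cue behind many RGB pipelines, and the reason deliberate lying down triggers false alarms, can be illustrated with a minimal OpenCV sketch; the area threshold below is a hypothetical placeholder.

      import cv2

      # Minimal shape-based fall cue: background subtraction followed by a
      # bounding-box aspect-ratio test. A box wider than it is tall suggests
      # a lying posture, which is exactly why deliberate lying down produces
      # false alarms with this kind of cue.
      subtractor = cv2.createBackgroundSubtractorMOG2()

      def lying_posture(frame, min_area=5000):  # min_area is hypothetical
          mask = subtractor.apply(frame)
          # OpenCV 4.x returns (contours, hierarchy)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          for contour in contours:
              if cv2.contourArea(contour) < min_area:
                  continue  # ignore small blobs (noise)
              x, y, w, h = cv2.boundingRect(contour)
              if w > h:  # wider than tall: possible fall or lying down
                  return True
          return False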

      In recent years, we have seen an increased interest in the use of wearable cameras for the detection of falls. For instance, Ozcan and Velipasalar (2016) exploited the cameras on smartphones. Smartphones were attached to the waists of subjects and their built-in cameras were used to record visual data. Ozcan et al. (2017) investigated how web cameras (e.g., Microsoft LifeCam) attached to the waists of subjects can contribute to fall detection. Although both approaches are not yet practical for deployment in real applications, they show a new direction, which combines the advantages of wearable and visual sensors.

      Table 4 reports the work conducted for individual vision-based sensors. The majority of research still makes use of simulated data. Only two studies use real world data; the one by Boulard et al. (2014) has actual fall data and the other by Stone and Skubic (2015) has mixed data, including 9 genuine falls and 445 simulated falls by trained stunt actors. In contrast to the real data sets from the work of Klenk et al. (2016) collected by wearable devices, there are few purely genuine data sets collected in real life scenarios using individual visual sensors.

      4.1.3. Individual Ambient Sensors

      Ambient sensors provide another non-intrusive means of fall detection. Sensors like active infrared, RFID, pressure, smart tiles, magnetic switches, Doppler radar, ultrasonic sensors, and microphones are used to detect the environmental changes caused by falling, as shown in Table 5. They enable an innovative direction in this field, namely passive and pervasive detection. Ultrasonic sensor network systems are among the earliest solutions in fall detection systems. Hori et al. (2004) argued that one can detect falls by placing a series of spatially distributed sensors in the space where elderly persons live. In Wang et al. (2017a,b), a new fall detection approach which uses ambient sensors is proposed. It relies on Wi-Fi, which, due to its non-invasive and ubiquitous characteristics, is gaining more and more popularity. However, the studies by Wang et al. (2017a,b) are limited in terms of multi-person detection, as their classifiers are not robust enough to distinguish new subjects and environments. In order to tackle this issue, other studies have developed more sophisticated methods. These include the Aryokee (Tian et al., 2018) and FallDeFi (Palipana et al., 2018) systems. The Aryokee system is ubiquitous, passive, and uses RF-sensing methods. Over 140 people were engaged to perform 40 kinds of activities in different environments for the collection of data, and a convolutional neural network was utilized to classify falls. Palipana et al. (2018) developed a fall detection technique named FallDeFi, which is based on WiFi signals as the enabling sensing technology. They provided a system applying the time-frequency analysis of WiFi Channel State Information (CSI) and achieved above 93% average accuracy (an illustrative sketch of this kind of pipeline follows Table 5).

      Fall detection using individual ambient devices from 2014 to 2020.

      References Sensor No. subjects (age) Data sets Algorithms Real-time Alarm
      Huang et al. (2019) Vibration 12 (19-29) Simulated HMM Y N/A
      Hao et al. (2019) WiFi N/A Simulated SVM Y N/A
      Tian et al. (2018) FMCW radio 140 (N/A) Simulated CNN Y N/A
      Palipana et al. (2018) WiFi 3 (27-30) Simulated SVM Y N/A
      Wang et al. (2017a) WiFi 6 (21-32) Simulated SVM Y N/A
      Wang et al. (2017b) WiFi N/A Simulated SVM, Random Forests N/A N/A
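
      The time-frequency pipeline used by systems such as FallDeFi can be sketched schematically as follows; this illustrates the general CSI spectrogram-plus-SVM idea under assumed parameters, not the authors' implementation.

      import numpy as np
      from scipy.signal import spectrogram
      from sklearn.svm import SVC

      def csi_features(csi_amplitude, fs=1000):  # sampling rate is assumed
          # Flatten the log-spectrogram of a 1-D CSI amplitude series.
          _, _, sxx = spectrogram(csi_amplitude, fs=fs, nperseg=256)
          return np.log1p(sxx).ravel()

      def train_fall_classifier(windows, labels):
          # windows: equal-length CSI amplitude windows; labels: 1 = fall, 0 = other.
          features = np.array([csi_features(w) for w in windows])
          clf = SVC(kernel="rbf")
          clf.fit(features, labels)
          return clf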

      RF-sensing technologies have also been widely applied to recognition activities beyond fall detection (Zhao et al., 2018; Zhang et al., 2019), even for subtle movements. Zhao et al. (2018) studied human pose estimation with multiple persons. Their experiments showed that RF-Pose performs better under occlusion. This improvement is attributable to the ability of their method to estimate the pose of a subject through a wall, something that visual sensors fail to do. Further research on RF-sensing was conducted by Niu et al. (2018), with applications to finger gesture recognition, human respiration, and chin movement. Their research can potentially be used in applications of autonomous health monitoring and home appliance control. Furthermore, Zhang et al. (2019) used an RF-sensing approach in their proposed system WiDIGR for gait recognition. Guo et al. (2019) claimed that RF-sensing is drawing more attention because it is device-free for users and, in contrast to RGB cameras, it can work under low light conditions and occlusions.

      4.1.4. Subjects

      For most research groups there is not enough time and funding to collect data continuously over several years to study fall detection. Due to the rarity of genuine data in fall detection and prediction, Li et al. (2013) hired stunt actors to simulate different kinds of falls. There are also many data sets of falls which are simulated by young healthy students, as indicated in the studies by Bourke et al. (2007) and Ma et al. (2014). For obvious reasons, elderly subjects cannot be engaged to perform the motion of falls for data collection. For most of the existing data sets, falls are simulated by young volunteers who perform soft falls under the protection of soft mats on the ground. Elderly subjects, however, often behave quite differently due to less control over the speed of the body. One potential solution could include simulated data sets created using physics engines, such as OpenSim. Previous research (Mastorakis et al., 2007, 2018) has shown that simulated data from OpenSim contributed to an increase in the performance of the resulting models. Another solution involves online learning algorithms which adapt to subjects who were not represented in the training data. For instance, Deng et al. (2014) applied the Transfer learning reduced Kernel Extreme Learning Machine (RKELM) approach and showed how a trained classifier—based on data sets collected from young volunteers—can be adapted to the elderly. The algorithm consists of two parts, namely offline classification modeling and online updating modeling, which is used to adapt to new subjects. After the model is trained offline on labeled training data, unlabeled test samples are fed into the pre-trained RKELM classifier, which produces a confidence score. The samples that obtain a confidence score above a certain threshold are used to update the model. In this way, the model is able to adapt gradually to new subjects as new samples are received. Namba and Yamada (2018a,b) demonstrated how deep reinforcement learning can be applied to assistive mobile robots, in order to adapt to conditions that were not present in the training set.
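
      The confidence-thresholded online updating idea can be sketched with a generic incremental classifier standing in for the RKELM of Deng et al. (2014); the confidence threshold below is a hypothetical placeholder.

      import numpy as np
      from sklearn.linear_model import SGDClassifier

      CONFIDENCE_TH = 0.9  # hypothetical confidence threshold

      # A probabilistic incremental classifier stands in for RKELM here.
      clf = SGDClassifier(loss="log_loss")  # logistic loss enables predict_proba
                                            # (named "log" in older scikit-learn)

      def offline_train(X, y):
          # Initial offline training, e.g., on data from young volunteers.
          clf.partial_fit(X, y, classes=np.array([0, 1]))

      def online_update(x_new):
          # Classify a sample from a new subject; if the prediction is
          # confident enough, use it as a pseudo-label to adapt the model.
          proba = clf.predict_proba(x_new.reshape(1, -1))[0]
          label = int(proba.argmax())
          if proba[label] >= CONFIDENCE_TH:
              clf.partial_fit(x_new.reshape(1, -1), [label])
          return label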

      4.2. Local Communication Layer (LCL) of Individual Sensors

      There are two components which are involved with communication within such systems. Firstly, data collected from different smart sensors are sent to local computing facilities or remote cloud computing. Then, after the final decision is made by these computing platforms, instructions and alarms are sent to appointed caregivers for immediate assistance (El-Bendary et al., 2013).

      Data communication protocols are divided into two categories, namely wireless and wired transmission. For the former, transmission protocols include Zigbee, Bluetooth, WiFi, WiMAX, and cellular networks.

      Most of the studies that used individual wearable sensors deployed commercially available wearable devices. In those cases, data was communicated by transmission modules built in the wearable products, using mediums such as Bluetooth and cellular networks. In contrast to detection systems using wearable devices, most static vision- and ambient-based studies are connected to smart gateways by wired connections. These approaches are usually applied as static detection methods, so a wired connection is a better choice.
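
      As a concrete illustration of the wireless case, the sketch below shows a wearable node publishing accelerometer samples to a local gateway over MQTT; the broker address and topic are hypothetical placeholders.

      import json
      import time
      import paho.mqtt.client as mqtt

      # paho-mqtt 1.x style client; version 2.x additionally requires a
      # callback API version argument.
      client = mqtt.Client()
      client.connect("gateway.local", 1883)  # hypothetical local broker

      def publish_sample(ax, ay, az):
          # Send one timestamped tri-axial accelerometer sample to the gateway.
          payload = json.dumps({"t": time.time(), "ax": ax, "ay": ay, "az": az})
          client.publish("home/wearable/accel", payload)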

      4.3. Information Processing Layer (IPL) of Individual Sensors

      4.3.1. Detection Using Threshold-Based and Data-Driven Algorithms

      Threshold-based and data-driven algorithms (including machine learning and deep learning) are the two main approaches that have been used for fall detection. Threshold-based approaches are usually used for data coming from individual sensors, such as accelerometers, gyroscopes, and electromyography. Their decisions are made by comparing values measured by the concerned sensors to empirically established threshold values. Data-driven approaches are more applicable for sensor fusion, as they can learn non-trivial non-linear relationships from the data of all involved sensors. In terms of the algorithms used to analyze data collected using wearable devices, Figure 5 demonstrates that there has been a significant shift toward machine learning based approaches, in comparison to the work conducted between 1998 and 2012. Of the papers presented between 1998 and 2012, threshold-based approaches accounted for 71%, while only 4% applied machine learning based methods (Schwickert et al., 2013). We believe that this shift is due to two main reasons. First, the rapid development of affordable sensors and the rise of the Internet-of-Things made it possible to more easily deploy multiple sensors in different applications. As mentioned above, the non-linear fusion of multiple sensors can be modeled very well by machine learning approaches. Second, with the breakthrough of deep learning, threshold-based approaches have become even less preferable. Moreover, different types of machine learning approaches have been explored, namely Bayesian networks, rule-based systems, nearest neighbor-based techniques, and neural networks. These data-driven approaches (Gharghan et al., 2018) show better accuracy and are more robust in comparison to threshold-based methods. Notable is the fact that data-driven approaches are more resource-hungry than threshold-based methods. With the continuing advancement of technology, however, this is not a major concern, and we foresee that more effort will be invested in this direction.
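
      A minimal example of the threshold-based family is an impact test on the signal magnitude vector (SMV) of a tri-axial accelerometer; the threshold value below is a hypothetical placeholder.

      import math

      SMV_TH = 2.5  # impact threshold in units of g, hypothetical

      def smv(ax, ay, az):
          # Signal magnitude vector of one tri-axial accelerometer sample.
          return math.sqrt(ax * ax + ay * ay + az * az)

      def is_fall_candidate(ax, ay, az):
          # A large SMV spike suggests an impact; deployed systems usually add
          # posture and inactivity checks to suppress false alarms.
          return smv(ax, ay, az) > SMV_TH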

      Different types of methods used in fall detection using individual wearable sensors in the period 1998–2012 based on the survey of Schwickert et al. (2013) and in the period 2014–2020 based on our survey. The term “others” refers to traditional methods that are neither based on threshold nor on machine learning, and the term “N/A” stands for not available and refers to studies whose methods are not clearly defined.

      4.3.2. Detection Using Deep Learning

      Traditional machine learning approaches determine mapping functions between handcrafted features extracted from raw training data and the respective output labels (e.g., no fall or fall, to keep it simple). The extraction of handcrafted features requires domain expertise and is, therefore, limited to the knowledge of the domain experts. Though such a limitation is imposed, the literature shows that traditional machine learning, based on support vector machines, hidden Markov models, and decision trees, is still very active in the field of fall detection using individual wearable non-visual or ambient sensors (e.g., accelerometers) (Wang et al., 2017a,b; Chen et al., 2018; Saleh and Jeannès, 2019; Wu et al., 2019). For visual sensors, the trend has been moving toward deep learning with convolutional neural networks (CNNs) (Adhikari et al., 2017; Kong et al., 2019; Han et al., 2020) or LSTMs (Shojaei-Hashemi et al., 2018). Deep learning is a sophisticated learning framework that, besides the mapping function (as mentioned above and used in traditional machine learning), also learns the features (in a hierarchical fashion) that characterize the concerned classes (e.g., falls and no falls). This approach has been inspired by the visual system of the mammalian brain (LeCun et al., 2015). In computer vision applications, which take images or videos as input, deep learning has been established as the state-of-the-art. In this regard, similar to other computer vision applications, fall detection approaches that rely on vision data have been shifting from traditional machine learning to deep learning in recent years.
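
      For illustration, a small frame-based CNN classifier could look like the Keras sketch below; the architecture and input size are arbitrary choices for the example and do not reproduce any surveyed model.

      import tensorflow as tf

      # Tiny CNN mapping a single (e.g., depth) frame to P(fall).
      model = tf.keras.Sequential([
          tf.keras.Input(shape=(64, 64, 1)),               # assumed frame size
          tf.keras.layers.Conv2D(16, 3, activation="relu"),
          tf.keras.layers.MaxPooling2D(),
          tf.keras.layers.Conv2D(32, 3, activation="relu"),
          tf.keras.layers.MaxPooling2D(),
          tf.keras.layers.Flatten(),
          tf.keras.layers.Dense(64, activation="relu"),
          tf.keras.layers.Dense(1, activation="sigmoid"),  # fall probability
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy",
                    metrics=["accuracy"])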

      4.3.3. Real Time and Alarms

      Real-time operation is a key feature for fall detection systems, especially for commercial products. Considering that certain falls can be fatal or detrimental to health, it is crucial that deployed fall detection systems have high computational efficiency, preferably operating in (near) real-time. Below, we comment on how the methods proposed in the reviewed literature fit within this aspect.

      The percentage of studies applying real-time detection with static visual sensors is lower than that of wearable devices. For the studies using wearable devices, Table 3 illustrates that six out of 20 studies that we reviewed can detect falls and send alarms. There are, however, few studies which demonstrate the ability to process data and send alerts in real-time among the work conducted using individual visual sensors. Based on Table 4, one can note that although 40.9% (nine out of 22) of the studies claim that their systems can be used in real-time, only one study showed that an alarm can actually be sent in real-time. The following are a couple of reasons why a higher percentage of vision-based systems cannot be used in real-time. Firstly, visual data is much larger and, therefore, its processing is more time consuming than that of one-dimensional signals coming from non-vision-based wearable devices. Secondly, most of the work using vision sensors conducted experiments with off-line methods, and modules like data transmission were not involved.

      4.3.3.1. Summary

      For single-sensor-based fall detection systems most of the studies used data sets that include simulated falls by young and healthy volunteers. Further work is needed to establish whether such simulated falls can be used to detect genuine falls by the elderly.

      The types of sensors utilized in fall detection systems have changed in the past 6 years. For individual wearable sensors, accelerometers are still the most frequently deployed sensors. Static vision-based devices shifted from RGB to RGB-D cameras.

      Data-driven machine learning and deep learning approaches are gaining more popularity especially with vision-based systems. Such techniques may, however, be heavier than threshold-based counterparts in terms of computational resources.

      The majority of proposed approaches, especially those that rely on vision-based sensors, work in offline mode as they cannot operate in real-time. While such methods can be effective in terms of detection, their practical use is debatable as the time to respond is crucial.

      5. Sensor Fusion by Sensor Network

      5.1. Physiological Sensing Layer (PSL) Using Sensor Fusion

      5.1.1. Sensors Deployed in Sensor Networks

      In terms of sensor fusion, there are two categories, typically referred to as homogeneous and heterogeneous, which take input from three types of sensors, namely wearable, visual, and ambient sensors, as shown in Figure 6. Sensor fusion involves using multiple and different signals coming from various devices, which may include, for instance, accelerometers, gyroscopes, magnetometers, and visual sensors, among others. This is all done to combine the strengths of all devices for the design and development of more robust algorithms that can be used to monitor the health of subjects and detect falls (Spasova et al., 2016; Ma et al., 2019).

      Different kinds of individual sensors and sensor networks, including vision-based, wearable, and ambient sensors, along with sensor fusion.

      For vision-based detection approaches, the fusion of signals coming from RGB cameras (Charfi et al., 2012) and RGB-D depth cameras, along with camera arrays, has been studied (Zhang et al., 2014). Such fusion provides more viewpoints of the monitored locations and improves stability and robustness by decreasing false alarms due to occluded falls (Auvinet et al., 2011).

      Li et al. (2018) combined accelerometer data from smartphones with Kinect depth data as well as smartphone camera signals. Liu et al. (2014) fused data from passive infrared sensors with Doppler radar data, while Yazar et al. (2014) fused data from passive infrared sensors with vibration sensors. Among these, accelerometers and depth cameras (Kinect) are the most frequently studied due to their low cost and effectiveness.
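
      At its simplest, decision-level fusion combines the independent verdicts of the per-sensor pipelines, as the sketch below illustrates; requiring agreement lowers false alarms, while accepting either vote raises sensitivity.

      # Schematic decision-level fusion of two independent detectors.
      def fuse_decisions(accel_says_fall, depth_says_fall, mode="and"):
          if mode == "and":  # conservative: fewer false alarms
              return accel_says_fall and depth_says_fall
          return accel_says_fall or depth_says_fall  # sensitive: fewer misses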

      5.1.2. Sensor Networks Platform

      Most of the existing IoT platforms, such as Microsoft Azure IoT, IBM Watson IoT Platform, and Google Cloud Platform, have not been used in the deployment of fall detection approaches by sensor fusion. In general, research studies on fall detection using sensor fusion are carried out by offline methods and decision fusion approaches. Therefore, in such studies, there is no need for data transmission and storage modules. From Tables 6, 7, one can also observe that most of the time researchers applied their own workstations or personal computers as their platforms, as there was no need for the integration of sensors and real-time analysis in terms of fall detection in off-line mode.

      Fall detection by fusion of wearable sensors from 2014 to 2020.

      Fusion within wearable sensors
      References Sensor No. subjects (age) Data sets Algorithms Real-time (Alarm) Fusion method Platforms
      Kerdjidj et al. (2020) Accelerometer, Gyroscope 17 (N/A) Simulated Compressive sensing Y (N/A) Feature fusion N/A
      Xi et al. (2020) Electromyography, Plantar Pressure 12 (23–27) Simulated FMMNN, DPK-OMELM Y (Y) Feature fusion N/A
      Chelli and Pätzold (2019) Accelerometer, Gyroscope 30 (N/A) Public (Simulated) KNN, ANN, QSVM, EBT Y (N/A) Feature fusion N/A
      Queralta et al. (2019) Accelerometer, Gyroscope, Magnetometer 57 (20-47) Public (Simulated) LSTM Y (Y) Feature fusion N/A
      Gia et al. (2018) Accelerometer, Gyroscope, Magnetometer 2 (N/A) N/A Threshold Y (Y) Feature fusion N/A
      de Quadros et al. (2018) Accelerometer, Gyroscope, Magnetometer 22 (mean = 26.09) Simulated Threshold/ML N/A Feature fusion N/A
      Yang et al. (2016) Accelerometer, Gyroscope, Magnetometer 5 (N/A) Simulated SVM Y (Y) Feature fusion PC
      Pierleoni et al. (2015) Accelerometer, Gyroscope, Magnetometer 10 (22–29) Simulated Threshold Y (Y) Feature fusion ATmega328p (ATMEL)
      Nukala et al. (2014) Accelerometer, Gyroscopes 2 (N/A) Simulated ANN Y (N/A) Feature fusion PC
      Kumar et al. (2014) Accelerometer, Pressure sensors, Heart rate monitor N/A Simulated Threshold Y (Y) Partial fusion PC
      Hsieh et al. (2014) Accelerometer, Gyroscope 3 (N/A) Simulated Threshold N/A Partial fusion N/A

      Fall detection using fusion of sensor networks from 2014 to 2020.

      References Sensor No. subjects (age) Data sets Algorithms Real-time (Alarm) Fusion method Platforms
      Fusion within visual sensors and ambient sensors
      Espinosa et al. (2019) Two cameras 17 (18-24) Simulated CNN N/A (N) Feature fusion N/A
      Ma et al. (2019) RGB camera, Thermal camera 14 (N/A) Simulated CNN N/A (N) Partial fusion N/A
      Spasova et al. (2016) Web Camera, Infrared sensor 5 (27-81) Simulated SVM Y (Y) Partial fusion A13-OlinuXino
      Fusion within different kinds of individual sensors
      Martínez-Villaseñor et al. (2019) Accelerometer, Gyroscope, Ambient light, Electroencephalograph, Infrared sensors, Web cameras 17 (18–24) Simulated Random Forest, SVM, ANN, kNN, CNN N/A (N/A) Feature fusion N/A
      Li et al. (2018) Accelerometer (smartphone), Kinect N/A Simulated SVM, Threshold Y (N/A) Decision fusion N/A
      Daher et al. (2017) Force sensors, Accelerometers 6 (N/A) Simulated Threshold N (N/A) Decision fusion N/A
      Ozcan and Velipasalar (2016) Camera (smartphone), Accelerometer 10 (24 -30) Simulated Histogram of oriented gradients Y (Y) Decision fusion N/A
      Kwolek and Kepski (2016) Accelerometer, Kinect 5 (N/A) Simulated Fuzzy logic Y (Y) Feature fusion, Partial fusion PandaBoard ES
      Sabatini et al. (2016) Barometric altimeters, Accelerometer, Gyroscope 25 (mean = 28.3) Simulated Threshold N/A (N) Feature fusion N/A
      Chen et al. (2015) Kinect, Inertial sensor 12 (23–30) Public (Simulated) Ofli et al. (2013) Collaborative representation N/A (N) Feature fusion N/A
      Gasparrini et al. (2015) Kinect v2, Accelerometer 11 (22-39) Simulated Threshold N (N/A) Data fusion N/A
      Kwolek and Kepski (2014) Accelerometer, Kinect 5 (N/A) Public (Simulated) URF (2014) SVM, k-NN Y (Y) Partial fusion PandaBoard ES
      Kepski and Kwolek (2014) Accelerometer, Kinect 30 (under 28) Simulated Algorithms Y (N) Partial fusion PandaBoard
      Liu et al. (2014) Passive infrared sensor, Doppler radar sensor 454 (N/A) Simulated + Real life SVM N/A (N) Decision fusion N/A
      Yazar et al. (2014) Passive infrared sensors, Vibration sensor N/A Simulated Threshold, SVM N/A (N) Decision fusion N/A

      Some works, such as those in Kwolek and Kepski (2014), Kepski and Kwolek (2014), and Kwolek and Kepski (2016), applied low-power single-board computer development platforms running Linux, namely the PandaBoard, PandaBoard ES, and A13-OlinuXino. The A13-OlinuXino is an ARM-based single-board computer that runs the Debian Linux distribution. The PandaBoard ES, an updated version of the PandaBoard, can run different kinds of Linux-based operating systems, including Android and Ubuntu. It features 1 GB of DDR2 SDRAM, dual USB 2.0 ports, and wired 10/100 Ethernet, along with wireless Ethernet and Bluetooth connectivity. Linux is well-known for real-time embedded platforms since it provides various flexible inter-process communication methods, which makes it quite suitable for fall detection using sensor fusion.

In the research by Kwolek and Kepski (2014, 2016), the wearable device and the Kinect were connected to the PandaBoard through Bluetooth and a cable, respectively. First, data was collected by the accelerometer and the Kinect sensor individually, and then transmitted and stored on a memory card. The data transmission is asynchronous since the accelerometer and the Kinect have different sampling rates. Finally, all data was grouped together and processed by classification models that detected falls. The authors reported high accuracy rates, but could not compare with other approaches since there was no benchmark data set.
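
To illustrate how such asynchronous streams can be grouped, the sketch below pairs each camera frame with the nearest accelerometer sample by timestamp. This is a minimal illustration of nearest-neighbor alignment under assumed sampling rates (50 Hz and 30 fps); it is not the cited authors' implementation.

```python
import bisect

def align_streams(accel, frames, tolerance=0.05):
    """Pair each camera frame with the nearest accelerometer sample.

    accel  -- list of (timestamp_s, ax, ay, az), sorted by timestamp
    frames -- list of (timestamp_s, frame_data), sorted by timestamp
    Returns (frame_time, frame, accel_sample) triples, dropping frames
    with no accelerometer sample within `tolerance` seconds.
    """
    accel_times = [s[0] for s in accel]
    paired = []
    for t, frame in frames:
        i = bisect.bisect_left(accel_times, t)
        # Candidates: the samples just before and just after the frame time.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(accel)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(accel_times[k] - t))
        if abs(accel_times[j] - t) <= tolerance:
            paired.append((t, frame, accel[j]))
    return paired

# Example: a 50 Hz accelerometer stream and a 30 fps depth stream.
accel = [(k / 50.0, 0.0, 0.0, 9.81) for k in range(100)]
frames = [(k / 30.0, f"frame{k}") for k in range(60)]
print(len(align_streams(accel, frames)))  # ~60 paired observations
```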

Spasova et al. (2016) applied the A13-OlinuXino board as their platform. A standard web camera was connected to it via USB, and an infrared camera was connected via I2C (Inter-Integrated Circuit). Their experiment achieved excellent performance, with over 97% sensitivity and specificity. They claim that their system can run in real time on low-cost hardware and an open-source software platform.

Apart from the platforms mentioned above, the majority of fall detection studies trained their models offline on personal computers using a single sensor. The studies in Kwolek and Kepski (2014), Kepski and Kwolek (2014), Kwolek and Kepski (2016), and Spasova et al. (2016) utilized single-board computer platforms to demonstrate the efficacy of their approaches. The crucial aspects of scalability and efficiency were not addressed, and hence it is difficult to assess the suitability of these methods for real-world applications. We believe that the future trend is an interdisciplinary approach that deploys the data analysis modules on mature cloud platforms, which can provide a stable and robust environment while meeting the exploding demands of commercial applications.

      5.1.3. Subjects and Data Sets

Although some groups devoted their efforts to acquiring data of genuine falls, most researchers used data that contains simulated falls. Monitoring the lives of elderly people and waiting to capture real falls is very sensitive and time consuming. Having said that, with regards to sensor fusion by wearable devices, there have been some attempts to build data sets of genuine falls in real life. FARSEEING (Fall Repository for the design of Smart and self-adaptive Environments prolonging Independent living) is one such data set (Klenk et al., 2016). It is the largest data set of genuine real-life falls, and it is open to public research upon request on their website. From 2012 to 2015, more than 2,000 volunteers were involved and more than 300 real falls were collected in a collaboration of six institutions3.

As for fusion of visual sensors and combinations of other non-wearable sensors, it is much harder to acquire genuine data in real life. One group tried to collect real data with visual sensors, but only nine real falls by elderly people were captured over several years (Demiris et al., 2008). Nine falls are too few to train a meaningful model. As an alternative, Stone and Skubic (2015) hired trained stunt actors to simulate different kinds of falls and built a benchmark data set with 454 falls, including the nine real falls by elderly people.

      5.2. Local Communication Layer (LCL) Using Sensor Fusion

Data transmission for fall detection using sensor networks can be done in different ways. In particular, Bluetooth (Pierleoni et al., 2015; Yang et al., 2016), Wi-Fi, ZigBee (Hsieh et al., 2014), cellular networks via smartphones (Chen et al., 2018) and smartwatches (Kao et al., 2017), as well as wired connections have all been explored. Studies that used wearable devices mostly applied wireless methods, such as Bluetooth, which allow the subject to move unrestricted.

When it comes to wireless sensors, Bluetooth has become probably the most popular communication protocol, and it is widely used in existing commercial wearable products such as Shimmer. In the work by Yang et al. (2016), data is transmitted to a laptop in real time by a Bluetooth module built into a commercial wearable device named Shimmer 2R. The sampling rate can be customized; they chose a 32-Hz sampling rate instead of the default 51.2 Hz, since packets can be lost at high sampling frequencies and a higher sampling rate also means higher energy consumption. Bluetooth is also used to transmit data in non-commercial wearable devices. For example, Pierleoni et al. (2015) customized a wireless sensor node, where a sensor module, micro-controller, Bluetooth module, battery, mass-storage unit, and wireless receiver were integrated within a prototype device of size 70 × 45 × 30 mm. ZigBee was used to transmit data in the work by Hsieh et al. (2014). In Table 8, we compare different kinds of wireless communication protocols.
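
A back-of-the-envelope calculation makes the sampling-rate trade-off concrete. The 12-byte-per-sample payload below is our illustrative assumption, not a Shimmer 2R specification:

```python
# Rough throughput comparison for the two sampling rates mentioned above.
# The 12-byte-per-sample payload is an illustrative assumption
# (e.g., a timestamp plus three 16-bit axes plus framing overhead).
BYTES_PER_SAMPLE = 12

for rate_hz in (32.0, 51.2):
    bps = rate_hz * BYTES_PER_SAMPLE * 8
    print(f"{rate_hz:5.1f} Hz -> {bps:7.0f} bit/s "
          f"({bps / 1e6 * 100:.3f}% of a 1 Mbps Bluetooth link)")
```

Even the higher rate consumes a negligible fraction of the nominal link capacity, which suggests that the reported packet loss and energy cost stem from radio duty-cycling and retransmissions rather than raw bandwidth.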

Comparison of different kinds of communication protocols.

      Protocol Zigbee Bluetooth WiFi WiMax Cellular network
      Range 100 m 10 m 5 km 15 km 10–50 km
      Data rate 250–500 kbps 1–3 Mbps 1–450 Mbps 75 Mbps 240 kbps
Frequency band 2.4 GHz 2.4 GHz 2.4, 3.7, and 5 GHz 2.3, 2.5, and 3.5 GHz 824–894 MHz/1,900 MHz
      Energy consumption Low Medium High N/A N/A

As for data transmission in vision-based and ambient-based approaches, wired options are usually preferred. In the work by Spasova et al. (2016), a standard web camera was connected to the A13-OlinuXino board via USB and an infrared camera was connected via I2C (Inter-Integrated Circuit). Data and other messages were exchanged with the smart gateways through the internet.

For sensor fusion using different types of sensors, both wireless and wired methods were utilized because of the variety of data. In the work by Kwolek and Kepski (2014, 2016), the wearable device and the Kinect were connected to the PandaBoard through Bluetooth and USB, respectively, and the final decision was made based on the data collected from the two sensors. In the work by Li et al. (2018), the Kinect was connected to a PC through a USB interface, while smartphones were connected wirelessly. These two types of sensor, smartphone and Kinect, first monitored the same events separately; the methods that processed their signals sent their outputs to a Netty server through the Internet, where another method fused the outcomes of both to reach a final decision on whether the individual had fallen.

5.3. Information Processing Layer (IPL) Using Sensor Fusion

      5.3.1. Methods of Sensor Fusion

There are several criteria by which sensor fusion techniques can be grouped. Yang and Yang (2006) and Tsinganos and Skodras (2018) grouped them into three categories, namely direct data fusion, feature fusion, and decision fusion. We divide sensor fusion techniques into four groups, as shown in Figure 7: partial fusion, direct data fusion, feature fusion, and decision fusion.

Four kinds of sensor fusion methods: partial fusion, feature fusion, decision fusion, and data fusion. Partial fusion means that only a subset of the sensors is used to make the decision, while the other fusion techniques use all sensors as input.

In partial fusion, although multiple sensors are deployed, only one sensor is used to take the final decision, as in the work by Ma et al. (2019). They used an RGB and a thermal camera, with the thermal camera being used only for the localization of faces; falls were detected based only on the data collected from the regular RGB camera. A similar approach was applied by Spasova et al. (2016), where an infrared camera was deployed to confirm the presence of the subject and the data produced by the RGB camera was used to detect falls. Other works with wearable devices deployed the sensors at different stages. For instance, in Kepski and Kwolek (2014) and Kwolek and Kepski (2014), a fall detection system was built with a tri-axial accelerometer and an RGB-D camera: the accelerometer was deployed to detect the motion of the subject, and if the measured signal exceeded a given threshold, the Kinect was activated to capture the ongoing event.
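
The staged design can be sketched as follows. The threshold value and the two camera callbacks are hypothetical placeholders, not the cited authors' implementation:

```python
import math

IMPACT_THRESHOLD_G = 2.5  # illustrative trigger value, not from the cited work

def accel_magnitude(ax, ay, az, g=9.81):
    """Acceleration magnitude expressed in units of g."""
    return math.sqrt(ax**2 + ay**2 + az**2) / g

def partial_fusion_step(sample, capture_depth_frame, classify_depth_frame):
    """Partial fusion: the accelerometer gates the (expensive) camera.

    `capture_depth_frame` and `classify_depth_frame` stand in for the
    Kinect acquisition and the vision-based fall classifier.
    """
    if accel_magnitude(*sample) < IMPACT_THRESHOLD_G:
        return False              # quiescent: the camera is never consulted
    frame = capture_depth_frame() # potential fall: wake the visual sensor
    return classify_depth_frame(frame)

# Toy usage with stub callbacks (a ~3 g spike triggers the camera stage):
result = partial_fusion_step(
    (0.5, 0.3, 30.0),
    capture_depth_frame=lambda: "depth frame",
    classify_depth_frame=lambda f: True)
print(result)  # True
```

The appeal of this arrangement is energy efficiency: the cheap inertial sensor runs continuously while the power-hungry camera is only woken for candidate events.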

The second approach of sensor fusion is known as feature fusion. In this approach, feature extraction takes place on the signals that come from the different sensors; all features are then merged into long feature vectors and used to train classification models. Most of the studies that we reviewed applied feature fusion for wearable-based fall detection systems. In many commercial wearable devices, sensors such as accelerometers, gyroscopes, and magnetometers are built into one device. The data from these sensors is homogeneous and synchronous, sampled at the same frequency, and transmitted with built-in wireless modules. Having signals sampled at a synchronized frequency simplifies the fusion of the data. Statistical features, such as the mean, maximum, standard deviation, correlation, spectral entropy, spectral centroid, sum vector magnitude, the angle between the y-axis and the vertical direction, and differential sum vector magnitude, can be computed from the signals of accelerometers, magnetometers, and gyroscopes, and used as features to train a classification model that detects different types of falls (Yang et al., 2016; de Quadros et al., 2018; Gia et al., 2018).
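
A minimal sketch of feature fusion with NumPy and scikit-learn, assuming synchronized fixed-length windows of accelerometer and gyroscope samples. The features shown are a small subset of those listed above, and the random data and labels are placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(window):
    """Statistical features for one (n_samples, 3) sensor window."""
    svm = np.linalg.norm(window, axis=1)          # sum vector magnitude
    return np.array([window.mean(), window.std(),
                     window.max(), svm.mean(), svm.max()])

def fuse_features(accel_win, gyro_win):
    """Feature fusion: concatenate per-sensor features into one vector."""
    return np.concatenate([window_features(accel_win),
                           window_features(gyro_win)])

# Toy training loop on random windows (labels: 1 = fall, 0 = daily activity).
rng = np.random.default_rng(0)
X = np.stack([fuse_features(rng.normal(size=(128, 3)),
                            rng.normal(size=(128, 3))) for _ in range(200)])
y = rng.integers(0, 2, size=200)
clf = RandomForestClassifier(n_estimators=50).fit(X, y)
```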

Decision fusion is the third approach, where a chain of classifiers is used to come to a decision. A typical arrangement is to have one classification model per sensor type, with the outputs of these models used as input to a final classification model that takes the decision. Li et al. (2018) explored this approach with accelerometers embedded in smartphones and Kinect sensors, and Ozcan and Velipasalar (2016) deployed an accelerometer and an RGB camera for the detection of falls. In these studies, decisions are made separately based on the individual sensors, and the final decision is reached by combining the individual decisions.
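
Schematically, decision fusion can be sketched as a two-stage (stacked) classifier. The per-sensor models, feature dimensions, and combiner below are illustrative; in practice the combiner would be trained on held-out predictions to avoid overfitting:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(1)
Xa = rng.normal(size=(300, 10))   # accelerometer feature vectors
Xv = rng.normal(size=(300, 20))   # vision feature vectors
y = rng.integers(0, 2, size=300)  # 1 = fall, 0 = activity of daily living

# Stage 1: one classifier per sensor modality.
accel_clf = SVC(probability=True).fit(Xa, y)
vision_clf = LogisticRegression(max_iter=1000).fit(Xv, y)

# Stage 2: a combiner sees only the per-sensor fall probabilities,
# never the raw sensor data.
meta_X = np.column_stack([accel_clf.predict_proba(Xa)[:, 1],
                          vision_clf.predict_proba(Xv)[:, 1]])
combiner = LogisticRegression().fit(meta_X, y)

def detect_fall(xa, xv):
    """Final decision fused from the two per-sensor decisions."""
    probs = np.array([[accel_clf.predict_proba(xa.reshape(1, -1))[0, 1],
                       vision_clf.predict_proba(xv.reshape(1, -1))[0, 1]]])
    return bool(combiner.predict(probs)[0])
```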

The final approach is data fusion, in which the data from the different sensors is fused first and feature extraction is then performed on the fused data. This is in contrast to feature fusion, which assumes homogeneous, synchronous data at the same frequency; data fusion can be applied to sensors with different sampling frequencies and data characteristics, provided the streams can be synchronized and combined directly. Because of the difference in sampling rate between the Kinect camera and wearable sensors, it is challenging to fuse their data directly. To mitigate this difficulty, Gasparrini et al. (2015) adapted the transmission and exposure times of the Kinect camera with ad-hoc acquisition software to synchronize the RGB-D data with that of the wearable sensors.
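
One common way to fuse raw streams with different rates is to resample them onto a shared time base before joint feature extraction. The sketch below uses linear interpolation on synthetic signals; Gasparrini et al. (2015) instead synchronized at acquisition time, which this does not reproduce:

```python
import numpy as np

def resample(t_src, x_src, t_target):
    """Linearly interpolate a 1-D signal onto a common time base."""
    return np.interp(t_target, t_src, x_src)

# 100 Hz accelerometer magnitude and a 30 fps depth-derived height signal.
t_acc = np.arange(0, 5, 1 / 100)
acc = 9.81 + np.random.default_rng(2).normal(0, 0.2, t_acc.size)
t_cam = np.arange(0, 5, 1 / 30)
height = 1.7 - 0.01 * t_cam          # e.g., head height from depth maps

# Fuse at the data level: one synchronized multichannel array at 30 Hz.
fused = np.column_stack([resample(t_acc, acc, t_cam), height])
print(fused.shape)  # (150, 2): ready for joint feature extraction
```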

Kwolek and Kepski (2016) used both partial and feature fusion (see Table 7). They divided the procedure into two stages. In the first stage, only the accelerometer was used to flag a potential fall; the Kinect camera was then activated. In the second stage, features from both the Kinect camera and the accelerometer were extracted to classify the activity as a fall or a non-fall.

      5.3.2. Machine Learning, Deep Learning, and Deep Reinforcement Learning

In terms of fall detection techniques based on wearable sensor fusion, the explored methods include threshold-based approaches, traditional machine learning, and deep learning; the latter two are the most popular due to their robustness. Chelli and Pätzold (2019) applied both traditional machine learning [kNN, QSVM, Ensemble Bagged Tree (EBT)] and deep learning. Their experiments were divided into two parts, namely activity recognition and fall detection. For the former, traditional machine learning and deep learning outperformed the other approaches, achieving 94.1 and 93.2% accuracy, respectively. Queralta et al. (2019) applied a long short-term memory (LSTM) approach, where wearable nodes including an accelerometer, gyroscope, and magnetometer were embedded in a low-power wide-area network with combined edge and fog computing. The LSTM is a type of recurrent neural network aimed at long-sequence learning tasks. Their system achieved an average recall of 95% while providing a real-time fall detection solution running on cloud platforms. Another example is the work by Nukala et al. (2014), who fused the measurements of accelerometers and gyroscopes and applied an Artificial Neural Network (ANN) to model fall detection.
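
To make the sequence-modeling idea concrete, here is a minimal PyTorch sketch of an LSTM classifier over inertial windows. The 9-channel input (tri-axial accelerometer, gyroscope, and magnetometer), window length, and layer sizes are illustrative assumptions, not the architecture of Queralta et al. (2019):

```python
import torch
import torch.nn as nn

class FallLSTM(nn.Module):
    """Binary fall / no-fall classifier over inertial sensor windows."""
    def __init__(self, n_channels=9, hidden=64):
        # 9 channels: tri-axial accelerometer, gyroscope, magnetometer.
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):                # x: (batch, time, channels)
        _, (h_n, _) = self.lstm(x)       # h_n: (1, batch, hidden)
        return self.head(h_n[-1])        # logits: (batch, 2)

model = FallLSTM()
window = torch.randn(8, 128, 9)          # 8 windows of 128 time steps
logits = model(window)
print(logits.shape)                      # torch.Size([8, 2])
```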

As for visual-sensor-based fusion techniques, the limited studies included in our survey applied either traditional machine learning or deep learning approaches (Espinosa et al., 2019; Ma et al., 2019). Fusion of multiple visual sensors from a public data set was presented by Espinosa et al. (2019), where a 2D CNN was trained to classify falls during daily life activities.

Another approach is reinforcement learning (RL), a growing branch of machine learning that is gaining popularity in the fall detection field as well. Deep reinforcement learning (DRL) combines the advantages of deep learning and reinforcement learning, and has already shown its benefits in fall prevention (Namba and Yamada, 2018a,b; Yang, 2018) and fall detection (Yang, 2018). Namba and Yamada (2018a) proposed a fall-risk prevention approach using assistive robots for the elderly living independently, collecting images and movies with the location information of accidents. Most conventional machine learning and deep learning methods are, however, challenged when the operational environment changes. This is due to their data-driven nature: they learn to become robust mostly in the same environments where they were trained.

      5.3.3. Data Storage and Analysis

Typical data storage options include SD cards, local storage on the integration device, and remote storage in the cloud. For example, some studies used the camera and accelerometer in smartphones and stored the data on the local storage of the smartphones (Ozcan and Velipasalar, 2016; Shi et al., 2016; Medrano et al., 2017). Other studies applied offline methods, storing data on their own computers to be processed at a later stage. Alamri et al. (2013) argue that the sensor-cloud will become the future trend because cloud platforms can be more open and flexible than local platforms, which have limited storage and processing power.

      5.4. User Application Layer (UAL) of Sensor Fusion

Due to the rapid development of miniature bio-sensing devices, there has been a boom in wearable sensors and other fall detection modules. Wearable modules, such as Shimmer, embedded with sensors, communication protocols, and sufficient computational ability, are available as affordable commercial products. For example, some wearable-based applications have been applied to the detection of falls and to health monitoring in general. The target for wearable devices is to "wear and forget": examples include electronic skins (e-skins) that adhere to the body surface, and clothing-based or accessory-based devices for which proximity is sufficient. To reach this target, many efforts have been put into the study of wearable systems, such as the MyHeart project (Habetha, 2006), the Wearable Health Care System (WEALTHY) project (Paradiso et al., 2005), the Medical Remote Monitoring of Clothes (MERMOTH) project (Luprano, 2006), and the project by Pandian et al. (2008). Some wearable sensors have also been developed specifically for fall detection; Shibuya et al. (2015) used a wearable wireless gait sensor to detect falls. More and more research uses existing commercial wearable products, which include functions for data transmission and for sending alarms when falls are detected.

      5.4.1. Summary

Due to differences in sampling frequency and data characteristics, there are two main categories of sensor fusion. As shown in Tables 6, 7, sensor fusion studies are divided into fusion of sensors from the same category (e.g., fusion of wearable sensors, fusion of visual sensors, and fusion of ambient sensors) and fusion of sensors from different categories.

Subjects in fall detection studies using sensor networks are still mostly young and healthy volunteers, as in studies with individual sensors. Only one study (Liu et al., 2014) adopted mixed data containing both simulated and genuine falls.

More wearable-based approaches are embedded in IoT platforms than vision-based approaches, because data transmission and storage modules are built into existing commercial products.

For research combining sensors from different categories, the combination of an accelerometer and a Kinect camera is the most popular.

Partial fusion, data fusion, feature fusion, and decision fusion are the four main methods of sensor fusion. Among them, feature fusion is the most popular approach, followed by decision fusion. For fusion of non-vision wearable sensors, most of the studies that we reviewed applied feature fusion, while decision fusion is the most appealing for fusing sensors from different categories.

      6. Security and Privacy

Because the data generated by autonomous monitoring systems is security-critical and privacy-sensitive, there is an urgent need to protect users' privacy and to prevent these systems from being attacked. Cyberattacks on autonomous monitoring systems may cause physical or mental harm and even threaten the lives of the subjects under monitoring.

      6.1. Security

In this survey, we approached fall detection systems through different layers: the Physiological Sensing Layer (PSL), Local Communication Layer (LCL), Information Processing Layer (IPL), Internet Application Layer (IAL), and User Application Layer (UAL). Every layer faces security issues. For instance, information may leak in the LCL during data transmission, and there are potential vulnerabilities in cloud storage and processing facilities. Based on the literature that we report in Tables 3–7, most studies in the field of fall detection do not address security matters; only a few (Edgcomb and Vahid, 2012; Mastorakis and Makris, 2014; Ma et al., 2019) take privacy into consideration. Because of the distinct characteristics of wired and wireless transmission, it remains an open problem to find a comprehensive security protocol that covers the security issues of both wired and wireless data transmission and storage (Islam et al., 2015).

      6.2. Privacy

As mentioned above, privacy is one of the most important issues for users of autonomous health monitoring systems. Methods to protect privacy depend on the type of sensor used, and not all sensors suffer from privacy issues equally. For example, vision-based sensors, like RGB cameras, are more vulnerable than wearable sensors, such as accelerometers, in terms of privacy. In a detection system that uses only wearable sensors, privacy problems are not as critical as in systems that involve visual sensors.

To address the privacy concerns associated with RGB cameras, some researchers proposed mitigating them by blurring and distorting appearances as post-processing steps in the application layer (Edgcomb and Vahid, 2012). An alternative is to address privacy at the design stage, as suggested by Ma et al. (2019). They investigated an optical-level anonymous image sensing system: a thermal camera was deployed to locate faces and an RGB camera was used to detect falls. The location of the subject's face was used to generate a mask pattern on a spatial light modulator that controlled the light entering the RGB camera, so faces were blurred by blocking the visible light rays before they ever reached the sensor.
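
The optical-level masking of Ma et al. (2019) happens before light reaches the sensor and cannot be reproduced in software, but a software post-processing analogue in the spirit of Edgcomb and Vahid (2012) might look like the OpenCV sketch below. The Haar cascade file ships with OpenCV; the video file name is hypothetical:

```python
import cv2

def blur_faces(frame, face_cascade):
    """Post-processing privacy filter: blur detected face regions."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

# Haar cascade bundled with OpenCV; the video path is hypothetical.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture("monitoring_feed.mp4")
ok, frame = cap.read()
if ok:
    anonymized = blur_faces(frame, cascade)
```

Note that such post-processing protects downstream viewers only: the unblurred frames still exist on the device, which is precisely the weakness that optical-level designs avoid.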

The infrared camera is another sensor that can protect the privacy of subjects. Mastorakis and Makris (2014) investigated the infrared camera built into a Kinect sensor; it captures only the thermal distribution of subjects, so no information on the subject's appearance or living environment is involved. Depth cameras are other vision-based sensors that can protect privacy: the fact that they capture only depth information has made them more popular than RGB cameras.

In fall detection using sensor networks, different kinds of data are collected as more sensors are involved. With more data collection and transfer, the whole sensor fusion system becomes more complicated, which makes protecting privacy and security even harder. There is a trade-off between privacy and the benefits of autonomous monitoring systems; the aim is to keep improving the algorithms while keeping privacy and security issues to a minimum. This is the only way to make such systems socially acceptable.

      7. Projects and Applications Around Fall Detection

Approaches to fall detection have evolved from personal emergency response systems (PERS) to intelligent automatic ones. One of the early fall detection systems sent an alarm via a PERS push-button, but this may fail when the concerned person loses consciousness or is too weak to move (Leff, 1997). Numerous attempts have been made to monitor not only falls but also other specific activities in autonomous health monitoring, and many projects have been conducted to develop such applications, including fall detection, prediction, and prevention. Some of the aforementioned studies were promoted as commercial products, and wearable, visual, and ambient sensors have all been deployed in commercial fall detection applications; among these, wearable sensors have most often made it into useful applications. For example, a company named Shimmer has developed seven kinds of wearable sensing products aimed at autonomous health monitoring. One of these, the Shimmer3 IMU Development Kit, is a wearable sensor node with a sensing module, data transmission module, and receiver, and it has been used by Mahmud and Sirat (2015) and Djelouat et al. (2017). The iLife fall detection sensor, developed by AlertOne4, provides fall detection and a one-button alert system. The smartwatch is another commercial solution for fall detection: accelerometers embedded in smartwatches have been studied to detect falls (Kao et al., 2017; Wu et al., 2019), and the Apple Watch Series 4 and later versions are equipped with a fall detection function that can connect the user to emergency services. Although there are few specific commercial fall detection products based on RGB cameras, the relevant studies show a promising future for the field; open source solutions provided by Microsoft using Kinect can detect falls in real time and have the potential to be deployed as commercial products. As for ambient sensors, Linksys Aware applies tri-band mesh WiFi systems to fall detection and provides a premium subscription service as a commercial motion detection product. CodeBlue, a Harvard University research project, also focused on developing wireless sensor networks for medical applications (Lorincz et al., 2004). The MIThril project (DeVaul et al., 2003) is a next-generation wearable research platform developed by researchers at the MIT Media Lab, who made their software open source and their hardware specifications available to the public.

The Ivy project (Pister et al., 2003) is a sensor network infrastructure from the College of Engineering at the University of California, Berkeley. The project aims to develop a sensor network system that assists the elderly living independently. Using a network of fixed sensors and mobile sensors worn on the body, anomalies involving the concerned elderly person can be detected; once a fall is detected, the system sends alarms to caregivers so they can respond urgently.

A sensor network was built in 13 apartments in TigerPlace, an aging-in-place retirement community in Columbia, Missouri, and continuous data was collected for 3,339 days (Demiris et al., 2008). The network, comprising simple motion sensors, video sensors, and bed sensors that capture sleep restlessness, pulse, and respiration levels, was installed in the apartments of 14 volunteers. Activities of 16 elderly people in TigerPlace, aged 67 to 97, were recorded continuously and nine genuine falls were captured. Based on this data set, Li et al. (2013) developed a sensor fusion algorithm which achieved a low rate of false alarms and a high detection rate.

8. Trends and Open Challenges

      8.1. Trends

      8.1.1. Sensor Fusion

There seems to be a general consensus that sensor fusion provides a more robust approach to detecting elderly falls. Various sensors may complement each other in different situations. Thus, instead of relying on only one sensor, which may be unreliable when conditions are not suitable for it, the idea is to rely on different types of sensors that together can capture reliable data in various conditions. This results in a more robust system that keeps false alarms to a minimum while achieving high precision.

      8.1.2. Machine Learning, Deep Learning and Deep Reinforcement Learning

Conventional machine learning approaches have been widely applied in fall detection and activity recognition, and their results outperform those of threshold-based methods in studies that use wearable sensors. Deep learning is a subset of machine learning concerned with artificial neural networks inspired by the mammalian brain. Deep learning approaches are gaining popularity, especially for visual sensors and sensor fusion, and are becoming the state of the art for fall detection and other activity recognition. Deep reinforcement learning is another promising research direction for fall detection. Reinforcement learning is inspired by the psychological and neuro-scientific understanding of how humans adapt and optimize decisions in a changing environment. Deep reinforcement learning combines the advantages of deep learning and reinforcement learning, and can provide detection alternatives that adapt to changing conditions without sacrificing accuracy or robustness.

      8.1.3. Fall Detection Systems on 5G Wireless Networks

5G is a softwarized and virtualized wireless network, which includes both a physical network and software virtual network functions. In comparison to 4G networks, the 5th generation of mobile networks introduces data transmission with high speed and low latency, which could contribute to the development of fall detection with IoT systems. Firstly, 5G is envisioned to become an important and universal communication protocol for IoT. Secondly, 5G cellular can be used for passive sensing: unlike other RF-sensing approaches (e.g., WiFi or radar), which are aimed at short-distance indoor fall detection, the 5G wireless network can be applied to both indoor and outdoor scenarios as a pervasive sensing method. This type of network has already been successfully investigated by Gholampooryazdi et al. (2017) for crowd-size estimation, presence detection, and walking-speed estimation, with accuracies of 80.9, 92.8, and 95%, respectively. Thirdly, we expect 5G to become a highly efficient and accurate platform for anomaly detection. Smart networks or systems powered by 5G IoT and deep learning can be applied not only in fall detection, but also in other pervasive sensing and smart monitoring systems that help elderly people to live independently with a high quality of life.

      8.1.4. Personalized or Simulated Data

El-Bendary et al. (2013) and Namba and Yamada (2018b) proposed including historical medical and behavioral data of individuals along with sensor data. This enriches the data and consequently enables better informed decisions. This innovative perspective allows a more personalized approach, as it uses the health profile of the concerned individual, and it has the potential to become a trend in this field as well. Another trend could be the way data sets are created to evaluate fall detection systems. Mastorakis et al. (2007, 2018) applied skeletal models simulated in OpenSim, an open-source software package developed at Stanford University that can simulate different kinds of pre-defined skeletal models. They acquired 132 videos of different types of falls and trained their algorithms on those models. The high results that they report indicate that falls simulated with OpenSim are very realistic and, therefore, effective for training a fall detection model. Physics engines like OpenSim can simulate customized data based on the height and age of different subjects, offering new directions for fall detection. Another solution, which can potentially address the scarcity of data, is to develop algorithms that can adapt to subjects that were not part of the original training set (Deng et al., 2014; Namba and Yamada, 2018a,b), as we described in section 4.1.4.

      8.1.5. Fog Computing

Where architecture is concerned, fog computing offers the possibility to distribute different levels of processing across the involved edge devices in a decentralized way. Smart devices that can carry out some processing and communicate directly with each other are more attractive for (near) real-time processing than systems based on cloud computing (Queralta et al., 2019). An example of such a smart device is the Intel® RealSense™ depth camera, which includes a 28-nanometer (nm) processor to compute depth images in real time.
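
As a rough illustration of the fog principle, the sketch below keeps the raw sensor stream on the edge device and uplinks only event summaries. The class, threshold, and `send_to_cloud` callback are our own illustrative assumptions, not the architecture of Queralta et al. (2019):

```python
import time
from collections import deque

class EdgeNode:
    """Fog-style edge processing: classify locally, escalate only events.

    `send_to_cloud` is a placeholder for whatever uplink a deployment
    uses (LoRa, MQTT, HTTP); only candidate events leave the device.
    """
    def __init__(self, send_to_cloud, threshold_g=2.5, window=50):
        self.send = send_to_cloud
        self.threshold = threshold_g
        self.buffer = deque(maxlen=window)  # recent context for the alarm

    def on_sample(self, magnitude_g):
        self.buffer.append(magnitude_g)
        if magnitude_g > self.threshold:
            # The raw stream stays local; only the event summary is uplinked.
            self.send({"event": "possible_fall",
                       "t": time.time(),
                       "context": list(self.buffer)})

node = EdgeNode(send_to_cloud=print)
for g in (1.0, 1.1, 3.2):   # toy magnitudes in units of g
    node.on_sample(g)
```

Besides latency, this design also helps with the privacy concerns discussed in section 6, since raw sensor data never leaves the home network.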

      8.2. Open Challenges

The topic of fall detection has been studied extensively during the past two decades and many approaches have been proposed. The rapid development of new technologies keeps this topic very active in the research community. Although much progress has been made, there are still various open challenges, which we discuss below.

The rarity of data of real falls: There is no convincing public data set that could provide a gold standard. Many simulated data sets from individual sensors are available, but it is debatable whether models trained on data collected from young and healthy subjects can be applied to elderly people in real-life scenarios. To the best of our knowledge, only Liu et al. (2014) used a data set with nine real falls along with 445 simulated ones. Data sets with multiple sensors are even scarcer. There is, therefore, an urgent need to create a benchmark data set with data coming from multiple sensors.

Detection in real-time: The attempts that we have seen in the literature are largely based on offline methods that detect falls. While this is an important step, it is time for research to focus more on real-time systems that can be applied in the real world.

Security and privacy: We have seen little attention paid to the security and privacy concerns of fall detection approaches. Security and privacy is therefore another topic which, in our opinion, must be addressed in cohesion with fall detection methods.

Platform of sensor fusion: This is still a nascent topic with a lot of potential. Studies so far have treated it minimally, as they mostly focused on the analytics aspect of the problem. To bring solutions closer to the market, more holistic studies are needed to develop full information systems that can manage and transmit data in an efficient, effective, and secure way.

Limitation of location: Some sensors, such as visual ones, have limited coverage because they are fixed and static. It is necessary to develop fall detection systems that can be applied in both controlled (indoor) and uncontrolled (outdoor) environments.

Scalability and flexibility: With the increasing number of affordable sensors, there is a crucial need to study the scalability of fall detection systems, especially when inhomogeneous sensors are considered (Islam et al., 2015). There is an increasing demand for scalable fall detection approaches that do not sacrifice robustness or security. Considering cloud-based trends, fall detection modules, such as data transmission, processing, applications, and services, should be configurable and scalable in order to adapt to growing commercial demands. Cloud-based systems enable more scalable health monitoring at different levels, as the need for hardware and software resources changes with time, and they can add or remove sensors and services with little change to the architecture (Alamri et al., 2013).

      9. Summary and Conclusions

In this review, we give an account of fall detection systems from a holistic point of view that includes data collection, data management, data transmission, security and privacy, as well as applications.

In particular, we compare approaches that rely on individual sensors with those based on sensor networks and various fusion techniques. The survey describes the components of fall detection and aims to give a comprehensive understanding of the physical elements, software organization, working principles, techniques, and arrangement of the different components of fall detection systems.

      We draw the following conclusions.

The sensors and algorithms proposed during the past six years are very different from those of the research before 2014. Accelerometers are still the most popular sensors in wearable devices, while the Kinect took the place of the RGB camera as the most popular visual sensor. The combination of Kinect and accelerometer is turning out to be the most sought-after.

There is not yet a benchmark data set on which fall detection systems can be evaluated and compared, which creates a hurdle to advancing the field. Although there has been an attempt to use middle-aged subjects to simulate falls (Kangas et al., 2008), there are still behavioral differences between elderly and middle-aged subjects.

Sensor fusion seems to be the way forward. It provides more robust fall detection solutions, but comes with higher computational costs compared to systems that rely on individual sensors. The challenge is therefore to mitigate the computational costs.

Existing studies focus mainly on the data analytics aspect and pay little attention to the IoT platforms needed to build full and stable systems. Moreover, the effort is put into analyzing data in offline mode. To bring such systems to the market, more effort needs to be invested in building all the components that make up a robust, stable, and secure system, one that allows (near) real-time processing and gains the trust of elderly people.

The detection of elderly falls is an example of the potential of autonomous health monitoring systems. While the focus here was on elderly people, the same or similar systems can be applied to people with mobility problems. With the ongoing development of IoT devices, autonomous health monitoring and assistance systems that rely on such devices seem to be the key to detecting early signs of physical and cognitive problems, ranging from cardiovascular issues to mental disorders such as Alzheimer's disease and dementia.

      Author Contributions

      GA and XW conceived and planned the paper. XW wrote the manuscript in consultation with GA and JE. All authors listed in this paper have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

      Conflict of Interest

      The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

      References (2011). Sdufall. Available online at: http://www.sucro.org/homepage/wanghaibo/SDUFall.html (2014). Urfd. Available online at:https://sites.google.com/view/haibowang/home Abbate S. Avvenuti M. Bonatesta F. Cola G. Corsini P. Vecchio A. (2012). A smartphone-based fall detection system. Pervas. Mobile Comput. 8, 883899. 10.1016/j.pmcj.2012.08.003 Adhikari K. Bouchachia H. Nait-Charif H. (2017). “Activity recognition for indoor fall detection using convolutional neural network,” in 2017 Fifteenth IAPR International Conference on Machine Vision Applications (MVA) (Nagoya: IEEE), 8184. 10.23919/MVA.2017.7986795 Akagündüz E. Aslan M. Şengür A. Wang H. İnce M. C. (2017). Silhouette orientation volumes for efficient fall detection in depth videos. IEEE J. Biomed. Health Inform. 21, 756763. 10.1109/JBHI.2016.257030028113444 Alamri A. Ansari W. S. Hassan M. M. Hossain M. S. Alelaiwi A. Hossain M. A. (2013). A survey on sensor-cloud: architecture, applications, and approaches. Int. J. Distribut. Sensor Netw. 9, 917923. 10.1155/2013/917923 Amini A. Banitsas K. Cosmas J. (2016). “A comparison between heuristic and machine learning techniques in fall detection using kinect v2,” in 2016 IEEE International Symposium on Medical Measurements and Applications (MeMeA) (Benevento: IEEE), 16. 10.1109/MeMeA.2016.7533763 Aslan M. Sengur A. Xiao Y. Wang H. Ince M. C. Ma X. (2015). Shape feature encoding via fisher vector for efficient fall detection in depth-videos. Applied Soft. Comput. 37, 10231028. 10.1016/j.asoc.2014.12.035 Auvinet E. Multon F. Saint-Arnaud A. Rousseau J. Meunier J. (2011). Fall detection with multiple cameras: an occlusion-resistant method based on 3-D silhouette vertical distribution. IEEE Trans. Inform. Technol. Biomed. 15, 290300. 10.1109/TITB.2010.208738520952341 Aziz O. Musngi M. Park E. J. Mori G. Robinovitch S. N. (2017). A comparison of accuracy of fall detection algorithms (threshold-based vs. machine learning) using waist-mounted tri-axial accelerometer signals from a comprehensive set of falls and non-fall trials. Med. Biol. Eng. Comput. 55, 4555. 10.1007/s11517-016-1504-y27106749 Bian Z.-P. Hou J. Chau L.-P. Magnenat-Thalmann N. (2015). Fall detection based on body part tracking using a depth camera. IEEE J. Biomed. Health Inform. 19, 430439. 10.1109/JBHI.2014.231937224771601 Bloom D. E. Boersch-Supan A. McGee P. Seike A. (2011). Population aging: facts, challenges, and responses. Benefits Compens. Int. 41, 22. Boulard L. Baccaglini E. Scopigno R. (2014). “Insights into the role of feedbacks in the tracking loop of a modular fall-detection algorithm,” in 2014 IEEE Visual Communications and Image Processing Conference (Valletta: IEEE), 406409. 10.1109/VCIP.2014.7051592 Bourke A. O'brien J. Lyons G. (2007). Evaluation of a threshold-based tri-axial accelerometer fall detection algorithm. Gait Post. 26, 194199. 10.1016/j.gaitpost.2006.09.01217101272 Bourke A. K. Lyons G. M. (2008). A threshold-based fall-detection algorithm using a bi-axial gyroscope sensor. Med. Eng. Phys. 30, 8490. 10.1016/j.medengphy.2006.12.00117222579 Cai Z. Han J. Liu L. Shao L. (2017). RGB-D datasets using Microsoft Kinect or similar sensors: a survey. Multimedia Tools Appl. 76, 43134355. 10.1007/s11042-016-3374-6 Charfi I. Miteran J. Dubois J. Atri M. Tourki R. (2012). Definition and performance evaluation of a robust SVM based fall detection solution. SITIS 12, 218224. 10.1109/SITIS.2012.155 Chaudhuri S. Thompson H. Demiris G. (2014). 
Fall detection devices and their use with older adults: a systematic review. J. Geriatr. Phys. Ther. 37, 178. 10.1519/JPT.0b013e3182abe77924406708 Chelli A. Pätzold M. (2019). A machine learning approach for fall detection and daily living activity recognition. IEEE Access 7, 3867038687. 10.1109/ACCESS.2019.2906693 Chen C. Jafari R. Kehtarnavaz N. (2015). “UTD-MHAD: a multimodal dataset for human action recognition utilizing a depth camera and a wearable inertial sensor,” in 2015 IEEE International Conference on Image Processing (ICIP) (Quebec City: IEEE), 168172. 10.1109/ICIP.2015.7350781 Chen C. Jafari R. Kehtarnavaz N. (2017a). A survey of depth and inertial sensor fusion for human action recognition. Multimedia Tools Appl. 76, 44054425. 10.1007/s11042-015-3177-1 Chen K.-H. Hsu Y.-W. Yang J.-J. Jaw F.-S. (2017b). Enhanced characterization of an accelerometer-based fall detection algorithm using a repository. Instrument. Sci. Technol. 45, 382391. 10.1080/10739149.2016.1268155 Chen K.-H. Hsu Y.-W. Yang J.-J. Jaw F.-S. (2018). Evaluating the specifications of built-in accelerometers in smartphones on fall detection performance. Instrument. Sci. Technol. 46, 194206. 10.1080/10739149.2017.1363054 Chua J.-L. Chang Y. C. Lim W. K. (2015). A simple vision-based fall detection technique for indoor video surveillance. Signal Image Video Process. 9, 623633. 10.1007/s11760-013-0493-7 Daher M. Diab A. El Najjar M. E. B. Khalil M. A. Charpillet F. (2017). Elder tracking and fall detection system using smart tiles. IEEE Sens. J. 17, 469479. 10.1109/JSEN.2016.2625099 Dai J. Bai X. Yang Z. Shen Z. Xuan D. (2010). “PerfallD: a pervasive fall detection system using mobile phones,” in 2010 8th IEEE International Conference on Pervasive Computing and Communications Workshops (PERCOM Workshops) (Mannheim: IEEE), 292297. de Araújo Í. L. Dourado L. Fernandes L. Andrade R. M. C. Aguilar P. A. C. (2018). “An algorithm for fall detection using data from smartwatch,” in 2018 13th Annual Conference on System of Systems Engineering (SoSE) (Paris: IEEE), 124131. 10.1109/SYSOSE.2018.8428786 de Quadros T. Lazzaretti A. E. Schneider F. K. (2018). A movement decomposition and machine learning-based fall detection system using wrist wearable device. IEEE Sens. J. 18, 50825089. 10.1109/JSEN.2018.2829815 Demiris G. Hensel B. K. Skubic M. Rantz M. (2008). Senior residents' perceived need of and preferences for “smart home” sensor technologies. Int. J. Technol. Assess. Health Care 24, 120124. 10.1017/S026646230708015418218177 Deng W.-Y. Zheng Q.-H. Wang Z.-M. (2014). Cross-person activity recognition using reduced kernel extreme learning machine. Neural Netw. 53, 17. 10.1016/j.neunet.2014.01.00824513850 DeVaul R. Sung M. Gips J. Pentland A. (2003). “Mithril 2003: applications and architecture,” in Null (White Plains, NY: IEEE), 4. 10.1109/ISWC.2003.1241386 Diraco G. Leone A. Siciliano P. (2010). “An active vision system for fall detection and posture recognition in elderly healthcare,” in 2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010) (Dresden: IEEE), 15361541. 10.1109/DATE.2010.5457055 Djelouat H. Baali H. Amira A. Bensaali F. (2017). “CS-based fall detection for connected health applications,” in 2017 Fourth International Conference on Advances in Biomedical Engineering (ICABME) (Beirut: IEEE), 14. 10.1109/ICABME.2017.8167540 Edgcomb A. Vahid F. (2012). Privacy perception and fall detection accuracy for in-home video assistive monitoring with privacy enhancements. ACM SIGHIT Rec. 2, 615. 
10.1145/2384556.2384557 El-Bendary N. Tan Q. Pivot F. C. Lam A. (2013). Fall detection and prevention for the elderly: a review of trends and challenges. Int. J. Smart Sens. Intell. Syst. 6. 10.21307/ijssis-2017-588 Espinosa R. Ponce H. Gutiérrez S. Martínez-Villaseñor L. Brieva J. Moya-Albor E. (2019). A vision-based approach for fall detection using multiple cameras and convolutional neural networks: a case study using the up-fall detection dataset. Comput. Biol. Med. 115:103520. 10.1016/j.compbiomed.2019.10352031698242 Feng W. Liu R. Zhu M. (2014). Fall detection for elderly person care in a vision-based home surveillance environment using a monocular camera. Signal Image Video Process. 8, 11291138. 10.1007/s11760-014-0645-4 Gasparrini S. Cippitelli E. Gambi E. Spinsante S. Wåhslén J. Orhan I. . (2015). “Proposal and experimental evaluation of fall detection solution based on wearable and depth data fusion,” in International Conference on ICT Innovations (Ohrid: Springer), 99108. 10.1007/978-3-319-25733-4_11 Gasparrini S. Cippitelli E. Spinsante S. Gambi E. (2014). A depth-based fall detection system using a kinect® sensor. Sensors 14, 27562775. 10.3390/s14020275624521943 Gharghan S. Mohammed S. Al-Naji A. Abu-AlShaeer M. Jawad H. Jawad A. . (2018). Accurate fall detection and localization for elderly people based on neural network and energy-efficient wireless sensor network. Energies 11, 2866. 10.3390/en11112866 Gholampooryazdi B. Singh I. Sigg S. (2017). “5G ubiquitous sensing: passive environmental perception in cellular systems,” in 2017 IEEE 86th Vehicular Technology Conference (VTC-Fall) (Toronto: IEEE), 16. 10.1109/VTCFall.2017.8288261 Gia T. N. Sarker V. K. Tcarenko I. Rahmani A. M. Westerlund T. Liljeberg P. . (2018). Energy efficient wearable sensor node for iot-based fall detection systems. Microprocess. Microsyst. 56, 3446. 10.1016/j.micpro.2017.10.014 Guo B. Zhang Y. Zhang D. Wang Z. (2019). Special issue on device-free sensing for human behavior recognition. Pers. Ubiquit. Comput. 23, 12. 10.1007/s00779-019-01201-8 Habetha J. (2006). “The myheart project-fighting cardiovascular diseases by prevention and early diagnosis,” in Engineering in Medicine and Biology Society, 2006. EMBS'06. 28th Annual International Conference of the IEEE (New York, NY: IEEE), 67466749. 10.1109/IEMBS.2006.26093717959502 Han Q. Zhao H. Min W. Cui H. Zhou X. Zuo K. . (2020). A two-stream approach to fall detection with mobileVGG. IEEE Access 8, 1755617566. 10.1109/ACCESS.2019.2962778 Hao Z. Duan Y. Dang X. Xu H. (2019). “KS-fall: Indoor human fall detection method under 5GHZ wireless signals,” in IOP Conference Series: Materials Science and Engineering, Vol. 569 (Sanya: IOP Publishing), 032068. 10.1088/1757-899X/569/3/032068 Hori T. Nishida Y. Aizawa H. Murakami S. Mizoguchi H. (2004). “Sensor network for supporting elderly care home,” in Sensors, 2004, Proceedings of IEEE (Vienna: IEEE), 575578. 10.1109/ICSENS.2004.1426230 Hsieh S.-L. Chen C.-C. Wu S.-H. Yue T.-W. (2014). “A wrist-worn fall detection system using accelerometers and gyroscopes,” in Proceedings of the 11th IEEE International Conference on Networking, Sensing and Control (Miami: IEEE), 518523. 10.1109/ICNSC.2014.6819680 Huang Y. Chen W. Chen H. Wang L. Wu K. (2019). “G-fall: device-free and training-free fall detection with geophones,” in 2019 16th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON) (Boston, MA: IEEE), 19. 10.1109/SAHCN.2019.8824827 Igual R. Medrano C. Plaza I. (2013). 
Challenges, issues and trends in fall detection systems. Biomed. Eng. Online 12, 66. 10.1186/1475-925X-12-6623829390 Islam S. R. Kwak D. Kabir M. H. Hossain M. Kwak K.-S. (2015). The internet of things for health care: a comprehensive survey. IEEE Access 3, 678708. 10.1109/ACCESS.2015.2437951 Islam Z. Z. Tazwar S. M. Islam M. Z. Serikawa S. Ahad M. A. R. (2017). “Automatic fall detection system of unsupervised elderly people using smartphone,” in 5th IIAE International Conference on Intelligent Systems and Image Processing 2017 (Hawaii), 5. 10.12792/icisip2017.077 Kangas M. Konttila A. Lindgren P. Winblad I. Jämsä T. (2008). Comparison of low-complexity fall detection algorithms for body attached accelerometers. Gait Post. 28, 285291. 10.1016/j.gaitpost.2008.01.00318294851 Kao H.-C. Hung J.-C. Huang C.-P. (2017). “GA-SVM applied to the fall detection system,” in 2017 International Conference on Applied System Innovation (ICASI) (Sapporo: IEEE), 436439. 10.1109/ICASI.2017.7988446 Kepski M. Kwolek B. (2014). “Fall detection using ceiling-mounted 3D depth camera,” in 2014 International Conference on Computer Vision Theory and Applications (VISAPP), Vol. 2 (Lisbon: IEEE), 640647.25570072 Kerdjidj O. Ramzan N. Ghanem K. Amira A. Chouireb F. (2020). Fall detection and human activity classification using wearable sensors and compressed sensing. J. Ambient Intell. Human. Comput. 11, 349361. 10.1007/s12652-019-01214-4 Khojasteh S. Villar J. Chira C. González V. de la Cal E. (2018). Improving fall detection using an on-wrist wearable accelerometer. Sensors 18:1350. 10.3390/s1805135029701721 Klenk J. Schwickert L. Palmerini L. Mellone S. Bourke A. Ihlen E. A. . (2016). The farseeing real-world fall repository: a large-scale collaborative database to collect and share sensor signals from real-world falls. Eur. Rev. Aging Phys. Activity 13:8. 10.1186/s11556-016-0168-927807468 Ko M. Kim S. Kim M. Kim K. (2018). A novel approach for outdoor fall detection using multidimensional features from a single camera. Appl. Sci. 8:984. 10.3390/app8060984 Kong Y. Huang J. Huang S. Wei Z. Wang S. (2019). Learning spatiotemporal representations for human fall detection in surveillance video. J. Visual Commun. Image Represent. 59, 215230. 10.1016/j.jvcir.2019.01.024 Kumar D. P. Yun Y. Gu I. Y.-H. (2016). “Fall detection in RGB-D videos by combining shape and motion features,” in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (Shanghai: IEEE), 13371341. 10.1109/ICASSP.2016.7471894 Kumar S. V. Manikandan K. Kumar N. (2014). “Novel fall detection algorithm for the elderly people,” in 2014 International Conference on Science Engineering and Management Research (ICSEMR) (Shanghai: IEEE), 13. 10.1109/ICSEMR.2014.7043578 Kwolek B. Kepski M. (2014). Human fall detection on embedded platform using depth maps and wireless accelerometer. Comput. Methods Programs Biomed. 117, 489501. 10.1016/j.cmpb.2014.09.00525308505 Kwolek B. Kepski M. (2016). Fuzzy inference-based fall detection using kinect and body-worn accelerometer. Appl. Soft Comput. 40, 305318. 10.1016/j.asoc.2015.11.031 LeCun Y. Bengio Y. Hinton G. (2015). Deep learning. Nature 521, 436444. 10.1038/nature1453926017442 Leff B. (1997). Persons found in their homes helpless or dead. J. Am. Geriatr. Soc. 45, 393394. 10.1111/j.1532-5415.1997.tb03788.x Li Q. Stankovic J. A. Hanson M. A. Barth A. T. Lach J. Zhou G. (2009). 
“Accurate, fast fall detection using gyroscopes and accelerometer-derived posture information,” in 2009 Sixth International Workshop on Wearable and Implantable Body Sensor Networks (Berkeley, CA: IEEE), 138143. 10.1109/BSN.2009.46 Li X. Nie L. Xu H. Wang X. (2018). “Collaborative fall detection using smart phone and kinect,” in Mobile Networks and Applications, eds H. Janicke, D. Katsaros, T. J. Cruz, Z. M. Fadlullah, A.-S. K. Pathan, K. Singh et al. (Springer), 114. 10.1007/s11036-018-0998-y Li Y. Banerjee T. Popescu M. Skubic M. (2013). “Improvement of acoustic fall detection using kinect depth sensing,” in 2013 35th Annual International Conference of the IEEE Engineering in medicine and biology society (EMBC) (Osaka: IEEE), 67366739.24111289 Liu L. Popescu M. Skubic M. Rantz M. (2014). “An automatic fall detection framework using data fusion of Doppler radar and motion sensor network,” in 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (Chicago, IL: IEEE), 59405943.25571349 Lord C. J. Colvin D. P. (1991). “Falls in the elderly: detection and assessment,” in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (Orlando, FL: IEEE), 19381939. Lorincz K. Malan D. J. Fulford-Jones T. R. Nawoj A. Clavel A. Shnayder V. . (2004). Sensor networks for emergency response: challenges and opportunities. IEEE Pervas. Comput. 3, 1623. 10.1109/MPRV.2004.18 Luprano J. (2006). “European projects on smart fabrics, interactive textiles: Sharing opportunities and challenges,” in Workshop Wearable Technol. Intel. Textiles (Helsinki). Ma C. Shimada A. Uchiyama H. Nagahara H. Taniguchi R.-i. (2019). Fall detection using optical level anonymous image sensing system. Optics Laser Technol. 110, 4461. 10.1016/j.optlastec.2018.07.013 Ma X. Wang H. Xue B. Zhou M. Ji B. Li Y. (2014). Depth-based human fall detection via shape features and improved extreme learning machine. IEEE J. Biomed. Health Inform. 18, 19151922. 10.1109/JBHI.2014.230435725375688 Mahmud F. Sirat N. S. (2015). Evaluation of three-axial wireless-based accelerometer for fall detection analysis. Int. J. Integr. Eng. 7, 1520. Martínez-Villaseñor L. Ponce H. Brieva J. Moya-Albor E. Núñez-Martínez J. Peñafort-Asturiano C. (2019). Up-fall detection dataset: a multimodal approach. Sensors 19:1988. 10.3390/s1909198831035377 Mastorakis G. Ellis T. Makris D. (2018). Fall detection without people: a simulation approach tackling video data scarcity. Expert Syst. Appl. 112, 125137. 10.1016/j.eswa.2018.06.019 Mastorakis G. Hildenbrand X. Grand K. Makris D. (2007). Customisable fall detection: a hybrid approach using physics based simulation and machine learning. IEEE Trans. Biomed. Eng. 54, 19401950. Mastorakis G. Makris D. (2014). Fall detection system using kinect's infrared sensor. J. Realtime Image Process. 9, 635646. 10.1007/s11554-012-0246-9 Medrano C. Igual R. García-Magariño I. Plaza I. Azuara G. (2017). Combining novelty detectors to improve accelerometer-based fall detection. Med. Biol. Eng. Comput. 55, 18491858. 10.1007/s11517-017-1632-z28251444 Min W. Yao L. Lin Z. Liu L. (2018). Support vector machine approach to fall recognition based on simplified expression of human skeleton action and fast detection of start key frame using torso angle. IET Comput. Vis. 12, 11331140. 10.1049/iet-cvi.2018.5324 Namba T. Yamada Y. (2018a). Fall risk reduction for the elderly by using mobile robots based on deep reinforcement learning. J. Robot. Network. Artif. 
Life 4, 265–269. 10.2991/jrnal.2018.4.4.2
Namba T., Yamada Y. (2018b). Risks of deep reinforcement learning applied to fall prevention assist by autonomous mobile robots in the hospital. Big Data Cogn. Comput. 2:13. 10.3390/bdcc2020013
Niu K., Zhang F., Xiong J., Li X., Yi E., Zhang D. (2018). "Boosting fine-grained activity sensing by embracing wireless multipath effects," in Proceedings of the 14th International Conference on emerging Networking EXperiments and Technologies (Heraklion), 139–151. 10.1145/3281411.3281425
Nukala B., Shibuya N., Rodriguez A., Tsay J., Nguyen T., Zupancic S., et al. (2014). "A real-time robust fall detection system using a wireless gait analysis sensor and an artificial neural network," in 2014 IEEE Healthcare Innovation Conference (HIC) (Seattle: IEEE), 219–222. 10.1109/HIC.2014.7038914
Ofli F., Chaudhry R., Kurillo G., Vidal R., Bajcsy R. (2013). "Berkeley MHAD: a comprehensive multimodal human action database," in 2013 IEEE Workshop on Applications of Computer Vision (WACV) (Clearwater Beach, FL: IEEE), 53–60. 10.1109/WACV.2013.6474999
Ozcan K., Velipasalar S. (2016). Wearable camera- and accelerometer-based fall detection on portable devices. IEEE Embed. Syst. Lett. 8, 6–9. 10.1109/LES.2015.2487241
Ozcan K., Velipasalar S., Varshney P. K. (2017). Autonomous fall detection with wearable cameras by using relative entropy distance measure. IEEE Trans. Hum. Mach. Syst. 47, 31–39. 10.1109/THMS.2016.2620904
Palipana S., Rojas D., Agrawal P., Pesch D. (2018). FallDeFi: ubiquitous fall detection using commodity Wi-Fi devices. Proc. ACM Interact. Mobile Wearable Ubiquit. Technol. 1, 1–25. 10.1145/3161183
Pandian P., Mohanavelu K., Safeer K., Kotresh T., Shakunthala D., Gopal P., et al. (2008). Smart Vest: wearable multi-parameter remote physiological monitoring system. Med. Eng. Phys. 30, 466–477. 10.1016/j.medengphy.2007.05.014
Paradiso R., Loriga G., Taccini N. (2005). A wearable health care system based on knitted integrated sensors. IEEE Trans. Inform. Technol. Biomed. 9, 337–344. 10.1109/TITB.2005.854512
Pierleoni P., Belli A., Palma L., Pellegrini M., Pernini L., Valenti S. (2015). A high reliability wearable device for elderly fall detection. IEEE Sens. J. 15, 4544–4553. 10.1109/JSEN.2015.2423562
Pister K., Hohlt B., Ieong I., Doherty L., Vainio I. (2003). Ivy: A Sensor Network Infrastructure for the College of Engineering. Available online at: http://www-bsac.eecs.berkeley.edu/projects/ivy
Putra I., Brusey J., Gaura E., Vesilo R. (2017). An event-triggered machine learning approach for accelerometer-based fall detection. Sensors 18, 20. 10.3390/s18010020
Queralta J. P., Gia T., Tenhunen H., Westerlund T. (2019). "Edge-AI in LoRa-based health monitoring: fall detection system with fog computing and LSTM recurrent neural networks," in 2019 42nd International Conference on Telecommunications and Signal Processing (TSP) (IEEE), 601–604. 10.1109/TSP.2019.8768883
Ray P. P. (2014). "Home Health Hub Internet of Things (H3IoT): an architectural framework for monitoring health of elderly people," in 2014 International Conference on Science Engineering and Management Research (ICSEMR) (IEEE), 1–3. 10.1109/ICSEMR.2014.7043542
Rougier C., Auvinet E., Rousseau J., Mignotte M., Meunier J. (2011a). "Fall detection from depth map video sequences," in International Conference on Smart Homes and Health Telematics (Montreal: Springer), 121–128. 10.1007/978-3-642-21535-3_16
Rougier C., Meunier J., St-Arnaud A., Rousseau J. (2011b). Robust video surveillance for fall detection based on human shape deformation. IEEE Trans. Circ. Syst. Video Technol. 21, 611–622. 10.1109/TCSVT.2011.2129370
Sabatini A. M., Ligorio G., Mannini A., Genovese V., Pinna L. (2016). Prior-to- and post-impact fall detection using inertial and barometric altimeter measurements. IEEE Trans. Neural Syst. Rehabil. Eng. 24, 774–783. 10.1109/TNSRE.2015.2460373
Saleh M., Jeannès R. L. B. (2019). Elderly fall detection using wearable sensors: a low cost highly accurate algorithm. IEEE Sens. J. 19, 3156–3164. 10.1109/JSEN.2019.2891128
Schwickert L., Becker C., Lindemann U., Maréchal C., Bourke A., Chiari L., et al. (2013). Fall detection with body-worn sensors. Z. Gerontol. Geriatr. 46, 706–719. 10.1007/s00391-013-0559-8
Senouci B., Charfi I., Heyrman B., Dubois J., Miteran J. (2016). Fast prototyping of a SoC-based smart-camera: a real-time fall detection case study. J. Real Time Image Process. 12, 649–662. 10.1007/s11554-014-0456-4
Shi T., Sun X., Xia Z., Chen L., Liu J. (2016). Fall detection algorithm based on triaxial accelerometer and magnetometer. Eng. Lett. 24:EL_24_2_06.
Shibuya N., Nukala B. T., Rodriguez A., Tsay J., Nguyen T. Q., Zupancic S., et al. (2015). "A real-time fall detection system using a wearable gait analysis sensor and a support vector machine (SVM) classifier," in 2015 Eighth International Conference on Mobile Computing and Ubiquitous Networking (ICMU) (IEEE), 66–67. 10.1109/ICMU.2015.7061032
Shojaei-Hashemi A., Nasiopoulos P., Little J. J., Pourazad M. T. (2018). "Video-based human fall detection in smart homes using deep learning," in 2018 IEEE International Symposium on Circuits and Systems (ISCAS) (Florence: IEEE), 1–5. 10.1109/ISCAS.2018.8351648
Spasova V., Iliev I., Petrova G. (2016). Privacy preserving fall detection based on simple human silhouette extraction and a linear support vector machine. Int. J. Bioautomat. 20, 237–252.
Stone E. E., Skubic M. (2015). Fall detection in homes of older adults using the Microsoft Kinect. IEEE J. Biomed. Health Inform. 19, 290–301. 10.1109/JBHI.2014.2312180
Sucerquia A., López J., Vargas-Bonilla J. (2018). Real-life/real-time elderly fall detection with a triaxial accelerometer. Sensors 18:1101. 10.3390/s18041101
Thilo F. J., Hahn S., Halfens R. J., Schols J. M. (2019). Usability of a wearable fall detection prototype from the perspective of older people: a real field testing approach. J. Clin. Nurs. 28, 310–320. 10.1111/jocn.14599
Tian Y., Lee G.-H., He H., Hsu C.-Y., Katabi D. (2018). RF-based fall monitoring using convolutional neural networks. Proc. ACM Interact. Mobile Wearable Ubiquitous Technol. 2, 1–24. 10.1145/3264947
Tsinganos P., Skodras A. (2018). On the comparison of wearable sensor data fusion to a single sensor machine learning technique in fall detection. Sensors 18, 592. 10.3390/s18020592
Wang H., Zhang D., Wang Y., Ma J., Wang Y., Li S. (2017a). RT-Fall: a real-time and contactless fall detection system with commodity WiFi devices. IEEE Trans. Mob. Comput. 16, 511–526. 10.1109/TMC.2016.2557795
Wang Y., Wu K., Ni L. M. (2017b). WiFall: device-free fall detection by wireless networks. IEEE Trans. Mobile Comput. 16, 581–594. 10.1109/TMC.2016.2557792
WHO (2018). Falls. Available online at: https://www.who.int/news-room/fact-sheets/detail/falls
Williams G., Doughty K., Cameron K., Bradley D. (1998). "A smart fall and activity monitor for telecare applications," in Proceedings of the 20th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vol. 20: Biomedical Engineering Towards the Year 2000 and Beyond (Cat. No. 98CH36286), Volume 3 (IEEE), 1151–1154. 10.1109/IEMBS.1998.747074
Wu F., Zhao H., Zhao Y., Zhong H. (2015). Development of a wearable-sensor-based fall detection system. Int. J. Telemed. Appl. 2015:2. 10.1155/2015/576364
Wu T., Gu Y., Chen Y., Xiao Y., Wang J. (2019). A mobile cloud collaboration fall detection system based on ensemble learning. arXiv [Preprint]. arXiv:1907.04788.
Xi X., Jiang W. Z., Miran S. M., Luo Z.-Z. (2020). Daily activity monitoring and fall detection based on surface electromyography and plantar pressure. Complexity 2020:9532067. 10.1155/2020/9532067
Xi X., Tang M., Miran S. M., Luo Z. (2017). Evaluation of feature extraction and recognition for activity monitoring and fall detection based on wearable sEMG sensors. Sensors 17, 1229. 10.3390/s17061229
Xu T., Zhou Y., Zhu J. (2018). New advances and challenges of fall detection systems: a survey. Appl. Sci. 8, 418. 10.3390/app8030418
Yang G. (2018). A Study on Autonomous Motion Planning of Mobile Robot by Use of Deep Reinforcement Learning for Fall Prevention in Hospital. Japan: JUACEP Independent Research Report, Nagoya University.
Yang G.-Z., Yang G. (2006). Body Sensor Networks. Springer. 10.1007/1-84628-484-8
Yang K., Ahn C. R., Vuran M. C., Aria S. S. (2016). Semi-supervised near-miss fall detection for ironworkers with a wearable inertial measurement unit. Automat. Construct. 68, 194–202. 10.1016/j.autcon.2016.04.007
Yang S.-W., Lin S.-K. (2014). Fall detection for multiple pedestrians using depth image processing technique. Comput. Methods Programs Biomed. 114, 172–182. 10.1016/j.cmpb.2014.02.001
Yazar A., Erden F., Cetin A. E. (2014). "Multi-sensor ambient assisted living system for fall detection," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-14) (Florence), 1–3.
Yun Y., Innocenti C., Nero G., Lindén H., Gu I. Y.-H. (2015). "Fall detection in RGB-D videos for elderly care," in 2015 17th International Conference on E-health Networking, Application & Services (HealthCom) (Boston, MA: IEEE), 422–427.
Zhang L., Wang C., Ma M., Zhang D. (2019). WiDIGR: direction-independent gait recognition system using commercial Wi-Fi devices. IEEE Internet Things J. 7, 1178–1191. 10.1109/JIOT.2019.2953488
Zhang T., Wang J., Liu P., Hou J. (2006). Fall detection by embedding an accelerometer in cellphone and using KFD algorithm. Int. J. Comput. Sci. Netw. Security 6, 277–284.
Zhang Z., Conly C., Athitsos V. (2014). "Evaluating depth-based computer vision methods for fall detection under occlusions," in International Symposium on Visual Computing (Las Vegas: Springer), 196–207. 10.1007/978-3-319-14364-4_19
Zhang Z., Conly C., Athitsos V. (2015). "A survey on vision-based fall detection," in Proceedings of the 8th ACM International Conference on PErvasive Technologies Related to Assistive Environments (Las Vegas: ACM), 46. 10.1145/2769493.2769540
Zhao M., Li T., Abu Alsheikh M., Tian Y., Zhao H., Torralba A., et al. (2018). "Through-wall human pose estimation using radio signals," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Long Beach, CA), 7356–7365. 10.1109/CVPR.2018.00768
Zitouni M., Pan Q., Brulin D., Campo E. (2019). Design of a smart sole with advanced fall detection algorithm. J. Sensor Technol. 9:71. 10.4236/jst.2019.94007

      1. https://chinapower.csis.org/aging-problem/

      2. https://www.google.com/trends

      3. (1) Robert-Bosch Hospital (RBMF), Germany; (2) University of Tübingen, Germany; (3) University of Nürnberg/Erlangen, Germany; (4) German Sport University Cologne, Germany; (5) Bethanien-Hospital/Geriatric Center at the University of Heidelberg, Germany; (6) University of Auckland, New Zealand.

      4. https://www.alert-1.com/

      Funding. XW holds a fellowship (grant number: 201706340160) from the China Scholarship Council (CSC), supplemented by the University of Groningen. The support provided by the CSC during the study at the University of Groningen is gratefully acknowledged.
