Frontiers in Imaging (Front. Imaging), ISSN 2813-3315, Frontiers Media S.A. doi: 10.3389/fimag.2025.1504551. Original Research.

High-quality deepfakes have a heart!

Clemens Seibold1,2, Eric L. Wisotzky1,2, Arian Beckmann1, Benjamin Kossack1, Anna Hilsmann1 and Peter Eisert1,2*

1Computer Vision & Graphics, Vision & Imaging Technologies, Fraunhofer Heinrich-Hertz-Institute HHI, Berlin, Germany
2Visual Computing, Department of Computer Science, Humboldt University, Berlin, Germany

Edited by: Matteo Ferrara, University of Bologna, Italy

Reviewed by: Deepayan Bhowmik, Newcastle University, United Kingdom

Giuseppe Boccignone, University of Milan, Italy

*Correspondence: Peter Eisert peter.eisert@hhi.fraunhofer.de

†These authors have contributed equally to this work

Received: 30 September 2024; Accepted: 25 February 2025; Published: 30 April 2025. Front. Imaging 4:1504551.

Copyright © 2025 Seibold, Wisotzky, Beckmann, Kossack, Hilsmann and Eisert.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Introduction

Deepfakes have become ubiquitous in our modern society, with both their quantity and quality increasing. The current evolution of image generation techniques makes the detection of manipulated content through visual inspection increasingly difficult. This challenge has motivated researchers to analyze heart-beat-related signals to distinguish deepfakes from genuine videos.

Methods

In this study, we analyze deepfake videos of faces generated with novel methods regarding their heart-beat-related signals using remote photoplethysmography (rPPG). The rPPG signal describes local blood volume changes caused by the blood flow and thus reflects the pulse signal. For our analysis, we present a pipeline that extracts rPPG signals and investigate the origin of the extracted signals in deepfake videos using correlation analyses. To validate our rPPG extraction pipeline and analyze rPPG signals of deepfakes, we captured a dataset of facial videos synchronized with an electrocardiogram (ECG) as a ground-truth pulse signal. Additionally, we generated high-quality deepfakes and incorporated publicly available datasets into our evaluation.

Results

We demonstrate that our heart rate extraction pipeline produces valid estimates for genuine videos by comparing the estimated results with ECG reference data. Our high-quality deepfakes exhibit valid heart rates, and their rPPG signals show a significant correlation with the corresponding driver video that was used to generate them. Furthermore, we show that this also holds for deepfakes from a publicly available dataset.

Discussion

Previous research assumed that the subtle heart-beat-related signals get lost during the deepfake generation process, making them useful for deepfake detection. However, this paper shows that this assumption is no longer valid for current deepfake methods. Nevertheless, preliminary experiments indicate that analyzing the spatial distribution of blood flow for its plausibility can still help to detect high-quality deepfakes.

Keywords: deepfakes, video forensics, remote photoplethysmography (rPPG), biological signals, remote heart rate estimation, imaging photoplethysmography (IPPG)

Section-at-acceptance: Imaging Applications


      1 Introduction

In recent years, deepfakes have emerged as a prominent and concerning phenomenon. Notably, political figures such as Barack Obama, Donald Trump, and Wladimir Klitschko have become targets, drawing significant public attention. The societal and ethical implications of deepfake technology have become increasingly evident. Initial examples were characterized by visible artifacts, particularly when static images were synthesized into video sequences (DeepFakes, 2019). However, advancements in image generation techniques have significantly improved the realism of these manipulations, making it increasingly difficult to detect alterations through visual inspection alone (Ramesh et al., 2021; Karras et al., 2020).

Modern state-of-the-art deepfake detection approaches rely on features learned by convolutional filters sensitive to inconsistencies in both the spatial and the temporal domain (Wang et al., 2023; Haliassos et al., 2022). Despite achieving outstanding results on benchmark datasets, these techniques suffer from a lack of explainability. This weakness becomes critical when human supervisors of video-identification systems have to assess potential misclassifications by these detectors, since the opaque decision-making processes offer them little guidance.

However, contemporary deepfake generation techniques, while increasingly sophisticated in their ability to visually mimic real individuals, do not explicitly model the physiological signals present in genuine videos. The cardiovascular pulse induces pulsating blood flow in human skin and thereby subtle color variations, which are assumed to be plausible in genuine videos only. Several researchers have exploited this shortcoming by leveraging locally resolved signals, such as those captured by remote photoplethysmography (rPPG), which measures exactly these subtle variations (Yu et al., 2021a). For example, rPPG can extract physiological information, such as the pulse rate, from a recorded video, providing valuable data for deepfake detection (Kossack et al., 2019a). Traditional approaches have primarily focused on extracting a global pulse signal from the entire video sequence (Yu et al., 2021a). Detectors leveraging this global rPPG signal have demonstrated promising results, concluding that deepfakes do not contain such physiologically induced signals. However, contrary findings indicate that deepfakes can indeed exhibit a one-dimensional signal resembling a heart rate (HR), further complicating the detection process (Fernandes et al., 2019). Additionally, recent advancements in synthetic face generation explicitly incorporate pulsation signals (Ciftci and Yin, 2019) or enable the manipulation of physiological signals in facial videos (Chen et al., 2022), thus blurring the distinction between real and fake rPPG signals. It is also important to note that rPPG-based deepfake detectors may inadvertently rely on non-physiological cues, such as background artifacts, noise, or comparisons between image pairs, rather than purely detecting pulse-related color changes in the skin (Çiftçi et al., 2024; Qi et al., 2020; Ciftci et al., 2020b,a). For instance, Ciftci et al. (2020a) demonstrated that filtering rPPG signals with a band-pass filter between 4.68 Hz and 15 Hz (i.e., roughly 280 bpm to 900 bpm) can distinguish real videos from deepfakes more effectively than filtering signals based on human heart rate frequencies. This highlights a critical limitation of current deepfake detection approaches that rely on rPPG signals: they often fail to account for the fact that deepfakes can still produce realistic HR signals.

In this article, we demonstrate that HR signals can indeed be derived from deepfake videos and, more importantly, that these signals closely match those of the original driving video, which defines the head motion and facial expressions. This finding challenges the assumption that deepfakes inherently lack valid physiological signals and emphasizes the need for detection methods that go beyond simple pulse detection. Our contribution provides new insights into the physiological consistency of deepfakes, raising the bar for future detection techniques.

To validate our findings, we propose a pipeline that extracts the pulse rate from videos while incorporating motion compensation and background noise reduction for enhanced robustness. To further substantiate our approach, we collected a dataset consisting of video recordings synchronized with electrocardiogram (ECG) data. Our experiments demonstrate that the HRs extracted from the videos using our pipeline closely align with those from the ECG signal, confirming the accuracy of the rPPG-based extraction process. To explore the origin of the heartbeats detected in the rPPG signals of the deepfake videos, we generated a set of deepfakes based on these original video recordings. In our experiments, we show that the HRs derived from the deepfakes significantly overlap with those of the source (or "driver") videos, highlighting that deepfake HR signals are not random but rather reflect the physiological information present in the driving video. Furthermore, we extend our analysis to older generations of deepfakes by utilizing the publicly available KoDF dataset (Kwon et al., 2021), where we similarly demonstrate the presence of valid HR signals. These results emphasize that even older deepfake methods can carry realistic physiological signals, further complicating traditional detection methods.

The remainder of this paper is organized as follows: In Section 2, we provide an overview of existing work on deepfake generation, deepfake detection, and rPPG. Our proposed method is presented in Section 3. Section 4 outlines the experiments conducted, the datasets used, and the results. Thereafter, we discuss our method's limitations in Section 5 and conclude the paper in Section 6 with a summary of our results and findings.

2 Related work

2.1 Deepfakes

Deepfakes represent a category of manipulated videos and audio files created through deep learning techniques. These manipulations involve altering faces, modifying gestures and facial expressions, and adjusting physical appearances and mouth movements to align with manipulated audio content. The widespread popularity of deepfakes is evident in various applications, with common usage found in AI-based face swapping techniques. Notably, smartphone apps that facilitate seamless face swapping have surged in popularity, demonstrating the accessibility and user-friendliness of these technologies. These apps leverage advanced voice synthesis, facial synthesis, and video generation methods to produce convincing and often deceptive content.

The development of GANs (Goodfellow et al., 2014), VAEs (Kingma and Welling, 2014) and, lately, diffusion models (Ho et al., 2020) enabled various possibilities for the forgery of digital content. The seminal deepfake generation method utilizes a dual-decoder autoencoder, with each decoder dedicated to one of the two target identities to be swapped (DeepFakes, 2019). Subsequently, this foundational method has been enhanced by the integration of adversarial training, the application of more sophisticated convolutional neural networks, or advanced blending techniques (Perov et al., 2020; Beckmann et al., 2023). Numerous methods have been developed for manipulating facial expressions and appearances, with modern approaches capable of synthesizing a face with a given appearance and an expression of choice in a one-shot scenario (Drobyshev et al., 2022; Nirkin et al., 2022; Wang et al., 2021b,a). Recently, several approaches leverage denoising diffusion models for the generation and manipulation of high-quality face images (Ho et al., 2020; Zhao et al., 2023; Ding et al., 2023; Huang et al., 2023). This continuous evolution of deepfake technologies poses challenges for content authentication and necessitates the development of robust detection mechanisms.

Early approaches to detect deepfakes exploit physical inconsistencies in the behaviour and appearance of the head. Li et al. (2018) exploit the fact that early deepfake generation approaches merely use training images with open eyes by utilizing facial landmarks to identify the eye-blinking behaviour in videos. In Yang et al. (2019), the authors take advantage of the fact that the process of cropping, aligning and inserting a face onto another head leads to a misalignment between the attributes of the inner face and the head pose. Other approaches aim to manually generate fake training images by simulating the artifacts introduced by warping or blending operations in genuine images (Li and Lyu, 2018; Li et al., 2020). With the rapid increase in the quality of visual fake content, the research focus shifted from more obvious and explainable artifacts to high-dimensional complex convolutional feature maps. In the foundational FaceForensics++ (FF++) paper (Rössler et al., 2019), the authors propose a benchmark dataset for the evaluation of deepfake detectors and analyze the detection performance of several CNN-based detectors.

While recent and ongoing works on generating better deepfakes focus mostly on making them look more realistic and appealing, the coherence of biological rPPG signals is not considered. This motivated several researchers to work on a promising line of fake detection methods that analyze the coherence of biological rPPG signals in the spatial and temporal domain and thereby increase the explainability of the detection process (Ciftci et al., 2020a; Hernandez-Ortega et al., 2020). FakeCatcher (Ciftci et al., 2020a) extracts rPPG signals from three face regions, which are subject to various signal transformations. Moreover, the extracted signals are consolidated into image-like PPG maps, which represent the temporal and spatial distribution of biological signals across the analyzed facial regions. Those signal maps are then fed to a CNN for classification. DeepFakesON-Phys (Hernandez-Ortega et al., 2020) adapts the heart rate estimation method proposed in Chen and McDuff (2018) and modifies it through the usage of a two-branch convolutional attention network to assess both appearance- and motion-related information for deepfake video detection. In Wu et al. (2023), the authors propose the usage of a temporal transformer in combination with a mask-guided local attention module in order to capture spatial and temporal inconsistencies over long distances in the used PPG maps. Detection methods that specifically pay attention to the heart rate (HR) information extracted from rPPG were proposed in Ciftci et al. (2020b) and Boccignone et al. (2022).

      2.2 rPPG

      The extraction of human vital signs from face videos is a rapidly growing and emerging field with numerous recent publications (Poh et al., 2010; De Haan and Jeanne, 2013; Wang et al., 2017; Tulyakov et al., 2016). The medical measurement of the HR typically relies on the optical measuring technique known as photoplethysmography (PPG) (Zaunseder et al., 2018). This technique capitalizes on human blood circulation, where blood's light absorption exceeds that of surrounding tissue. Consequently, variations in blood volume influence light transmission or reflectance accordingly (Tamura et al., 2014). A PPG sensor, commonly used for measuring the human pulse rate, is placed directly on the skin to optically detect changes in blood volume (Tamura et al., 2014). Remote photoplethysmography employs the same principle, allowing for contactless HR measurements using a standard RGB camera (Zaunseder et al., 2018). In this technique, the continuous change in skin color, resulting from blood flow through the circulatory system, is analyzed by rPPG methods to determine HR (Poh et al., 2010; De Haan and Jeanne, 2013; Wang et al., 2017; Tulyakov et al., 2016).

To robustly extract an rPPG signal, irrespective of the subject's skin tone and of non-white illumination conditions, the plane-orthogonal-to-skin (POS) method (Wang et al., 2017) has been developed, which projects the temporally normalized color signal onto a plane orthogonal to the skin tone to obtain the rPPG signal from the input video sequence.

      Given that global model-based methods may be susceptible to noise, compression artifacts, or masking, recent rPPG-related publications leverage deep neural networks for HR extraction from video data (Chen and McDuff, 2018; Yu et al., 2019, 2020). Yang et al. (2021) conducted a comparative study of three neural networks [Deepphys (Chen and McDuff, 2018), rPPGNet (Yu et al., 2019), and Physnet (Yu et al., 2020)] against model-based approaches [independent component analysis (ICA) (Poh et al., 2010), CHROM (De Haan and Jeanne, 2013), and POS (Wang et al., 2017)] using the publicly available UBFC-rPPG dataset (Bobbia et al., 2019). In these experiments, under constant lighting conditions, deep-learning-based approaches outperformed model-based ones. However, model-based approaches (ICA, CHROM, and POS) exhibited more accurate and robust results in varying lighting conditions (Yang et al., 2021).

The locally analyzed rPPG signal extracted from videos is visualized based on amplitude, velocity, or signal-to-noise ratio (SNR) maps (Kossack et al., 2019b; Yang et al., 2015; Zaunseder et al., 2018). Particularly, blood flow in facial videos has been scrutinized (Yang et al., 2015; Kossack et al., 2019b,a), where blood flow velocity is calculated from the relative phase shift of the frequency component corresponding to HR in the frequency domain. These methods assume that the difference between neighboring phase values directly corresponds to the velocity at that point.

Beyond medical applications (Schraven et al., 2023; Kossack et al., 2023), rPPG analysis has also been employed to detect presentation attacks on authentication systems (Kossack et al., 2022). In multiple studies, rPPG methods are applied to facial videos to discern whether the face is covered by a mask (Li et al., 2017; Kossack et al., 2019a; Yu et al., 2021b). However, deepfake detection poses another challenge, and as described in Section 2.1, discrepancies between images resulting from deepfake generation disrupt the natural color variations in the skin induced by the heartbeat.

      3 Methods and data

      We propose a pipeline for extracting and analyzing physiologically related signals, specifically focusing on those associated with the cardiovascular cycle, which typically occur in the frequency range of 0.7 Hz to 3 Hz. To ensure the accurate detection of these signals, the pipeline requires an input video showing the face of a single person for at least 10 s. The proposed pipeline incorporates motion compensation techniques and accounts for frequencies introduced by external factors, such as compression or camera properties, to ensure a robust extraction of physiologically related signals. The details of these components are discussed in the following two sections.

Following the components of our pipeline, we describe the data used for the experiments. This includes the dataset of videos and ECG data that we captured, the method used to generate deepfakes, and finally the external datasets that were used for evaluation.

      3.1 Reference face and temporal alignment

      We focus on global rPPG signals over time, specifically the averaged color changes across various spatial positions on the facial surface. To ensure accurate signal extraction, it is essential to compensate for any movements made by the person in the video. To achieve this, each frame of the input video is aligned with a reference face by detecting facial landmarks using MediaPipe (Google, 2022). These landmarks form the basis for Delaunay triangulation (de Berg et al., 2008), generating a 2D mesh over the facial region. The reference 2D mesh consists of 918 triangles, and serves as a foundation for tracking and stabilizing the facial movements across the video. While this approach is easy to implement, it does not consider motion blur, and the accuracy of the registration is constrained by the facial landmarks. To further enhance this important process, we will extend our method in the future by using the approach proposed by Seibold et al. (2017) for removing motion blur and Seibold et al. (2024) for a pixel-wise registration.

      In each input frame, we track the detected facial landmarks and use the 2D mesh to warp each triangle to its corresponding reference position. This warping process aligns the facial features from the input video to the reference face, as illustrated in Figure 1. The outcome is a motion-compensated image sequence that serves as the foundation for our subsequent analysis, ensuring that the extracted rPPG signals are not affected by facial movements.
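As an illustration, the following minimal sketch outlines this landmark-based alignment under stated assumptions: MediaPipe FaceMesh provides the per-frame landmarks, a Delaunay triangulation computed on the reference landmarks defines the mesh, and each triangle is warped to its reference position with an affine transform. Function names, parameters, and the output resolution are illustrative and not the exact implementation used in our pipeline.

```python
import cv2
import numpy as np
import mediapipe as mp
from scipy.spatial import Delaunay

def landmarks_px(result, width, height):
    """Convert normalized MediaPipe landmarks of the first detected face to pixel coordinates."""
    lms = result.multi_face_landmarks[0].landmark
    return np.array([[p.x * width, p.y * height] for p in lms], dtype=np.float32)

def align_to_reference(frames, ref_landmarks, out_size=(512, 512)):
    """Warp every frame onto the reference mesh, triangle by triangle.

    frames: list of BGR images; ref_landmarks: float32 array (N, 2) of reference positions.
    """
    tri = Delaunay(ref_landmarks)                      # mesh topology from the reference landmarks
    aligned = []
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=False, max_num_faces=1) as fm:
        for frame in frames:
            res = fm.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not res.multi_face_landmarks:
                continue                               # skip frames without a detected face
            src = landmarks_px(res, frame.shape[1], frame.shape[0])
            out = np.zeros((out_size[1], out_size[0], 3), np.uint8)
            for t in tri.simplices:                    # warp each triangle to its reference position
                M = cv2.getAffineTransform(src[t], ref_landmarks[t].astype(np.float32))
                warped = cv2.warpAffine(frame, M, out_size)
                mask = np.zeros(out.shape[:2], np.uint8)
                cv2.fillConvexPoly(mask, ref_landmarks[t].astype(np.int32), 1)
                out[mask == 1] = warped[mask == 1]
            aligned.append(out)
    return aligned
```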

      Illustration of the temporal alignment process. A reference mesh, composed of 918 triangles formed from MediaPipe facial landmarks, (center) is used to spatially warp each frame from the video sequence (top) to a reference position (bottom).

      3.2 Encoding of heart rate related features

To extract heart rates, we perform a global analysis of the entire video to obtain a single robust reference rPPG signal, which is associated with the subject's pulse signal (Kossack et al., 2021). For rPPG calculation, we apply the plane-orthogonal-to-skin (POS) transformation (Wang et al., 2017) on a 10 s window, i.e., 300 frames at 25 fps. A preliminary analysis of the optimal window length showed a standard deviation of the differences between extracted HR and ground truth of 1.39 bpm for the 10 s window, increasing to 3.38 bpm, 3.76 bpm, and 4.15 bpm for 8 s, 6 s, and 5 s windows, respectively. The entire video is processed by sliding this window over the video duration with a step size of one frame, ensuring continuous analysis and accurate pulse signal extraction.
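As an illustration, the core POS computation on one such analysis window could look as follows. This is a minimal sketch of the projection described by Wang et al. (2017) applied to the mean RGB trace of the motion-compensated skin pixels; the function name and interface are illustrative and not the exact code of our pipeline.

```python
import numpy as np

def pos_rppg(rgb_trace):
    """Plane-orthogonal-to-skin (POS) projection for one analysis window.

    rgb_trace: array of shape (N, 3) with the mean R, G, B value of the
    motion-compensated skin pixels for each of the N frames in the window.
    """
    c = rgb_trace / rgb_trace.mean(axis=0)            # temporal normalization per color channel
    p = np.array([[0.0, 1.0, -1.0],
                  [-2.0, 1.0, 1.0]])                  # projection axes orthogonal to the skin tone
    s = c @ p.T                                       # two projected signals, shape (N, 2)
    h = s[:, 0] + (s[:, 0].std() / s[:, 1].std()) * s[:, 1]   # "alpha tuning" of the two projections
    return h - h.mean()                               # zero-mean pulse signal for this window
```

Sliding this function over the video with a step size of one frame yields one rPPG estimate per window, which are then combined into the reference signal described above.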

After processing the entire video sequence, the output signal is normalized and filtered with a fifth-order Butterworth digital band-pass filter with a frequency range between 0.7 Hz and 3.0 Hz (corresponding to a HR of 42 bpm to 180 bpm). This filtering produces the final rPPG signals. For each time step, we then transform the rPPG signal from the time domain to the frequency domain using a fast Fourier transform (FFT), mapping the signal magnitudes across all time steps for further analysis.
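A sketch of this filtering and frequency-analysis step is given below, assuming a 25 fps signal and standard SciPy/NumPy routines; the helper names and the peak-picking shortcut are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 25.0                      # frame rate of our recordings [Hz]
LOW, HIGH = 0.7, 3.0           # pass band corresponding to 42-180 bpm

def bandpass(signal, fs=FS, low=LOW, high=HIGH, order=5):
    """Fifth-order Butterworth band-pass filter, applied forward and backward."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)

def fft_magnitudes(signal, fs=FS):
    """Magnitude spectrum of one normalized rPPG window."""
    sig = (signal - signal.mean()) / (signal.std() + 1e-8)
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    return freqs, np.abs(np.fft.rfft(sig))

def heart_rate_bpm(signal, fs=FS):
    """Estimate the HR of one window as the strongest frequency inside the pass band."""
    freqs, mags = fft_magnitudes(bandpass(signal, fs), fs)
    band = (freqs >= LOW) & (freqs <= HIGH)
    return 60.0 * freqs[band][np.argmax(mags[band])]
```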

      This processing is applied to the entire face region to capture physiologically relevant signals, as well as to two homogeneous square regions in the background to collect image noise information, see Figure 2. The two background regions are averaged during transformation, resulting in a single FFT map for the background. The two FFT maps - one from the face and one from the background - are then subtracted based on their intensities to generate a background-free FFT map that focuses on the physiologically induced signals.

Heart rate extraction pipeline. From the registered video sequence, we calculate a global rPPG signal of the face as well as of the background. Then, we determine the magnitudes in frequency space for each signal over time. To robustly extract the heart rate, both signals are "subtracted".

      To determine the HR signal over the entire video duration, the highest magnitude in the subtraction FFT map is identified, and based on this peak, the HR for each time instant is extracted through a single optimization step.
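A simplified sketch of this background suppression and peak search is given below. It assumes the face and background FFT maps are stored as 2D arrays of magnitude over (time step, frequency bin) and picks the per-time-step maximum instead of the single optimization step described above; all names are illustrative.

```python
import numpy as np

def background_free_map(face_map, bg_maps):
    """Subtract the averaged background FFT map from the face FFT map.

    face_map: array (T, F) of magnitudes for the face region.
    bg_maps:  list of arrays (T, F), one per background patch.
    """
    bg = np.mean(bg_maps, axis=0)              # average the two background regions
    return np.clip(face_map - bg, 0.0, None)   # keep only face-specific spectral energy

def hr_track_bpm(subtraction_map, freqs):
    """Pick the dominant frequency per time step from the subtraction map."""
    return 60.0 * freqs[np.argmax(subtraction_map, axis=1)]
```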

      3.3 Captured dataset

Given that many of the most popular datasets for deepfake analysis are several years old and deepfake generation techniques have advanced significantly, we created a fully controlled, high-quality dataset to ensure optimal compression and realism. To validate the functionality of our method, we collected recordings of twelve individuals, representing diverse genders, ages and ethnic backgrounds, in a controlled studio environment. The recordings were captured with participants positioned in front of a white background, under uniform lighting provided by white LED illumination. For each participant, 10 to 20 frontal-view recordings were taken, with the head centered throughout the video. During each recording, participants were asked to perform a range of activities, including talking, reading, and interacting with the recording supervisor. All participants provided written consent for the use of their recordings in this experiment and its subsequent publication.

We used an industrial RGB camera1 to capture the video recordings. The recordings vary in length, ranging from 10 s up to several minutes, with a frame rate of 25 fps and a resolution of 2448 × 2048 pixels. In addition to the RGB video, we measured the ECG and PPG of selected subjects. These physiological signals were used to calculate the heart rate (HR) as ground truth for our analysis. Selected frames from our dataset are shown in Figure 3A.
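A common way to derive such a ground-truth HR from the ECG is simple R-peak detection, sketched below. The thresholds and the assumption of a reasonably clean ECG sampled at fs Hz are illustrative and do not necessarily reflect the exact procedure used for our reference data.

```python
import numpy as np
from scipy.signal import find_peaks

def ecg_heart_rate_bpm(ecg, fs):
    """Derive a reference HR from an ECG trace via simple R-peak detection."""
    ecg = (ecg - np.mean(ecg)) / (np.std(ecg) + 1e-8)
    # Assume R peaks are at least 0.33 s apart (<= 180 bpm) and clearly above baseline.
    peaks, _ = find_peaks(ecg, distance=int(0.33 * fs), height=1.5)
    rr_intervals = np.diff(peaks) / fs     # inter-beat intervals in seconds
    return 60.0 / np.mean(rr_intervals)
```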

      A subset of our recorded data representing six participants (A) and a correlating subset of generated deepfakes (B).

      3.4 Creation of high-quality deepfakes

      Publicly available datasets have not kept pace with the rapid development in deepfake technology as new techniques and architectures continuously emerge, leading to increasingly realistic and higher-quality deepfakes. This progress likely impacts previous assumptions about deepfakes, particularly the notion that they do not contain HR-related signals. As deepfake generation methods improve, it becomes necessary to reassess these conclusions in light of more sophisticated and physiologically accurate manipulations.

To generate our own set of high-quality deepfakes, we employed a dual-decoder autoencoder architecture along with an advanced blending procedure, as described in Beckmann et al. (2023). Unlike a standard autoencoder with a single decoder to reconstruct the input image, this model utilizes two decoders. Each decoder is trained to reconstruct the input image, but with the identity of one specific person, namely the source or the target person. During training, the autoencoder is fed with pairs of images of the source and target person. Once trained, the model can be used to swap faces in images and, accordingly, in videos. The advanced blending procedure enhances the quality of the deepfakes by modifying the mask used for blending. Specifically, the mask is adjusted to create a greater distance between the edges of the face and the boundaries of the mask by "squeezing" it by approximately 15 pixels on each side. This adjustment excludes non-facial regions from the blending process, thereby reducing blending artifacts at the boundaries and improving the overall realism of the generated deepfakes.
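Such a mask "squeezing" can be realized, for example, with a morphological erosion of the blending mask. The sketch below assumes a binary face mask and a shrink of approximately 15 pixels per side; the kernel shape and the optional feathering are illustrative choices and not necessarily those of Beckmann et al. (2023).

```python
import cv2
import numpy as np

def squeeze_blending_mask(mask, shrink_px=15):
    """Shrink a binary blending mask so its border stays clear of the face edges."""
    kernel = np.ones((2 * shrink_px + 1, 2 * shrink_px + 1), np.uint8)
    eroded = cv2.erode(mask.astype(np.uint8), kernel, iterations=1)
    # Optionally feather the border so the subsequent blending stays smooth.
    return cv2.GaussianBlur(eroded.astype(np.float32), (31, 31), 0)
```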

      Following data collection, we created various identity pairs and trained a separate deepfake autoencoder for each pair. Using these autoencoders, we performed face swaps between all videos for each identity pair, generating a total of 858 identity-specific deepfake videos and 156 unaltered counterparts. Figure 3B shows examples of our deepfakes. For more details on the used method see Beckmann et al. (2023).

In addition to these deepfakes, we generated further ones using the open-source tool DeepFaceLive (DFL) (Petrov, 2023). This tool was developed for real-time face swapping. It requires a driver video and swaps the face in the video with that of a target face model, while maintaining the driver's expression and head pose. A set of target face models is provided by the tool. We used four of these provided face models to generate 32 deepfake videos. These videos are used in our experiments to show that the rPPG signal of a deepfake is similar to that of its driver video and also to our fakes generated using the same driver video.

      3.5 External data

      In addition to our own dataset, which includes videos with ECG data and corresponding deepfakes, we also utilized publicly available datasets to enhance the scope of our analysis. First, we used the deepfakes generated in Beckmann et al. (2023) based on the “actors” subset of the deepfake detection dataset (Dufour et al., 2019), its corresponding originals as well as the fakes from that dataset based on the same originals.

Recognizing that many existing deepfake datasets may have limitations in terms of size and diversity, we selected the KoDF dataset (Kwon et al., 2021), which is designed to generalize more effectively to real-world deepfakes compared to other public datasets like FF++ (Rössler et al., 2019) or Celeb-DF (Dang-Nguyen et al., 2020). KoDF contains 403 Korean subjects and tens of thousands of real and fake videos. In addition, KoDF includes six synthesis models for deepfake creation, which brings a large diversity of fakes to the set; in our study, we utilized four of these six methods due to the quality of their fakes.

Finally, we selected 45 videos from the KoDF dataset and generated an additional 45 deepfake videos using the Picsi.Ai platform2, leveraging its available synthesis methods.

4 Results

4.1 Signal analysis

      In the initial phase of our analysis, we focused on our own dataset, where we successfully extracted meaningful heart rate (HR) signals from both genuine and deepfake videos. In all cases, the detected HR corresponded to the face of the subject in the video, regardless of whether the video was real or a deepfake.

The average signal-to-noise ratio (SNR) of the extracted HR was significantly higher in the original videos than in the deepfake videos, with values of -1.97 dB for genuine videos and -3.35 dB for deepfakes. This difference in SNR highlights the lower quality of rPPG signals in deepfakes, likely due to artifacts introduced during the generation process. As all participants were seated during the recordings and made only slight movements, it is reasonable to assume that the resting heart rate (normally between 60 bpm to 90 bpm) was detected for all participants. A higher HR was only measured for two participants, but this was consistent across all recordings and verified by the ECG measurements, suggesting the reliability of our extraction process. The results of our analysis on four videos, two fake and two genuine, are depicted as examples in Figure 4.
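The SNR values above quantify how strongly the HR peak stands out in the spectrum. A common definition, following De Haan and Jeanne (2013), relates the spectral energy around the HR frequency and its first harmonic to the energy of the remaining pass band; the sketch below is an assumption about the exact formulation and is given for illustration only.

```python
import numpy as np

def rppg_snr_db(freqs, mags, hr_hz, width=0.1):
    """SNR of an rPPG spectrum: energy near the HR and its first harmonic vs. the rest."""
    band = (freqs >= 0.7) & (freqs <= 3.0)                 # physiological pass band
    signal_bins = ((np.abs(freqs - hr_hz) <= width) |
                   (np.abs(freqs - 2.0 * hr_hz) <= width)) & band
    noise_bins = band & ~signal_bins
    power = mags ** 2
    return 10.0 * np.log10(power[signal_bins].sum() / power[noise_bins].sum())
```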

      This illustration presents two pairs of genuine and fake videos. On the left of each example, frames from each video sequence are displayed. On the right, the extracted reference rPPG signal is plotted for each paired fake and original video. Additionally, the measured heart rate of the person recorded is displayed.

For the videos with a captured PPG reference signal and deepfakes based on these videos, we further analysed the rPPG in the time domain. We calculated the Pearson correlation coefficients between the PPG and rPPG signals for the genuine videos. Since no ground-truth PPG signal can be measured for the deepfakes, the signal from the underlying driver video is used instead. In addition to the correlation, we calculated the mean squared error (MSE) for the rPPG signals, using the PPG signals as ground truth. Before calculating the MSE, the mean and variance of both signals were normalized to zero and one, respectively. The results are shown in Figure 5.
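Both measures can be computed directly from the temporally aligned signal pairs; a minimal sketch using SciPy is shown below (the signals are assumed to be resampled to a common rate beforehand).

```python
import numpy as np
from scipy.stats import pearsonr

def zscore(x):
    """Normalize a signal to zero mean and unit variance."""
    return (x - np.mean(x)) / (np.std(x) + 1e-8)

def compare_signals(rppg, ppg_reference):
    """Pearson correlation and MSE between an rPPG signal and its PPG reference."""
    r, _ = pearsonr(rppg, ppg_reference)
    mse = np.mean((zscore(rppg) - zscore(ppg_reference)) ** 2)
    return r, mse
```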

Correlation and deviation of the rPPG to the PPG signal as well as the absolute difference of the heart rate (HR) between detected and ground truth. The rPPG signal of the deepfake videos generated with DeepFaceLive (DFL) shows a similarly strong correlation to the measured PPG signal as the rPPG signal of the genuine videos. The correlation for the rPPG signal of the deepfake videos generated with the method of Beckmann et al. (2023) is slightly weaker but still moderate. The MSE is in a similar range for all types of videos.

For all types of videos, there is a moderate to strong correlation in most samples. The correlation between the PPG and rPPG signals for the genuine videos and the DeepFaceLive (DFL) fakes shows a similar distribution, while the correlation for deepfakes generated with the method of Beckmann et al. (2023) is slightly lower. These high correlations between the rPPG signals of the deepfakes and the ground-truth PPG signal of the driver videos show that these fakes replicate the rPPG signal of the driver videos. This point is further supported by the HR derived from the rPPG signal. The absolute difference to the ground truth across all videos and time periods is on average 1.80 bpm, 1.85 bpm, and 3.24 bpm for the genuine videos, the Beckmann et al. (2023) fakes, and the DFL fakes, respectively. It should be noted that rPPG signal extraction from videos includes, alongside the PPG-related signal, additional components induced by body motion and other noise sources, and thus cannot perfectly reflect a true PPG signal.

      In addition to comparisons with ground truth PPG signals, we calculated the Pearson correlation coefficients between these deepfakes and their underlying driver videos. The results are shown in Figure 6. While the correlation for videos generated with DFL is strong in most cases, the correlation for those generated with the method of Beckmann et al. (2023) is moderate for most videos. This provides further evidence that deepfakes mimic the rPPG signal of the driver video.

Correlation and deviation of the rPPG signals of deepfakes to their underlying driver video's rPPG signal. The rPPG signals of the deepfakes generated with DeepFaceLive (DFL) show a strong correlation to the rPPG signal of the underlying driver videos in most cases, while those of the deepfakes generated with the method of Beckmann et al. (2023) are weaker but, on average, still moderate. The DFL deepfakes also outperform those of Beckmann et al. (2023) in terms of MSE.

Building on these results, we further analyzed the generated deepfakes to investigate the origin of their rPPG signals. In the majority of cases, the rPPG signals in the deepfakes closely mirrored those of the original source videos, with only minor variations observed. When comparing the HRs measured in the genuine videos with those from their deepfake counterparts, we found that the global HR in the deepfake videos was remarkably similar to the HR of the original source recordings, as well as to the measured ECG ground truth, see examples in Figure 7. For all fakes in our dataset, we found a high correlation to the HR of the original driving video, see Figure 8. The average correlation between the HRs of the fakes created using the method of Beckmann et al. (2023) and those of their driver videos is r̄ = 0.57 (median r = 0.55), and for the fakes generated with DFL it is r̄ = 0.82 (median r = 0.89). For the other fakes (KoDF dataset, both methods on the actor subset), the correlation is above r = 0.4. However, for the publicly available FF++ fakes, the deviation is remarkably high (min r = −0.23 to max r = 0.91). These findings confirm that the heart rate signals in high-quality deepfakes are often inherited from the source video, further complicating the task of distinguishing between real and fake content based solely on global HR analysis.

Heart rates of different videos. In each plot, the extracted HR of the original video (red), the recorded ECG signal (yellow), and a created high-quality deepfake (blue) using the original video as source video are shown. (A) capture ID 04 0100. (B) capture ID 04 0101. (C) capture ID 011 1100.

Correlation and MSE of the heart rates (HR) of the deepfakes to their underlying driver video's HR. The HRs of the deepfakes generated with DeepFaceLive show a strong correlation, while those of the deepfakes generated with the method of Beckmann et al. (2023) are moderately correlated.

The FFT maps visualize that the rPPG signal, which can be traced back to physiological properties, clearly originates from the source video. Figures 9, 10 show two examples with a set of six FFT maps each, namely the background, face, and subtraction FFT maps of an original video and of a deepfake for which the original served as source. The extracted HRs for both examples can be found in Figure 7.

FFT maps of capturing ID_04_0100. A similar HR of about 59 bpm can be detected in both cases, the original source (A–C) and the deepfake (D–F) video. The correlations between the original and deepfake FFT maps show a strong relationship for all three map pairs: 0.96 for the background, 0.77 for the face, and 0.78 for the subtraction map.

FFT maps of capturing ID_11_1100. A similar HR of about 71 bpm can be detected in both cases, the original source (A–C) and the deepfake (D–F) video. The correlations between the original and deepfake FFT maps show a moderate to strong relationship for all three map pairs: 0.50 for the background (moderate), 0.91 for the face (strong), and 0.89 for the subtraction map (strong).

      Example ID_04_0100 demonstrates the influence of our proposed background analysis on signal detection. In this instance, a strong noise signal around 150 bpm is detectable in the background. Due to the nature of deepfake generation, this noise signal is also present in all fakes where that capture served as source, resulting in a high correlation between the FFT maps of original and deepfakes (with a correlation of 0.96). In the original face video, the physiological signal (at about 59 bpm) is twice as strong as the background noise signal (at 150 bpm), making it easy to extract. However, in the deepfake face, the HR and noise signals are of comparable magnitudes, complicating clear pulse extraction. This issue is resolved by incorporating background analysis, as shown in Figure 9.

The correlation between the original and deepfake FFT maps increases slightly, from 0.7667 for the face FFT maps to 0.7826 for the subtraction maps, further emphasizing that the rPPG signal in the deepfake originates from the source video. This strong relationship between the original and deepfake signals extends to cases where the background signals in the deepfakes differ more significantly from the originals, reinforcing the notion that deepfakes inherit their rPPG signals from the driver video (cf. Figure 10).

Both examples clearly demonstrate that, in the analysis of the face region in deepfakes, the background signal (induced by noise, compression, etc.) plays a significantly stronger role, as the transferred HR signal is reproduced with less intensity compared to the original video. This is also reflected in the corresponding SNRs (see above). Due to the weaker transmission and artificial replication of the pulse signal, a strong correlation between the original and deepfake signal is not always observed, see Figure 11 as an example of a moderate relationship between the original source and deepfake subtraction maps with a correlation of 0.53. However, upon closer examination, a trace of the original video's HR signal can still be detected in the faked face. This 'signal trace' underscores that, despite noise and degradation, elements of the physiological signal from the source video remain present in the deepfake.

FFT subtraction maps of (A) capturing ID_004_0110 and (B) a related deepfake. In the original FFT map, a HR can clearly be identified at around 65 bpm. The FFT subtraction map of the deepfake is noisier, but the HR of the underlying original is detectable as well. The correlation between both maps is moderate at 0.53. (A) Original. (B) Deepfake.

      4.2 Analysis on external data

Given the limited size of our dataset, we extended our HR analysis to the KoDF and FF++ datasets. Despite varying compression rates and relatively high image noise, we were able to consistently extract HR signals from all genuine videos. Although some deepfake videos presented challenges due to noise and compression artifacts, we were still able to extract signals in most cases that could be associated with a HR (cf. Figure 8). However, as the datasets do not include the participants' actual HR data, we were unable to validate these extracted HR signals against ground-truth measurements.

A closer look at the quality parameters (Table 1) shows an extremely low signal-to-noise ratio (SNR) of the extracted HRs for the external datasets, especially for FF++, while the deviation of the HR over time is high, although all videos involve individuals who are at rest and should therefore have a stable pulse. This indicates that for a certain number of videos, the detected HR is not plausible, i.e., not related to the real HR, although we selected the videos with the best signal quality and analyzed the corresponding deepfakes. It is important to note that, for the KoDF dataset, it is not possible to identify the exact driver video used for each fake. Therefore, we only examined whether a physiologically meaningful HR can be identified in both the original and the fake videos. Here, results similar to those on our dataset could be achieved (cf. Figure 12). For the deepfakes, a signal which can be related to a HR is detectable in most cases. Further examples of FFT maps of deepfakes from the KoDF dataset are shown in the Appendix.

      Signal-to-noise ratio (SNR) and standard deviation (STD) of the extracted HR signal for all included data.

Method              | Beckmann et al. (2023) | DFL   | KoDF  | Actor (our fakes) | Actor (FF++ fakes)
SNR originals [dB]  | 5.595                  | 8.947 | 4.470 | 2.951             | 1.852
SNR fakes [dB]      | 3.566                  | 7.838 | 3.027 | 2.730             | 2.345
STD originals [bpm] | 5.033                  | 2.779 | 7.892 | 6.929             | 8.291
STD fakes [bpm]     | 7.300                  | 2.245 | 8.609 | 8.931             | 8.833

      FFT maps of a deepfake taken from the KoDF dataset. (A) background, (B) face, (C) subtraction. The extracted HR is about 69 bpm.

      5 Discussion

      As discussed in Section 3.5, numerous datasets have been developed to support deepfake research, such as the DeepFake Detection Challenge Dataset (Dolhansky et al., 2020), FF++ (Rössler et al., 2019), and Celeb-DF (Dang-Nguyen et al., 2020). These datasets have significantly advanced deepfake detection techniques. However, few authors have explored deepfake detection through the analysis of physiologically related signals, such as rPPG. Despite their importance, public datasets present several challenges when used for analyzing rPPG signals in the context of deepfake detection as rPPG is sensitive to video quality.

      Many deepfake datasets suffer from compression artifacts, low resolution, inconsistent frame rates, high background noise, and challenging illumination settings (D'Amelio et al., 2023; Kwon et al., 2021). These factors can substantially degrade the quality of rPPG signals, making it difficult to reliably extract physiological features (Wang et al., 2024; Zaunseder et al., 2018; McDuff et al., 2017). Consequently, the utility of rPPG analysis in deepfake detection has been limited, particularly in datasets where video quality is compromised.

Previous studies (Çiftçi et al., 2024; Qi et al., 2020; Ciftci et al., 2020b,a; Hernandez-Ortega et al., 2020) concluded that deepfakes do not exhibit a detectable heartbeat (Boccignone et al., 2022), suggesting that this could be used as a reliable marker for deepfake detection. However, much of this research was conducted on datasets of low image quality. In contrast, our study reveals that for recent and high-quality deepfakes, such as those generated using the method described in Beckmann et al. (2023) or DeepFaceLive, or those present in the KoDF dataset, it is possible to robustly detect a HR signal that originates from the source (driver) video. Our experiments demonstrated that deepfakes can exhibit realistic heart rates, contradicting previous findings. Specifically, in all fake videos from our dataset and most videos from the KoDF dataset, valid HR signals were successfully extracted. This indicates that solely relying on the analysis of global HR signals is no longer sufficient to detect deepfakes.

      Another significant challenge in existing deepfake datasets is the lack of reference measurements, such as concurrent ECG or PPG sensor readings, which are crucial for validating the accuracy of extracted rPPG signals. Without these ground truth data, it becomes difficult to assess the reliability of physiological signal extraction and, consequently, the conclusions drawn from them.

To improve the utility of physiological signals for deepfake detection, we propose shifting from global HR analysis to locally resolved signals within the face. Recent advances in video-based vital sign analysis have moved toward capturing local pulse signals from specific facial regions (Kossack et al., 2019b, 2021), which better reflect the anatomical blood flow patterns of the human face. By leveraging these localized physiological patterns, we aim to enhance both the robustness and interpretability of deepfake detection. Building on this idea, we performed initial experiments in which we extracted rPPG-related feature maps from a subset of our dataset following the approach described in Schraven et al. (2023) and trained an EfficientNet-B4 model as a convolutional deepfake detector (Tan and Le, 2019). Our preliminary results (AUROC of 87.4%) indicate that these local maps can be used for deepfake detection. Using rPPG-based features improves interpretability by providing more understandable features, but the detector itself lacks transparency by design. To overcome this issue, we will in the future adapt the concept proposed by Seibold et al. (2021), which leads to detectors that accurately determine which parts of the input contribute to the prediction that an input is a forgery.
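For illustration, such a detector can be set up by repurposing an EfficientNet-B4 as a binary real/fake classifier on the rPPG feature maps. The sketch below uses torchvision and treats the feature maps as three-channel images; the weight initialization and training details are illustrative and not the exact setup of our preliminary experiments.

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b4

def build_rppg_deepfake_detector(num_classes=2):
    """EfficientNet-B4 backbone with a binary real/fake head for rPPG feature maps."""
    model = efficientnet_b4(weights="IMAGENET1K_V1")
    in_features = model.classifier[1].in_features      # 1792 for EfficientNet-B4
    model.classifier[1] = nn.Linear(in_features, num_classes)
    return model

def train_step(model, optimizer, feature_maps, labels):
    """One training step on a batch of rPPG feature maps (B, 3, H, W) with labels (B,)."""
    model.train()
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(feature_maps), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```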

      6 Conclusion

In conclusion, our study demonstrates that high-quality deepfakes exhibit rPPG signals that correspond to the HR of the source (driver) video. By comparing the different PPG signals and analyzing the FFT maps as well as the extracted HRs and their correlations, we confirmed that the globally derived rPPG signal originates from the driving video, rather than being artificially generated. This finding challenges previous assumptions that deepfakes inherently lack valid physiological signals, revealing the limitations of using simple HR analysis for detecting high-quality deepfakes.

      One of the key contributions of our study is the demonstration that HR signals in deepfakes can closely match those of the source video, making traditional global HR-based detection methods insufficient for distinguishing between real and fake content. By performing our analysis not only on our own dataset but also on fakes created with DeepFaceLive and from the KoDF dataset, we confirmed the generalization of our findings, showing that even older deepfake datasets contain valid HR signals.

      To address this limitation, we propose leveraging local blood flow information for deepfake detection. Preliminary experiments indicate that this localized analysis holds significant promise for improving detection accuracy. As part of ongoing work, we are further refining this approach, which also offers the added benefit of enhanced explainability. Visualizing local blood flow patterns could provide clearer insight into the decision-making process of detection algorithms. Another important factor in ensuring robust detection is the availability of good and diverse training data. An attacker may attempt to mimic blood flow patterns to evade detection; therefore, we plan to enhance our deepfake dataset using style-transfer with a temporal component by extending the work on improved image forgeries of Seibold et al. (2019).

      In summary, our contributions include: (1) providing evidence that deepfakes can exhibit realistic heart rate signals, (2) highlighting the insufficiency of global HR analysis for detecting high-quality deepfakes, and (3) proposing the use of localized rPPG signals to enhance both the robustness and explainability of deepfake detection. Our approach could serve as a valuable complement to existing techniques, with the potential to improve the security and integrity of multimedia content across platforms.

      Data availability statement

The datasets presented in this article are not readily available because commercial use of the data is not permitted. Requests to access the datasets should be directed to Peter Eisert, peter.eisert@hhi.fraunhofer.de.

      Ethics statement

      Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

      Author contributions

      CS: Conceptualization, Formal analysis, Validation, Investigation, Writing – original draft. EW: Conceptualization, Formal analysis, Validation, Investigation, Writing – original draft. AB: Methodology, Investigation. BK: Investigation, Writing – original draft. AH: Supervision, Writing – review & editing. PE: Funding acquisition, Supervision, Writing – review & editing.

      Funding

      The author(s) declare that financial support was received for the research and/or publication of this article. This work was funded by the German Federal Ministry of Education and Research (BMBF) under Grant No. 13N15735 (FakeID) and by Horizon Europe under Grant No. 101121280 (Einstein).

      Conflict of interest

      The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

      Generative AI statement

      The author(s) declare that no Gen AI was used in the creation of this manuscript.

      Publisher's note

      All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

      1ace acA2440-75uc, Basler AG, Germany.

      2https://www.picsi.ai

References

Beckmann A. Hilsmann A. Eisert P. (2023). "Fooling state-of-the-art deepfake detection with high-quality deepfakes," in Proceedings of the 2023 ACM Workshop on Information Hiding and Multimedia Security, IH&MMSec '23 (New York, NY: Association for Computing Machinery), 175-180.
Bobbia S. Macwan R. Benezeth Y. Mansouri A. Dubois J. (2019). Unsupervised skin tissue segmentation for remote photoplethysmography. Pattern Recognit. Lett. 124, 82-90. 10.1016/j.patrec.2017.10.017
Boccignone G. Bursic S. Cuculo V. D'Amelio A. Grossi G. Lanzarotti R. . (2022). "Deepfakes have no heart: A simple rPPG-based method to reveal fake videos," in Image Analysis and Processing - ICIAP 2022: 21st International Conference (Berlin, Heidelberg: Springer-Verlag), 186-195.
Chen M. Liao X. Wu M. (2022). PulseEdit: Editing physiological signals in facial videos for privacy protection. IEEE Trans. Inform. Forens. Secur. 17, 457-471. 10.1109/TIFS.2022.3142993
Chen W. McDuff D. (2018). "DeepPhys: Video-based physiological measurement using convolutional attention networks," in Proceedings of the European Conference on Computer Vision (ECCV) (The Computer Vision Foundation (CVF)).
Çiftçi U. A. Demir İ. Yin L. (2024). Deepfake source detection in a heart beat. Vis. Comput. 40, 2733-2750. 10.1007/s00371-023-02981-0
Ciftci U. A. Demir I. Yin L. (2020a). "FakeCatcher: Detection of synthetic portrait videos using biological signals," in IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE).
Ciftci U. A. Demir I. Yin L. (2020b). "How do the hearts of deep fakes beat? Deep fake source detection via interpreting residuals with biological signals," in 2020 IEEE International Joint Conference on Biometrics (IJCB) (Houston, TX: IEEE), 1-10.
Ciftci U. A. Yin L. (2019). "Heart rate based face synthesis for pulse estimation," in Advances in Visual Computing: 14th International Symposium on Visual Computing, ISVC 2019 (Lake Tahoe: Springer), 540-551.
D'Amelio A. Lanzarotti R. Patania S. Grossi G. Cuculo V. Valota A. . (2023). "On using rPPG signals for deepfake detection: a cautionary note," in International Conference on Image Analysis and Processing (Cham: Springer), 235-246.
Dang-Nguyen D.-T. Dang-Nguyen D.-S. Piras L. Giacinto G. Boato G. (2020). CelebDF: A Large-Scale Challenging Dataset for Deepfake Forensics. Available online at: https://github.com/yuezunli/celeb-deepfakeforensics (accessed March 7, 2025).
de Berg M. Cheong O. van Kreveld M. Overmars M. (2008). Computational Geometry: Algorithms and Applications. Cham: Springer Science & Business Media.
De Haan G. Jeanne V. (2013). Robust pulse rate from chrominance-based rPPG. IEEE Trans. Biomed. Eng. 60, 2878-2886. 10.1109/TBME.2013.2266196
DeepFakes (2019). Faceswap.
Ding Z. Zhang C. Xia Z. Jebe L. Tu Z. Zhang X. (2023). "DiffusionRig: Learning personalized priors for facial appearance editing," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE).
Dolhansky B. Bitton J. Pflaum B. Lu J. Howes R. Wang M. . (2020). The deepfake detection challenge (DFDC) dataset. arXiv [preprint] arXiv:2006.07397. 10.48550/arXiv.2006.07397
Drobyshev N. Chelishev J. Khakhulin T. Ivakhnenko A. Lempitsky V. Zakharov E. (2022). "MegaPortraits: One-shot megapixel neural head avatars," in Proc. of the 30th ACM International Conference on Multimedia (New York, NY: Association for Computing Machinery).
Dufour N. Gully A. Karlsson P. Vorbyov A. Leung T. Childs J. . (2019). Deepfakes Detection Dataset. New York: Google and Jigsaw.
Fernandes S. Raj S. Ortiz E. Vintila I. Salter M. Urosevic G. . (2019). "Predicting heart rate variations of deepfake videos using neural ODE," in Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (Seoul: IEEE).
Goodfellow I. Pouget-Abadie J. Mirza M. Xu B. Warde-Farley D. Ozair S. . (2014). "Generative adversarial nets," in Advances in Neural Information Processing Systems, eds. Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Weinberger (New York: Curran Associates, Inc).
Google (2022). MediaPipe: A Framework for Building Multimodal Applied Machine Learning Pipelines. Available online at: https://mediapipe.dev/ (accessed March 14, 2024).
Haliassos A. Mira R. Petridis S. Pantic M. (2022). "Leveraging real talking faces via self-supervision for robust forgery detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (New Orleans, LA: IEEE), 14950-14962.
Hernandez-Ortega J. Tolosana R. Fierrez J. Morales A. (2020). DeepFakesON-Phys: Deepfakes detection based on heart rate estimation. arXiv [preprint] arXiv:2010.00400. 10.48550/arXiv.2010.00400
Ho J. Jain A. Abbeel P. (2020). Denoising diffusion probabilistic models. arXiv [preprint] arXiv:2006.11239. 10.48550/arXiv.2006.11239
Huang Z. Chan K. C. Jiang Y. Liu Z. (2023). "Collaborative diffusion for multi-modal face generation and editing," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (Vancouver, BC: IEEE).
Karras T. Laine S. Aittala M. Hellsten J. Lehtinen J. Aila T. (2020). "Analyzing and improving the image quality of StyleGAN," in Proceedings of CVPR (IEEE/CVF).
Kingma D. P. Welling M. (2014). "Auto-encoding variational bayes," in 2nd International Conference on Learning Representations (Banff, AB: Conference Track Proceedings).
Kossack B. Wisotzky E. Eisert P. Schraven S. P. Globke B. Hilsmann A. (2022). "Perfusion assessment via local remote photoplethysmography (rPPG)," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (New Orleans, LA: IEEE), 2192-2201.
Kossack B. Wisotzky E. L. Hilsmann A. Eisert P. (2019a). "Local remote photoplethysmography signal analysis for application in presentation attack detection," in Vision, Modeling and Visualization - VMV (London: The Eurographics Association), 135-142.
Kossack B. Wisotzky E. L. Hilsmann A. Eisert P. (2021). "Automatic region-based heart rate measurement using remote photoplethysmography," in 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) (Montreal, BC: IEEE), 2755-2759.
Kossack B. Wisotzky E. L. Hilsmann A. Eisert P. Hänsch R. (2019b). Local blood flow analysis and visualization from RGB-video sequences. Curr. Direct. Biomed. Eng. 5:1. 10.1515/cdbme-2019-0094
Kossack B. Wisotzky E. L. Schraven S. Skopnik L. Hilsmann A. Eisert P. (2023). Modified Allen test assessment via imaging photoplethysmography. Curr. Direct. Biomed. Eng. 9, 571-574. 10.1515/cdbme-2023-1143
Kwon P. You J. Nam G. Park S. Chae G. (2021). "KoDF: A large-scale Korean deepfake detection dataset," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (Montreal, QC: IEEE), 10744-10753.
Li L. Bao J. Zhang T. Yang H. Chen D. Wen F. . (2020). "Face X-ray for more general face forgery detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (Seattle, WA: IEEE).
Li X. Komulainen J. Zhao G. Yuen P. C. Pietikainen M. (2017). "Generalized face anti-spoofing by detecting pulse from face videos," in Proceedings - International Conference on Pattern Recognition (Cancun: IEEE), 4244-4249.
Li Y. Chang M.-C. Lyu S. (2018). "In Ictu Oculi: Exposing AI generated fake face videos by detecting eye blinking," in 2018 IEEE International Workshop on Information Forensics and Security (WIFS) (Hong Kong: IEEE).
Li Y. Lyu S. (2018). Exposing deepfake videos by detecting face warping artifacts. arXiv [preprint] arXiv:1811.00656. 10.48550/arXiv.1811.00656
McDuff D. J. Blackford E. B. Estepp J. R. (2017). "The impact of video compression on remote cardiac pulse measurement using imaging photoplethysmography," in 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017) (Washington, DC: IEEE), 63-70.
Nirkin Y. Keller Y. Hassner T. (2022). FSGANv2: Improved Subject Agnostic Face Swapping and Reenactment (IEEE).
Perov I. Gao D. Chervoniy N. Liu K. Marangonda S. Umé C. . (2020). DeepFaceLab: A simple, flexible and extensible face swapping framework. arXiv [preprint] arXiv:2005.05535. 10.48550/arXiv.2005.05535
Petrov I. (2023). DeepFaceLive: Real-Time Face Swap for PC Streaming or Video Calls. Available online at: https://github.com/iperov/DeepFaceLive (accessed October 25, 2024).
Poh M.-Z. McDuff D. J. Picard R. W. (2010). Non-contact, automated cardiac pulse measurements using video imaging and blind source separation. Opt. Express 18:10762. 10.1364/OE.18.010762
Qi H. Guo Q. Juefei-Xu F. Xie X. Ma L. Feng W. . (2020). "DeepRhythm: exposing deepfakes with attentional visual heartbeat rhythms," in Proceedings of the 28th ACM International Conference on Multimedia (New York: ACM), 4318-4327.
Ramesh A. Pavlov M. Goh G. Gray S. Voss C. Radford A. . (2021). "Zero-shot text-to-image generation," in Proceedings of the 38th International Conference on Machine Learning, eds. M. Meila, and T. Zhang (New York: PMLR), 8821-8831.
Rössler A. Cozzolino D. Verdoliva L. Riess C. Thies J. Nießner M. (2019). "FaceForensics++: Learning to detect manipulated facial images," in International Conference on Computer Vision (ICCV).
Schraven S. P. Kossack B. Strüder D. Jung M. Skopnik L. Gross J. . (2023). Continuous intraoperative perfusion monitoring of free microvascular anastomosed fasciocutaneous flaps using remote photoplethysmography. Sci. Rep. 13:1532. 10.1038/s41598-023-28277-w
Seibold C. Hilsmann A. Eisert P. (2017). Model-based motion blur estimation for the improvement of motion tracking. Comp. Vision Image Understand. 160. 10.1016/j.cviu.2017.03.005
Seibold C. Hilsmann A. Eisert P. (2019). "Style your face morph and improve your face morphing attack detector," in 2019 International Conference of the Biometrics Special Interest Group (BIOSIG) (IEEE), 1-6.
Seibold C. Hilsmann A. Eisert P. (2021). Feature focus: towards explainable and transparent deep face morphing attack detectors. Computers 10:117. 10.3390/computers10090117
Seibold C. Hilsmann A. Eisert P. (2024). "Towards better morphed face images without ghosting artifacts," in Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Rome: SCITEPRESS). 10.5220/0012302800003660
Tamura T. Maeda Y. Sekine M. Yoshida M. (2014). Wearable photoplethysmographic sensors - past and present. Electronics 3, 282-302. 10.3390/electronics3020282
Tan M. Le Q. (2019). "EfficientNet: Rethinking model scaling for convolutional neural networks," in Proceedings of the 36th International Conference on Machine Learning, eds. K. Chaudhuri, and R. Salakhutdinov (New York: PMLR), 6105-6114.
Tulyakov S. Alameda-Pineda X. Ricci E. Yin L. Cohn J. F. Sebe N. (2016). "Self-adaptive matrix completion for heart rate estimation from face videos under realistic conditions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Las Vegas, NV: IEEE).
Wang J. Shan C. Liu L. Hou Z. (2024). Camera-based physiological measurement: Recent advances and future prospects. Neurocomputing 2024:127282. 10.1016/j.neucom.2024.127282
Wang T.-C. Mallya A. Liu M.-Y. (2021a). "One-shot free-view neural talking-head synthesis for video conferencing," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Nashville, TN: IEEE).
Wang W. Den Brinker A. C. Stuijk S. De Haan G. (2017). Algorithmic principles of remote PPG. IEEE Trans. Biomed. Eng. 64, 1479-1491. 10.1109/TBME.2016.2609282
Wang Y. Chen X. Zhu J. Chu W. Tai Y. Wang C. . (2021b). "HifiFace: 3D shape and semantic prior guided high fidelity face swapping," in Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (Montreal: IJCAI-21).
Wang Z. Bao J. Zhou W. Wang W. Li H. (2023). "AltFreezing for more general video face forgery detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (Vancouver, BC: IEEE), 4129-4138.
Wu J. Zhu Y. Jiang X. Liu Y. Lin J. (2023). Local attention and long-distance interaction of rPPG for deepfake detection. Vis. Comput. 40, 1083-1094. 10.1007/s00371-023-02833-x
Yang J. Guthier B. Saddik E. A. (2015). "Estimating two-dimensional blood flow velocities from videos," in International Conference on Image Processing (ICIP) (Quebec City, QC: IEEE), 3768-3772.
Yang X. Li Y. Lyu S. (2019). "Exposing deep fakes using inconsistent head poses," in ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (Brighton: IEEE), 8261-8265.
Yang Z. Wang H. Lu F. (2021). Assessment of deep learning-based heart rate estimation using remote photoplethysmography under different illuminations. arXiv [preprint] arXiv:2107.13193. 10.48550/arXiv.2107.13193
Yu P. Xia Z. Fei J. Lu Y. (2021a). A survey on deepfake video detection. IET Biomet. 10, 607-624. 10.1049/bme2.12031
Yu Z. Li X. Wang P. Zhao G. (2021b). TransRPPG: Remote photoplethysmography transformer for 3D mask face presentation attack detection. IEEE Signal Process. Lett. 28, 1290-1294. 10.1109/LSP.2021.3089908
Yu Z. Li X. Zhao G. (2020). "Remote photoplethysmograph signal measurement from facial videos using spatio-temporal networks," in 30th British Machine Vision Conference 2019 (Glasgow: BMVC).
Yu Z. Peng W. Li X. Hong X. Zhao G. (2019). "Remote heart rate measurement from highly compressed facial videos: an end-to-end deep learning solution with video enhancement," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (Seoul: IEEE).
Zaunseder S. Trumpp A. Wedekind D. Malberg H. (2018). Cardiovascular assessment by imaging photoplethysmography - a review. Biomedizinische Technik 2018, 118. 10.1515/bmt-2017-0119
Zhao W. Rao Y. Shi W. Liu Z. Zhou J. Lu J. (2023).
“Diffswap: high-fidelity and controllable face swapping via 3D-aware masked diffusion,” in 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (Vancouver, BC: IEEE), 85688577.