Revista mexicana de física

Print version ISSN 0035-001X

Rev. mex. fis. vol.69 no.5 México sep./oct. 2023  Epub 28-Oct-2024

https://doi.org/10.31349/revmexfis.69.051101 

Medical Physics

Circadian cycles: A time-series approach

L. García-Iglesias1 

A. L. Rivera2  3 

R. Fossion2  3 

1 Posgrado en Ciencias, Universidad Autónoma del Estado de Morelos, 62209 Cuernavaca, Morelos, México.

2 Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, 04510 Ciudad de Mexico, Mexico.

3 Centro de Ciencias de la Complejidad (C3), Universidad Nacional Autónoma de México, 04510 Ciudad de Mexico, Mexico. E-mail: ruben.fossion@nucleares.unam.mx


Abstract

The extraction of circadian cycles from experimental data can be interpreted as a specific case of time-series or signal analysis, but chronobiology and time-series analysis appear to have developed along separate paths. Whereas some techniques such as continuous (CWT) and discrete wavelet analysis (DWT) are used frequently in rhythmobiology, other specialized methods such as digital filters, nonlinear mode decomposition (NMD), singular spectrum analysis (SSA), empirical mode decomposition (EMD), ensemble empirical mode decomposition (EEMD) and complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) have only occasionally been applied. No studies are available that compare the applicability of a wide variety of different methods or for different variables, and this is the purpose of the present contribution. These methods improve the goodness-of-fit of the circadian cycle with respect to the standard approach of cosinor analysis. They have the additional advantage of being able to quantify the day-to-day variability of the circadian parameters of mesor, amplitude, period and acrophase around their average values, with potential clinical applications to distinguish between healthy and unhealthy populations. Finally, the circadian parameters are interpreted within the context of homeostatic regulation, with distinctive statistics for regulated and effector variables.

Keywords: Circadian parameters; day-to-day variability; homeostasis

1. Introduction

Circadian cycles are physical, mental and behavioral variations that follow an approximately 24-hour rhythm. The field of chronobiology has been dominated by basic research questions, such as which biological variables obey regular circadian cycles, what are the central and/or peripheral anatomical substrates that generate these rhythms [1, 2], and how one can distinguish between the rhythm of an internal circadian clock and the effects of masking or entrainment by external perturbations [3, 4]. It has become clear that most - if not all - variables of living systems obey circadian cycles, including those of bacteria [5], plants [6], insects [7], vertebrates in general [8] and humans in particular [9, 10, 11, 12], and cosinor analysis was established as the standard method to quantify these regularities in experimental data. Cosinor analysis uses a predetermined mathematical model of periodicity, in particular a cosine function, which is fitted to the data to describe the average cyclic behaviour over a given time period [13, 14].

Now that the universality of circadian cycles has been accepted, more applied research questions are also attracting attention, such as the analysis of irregularities in circadian rhythms as a proxy to assess pathological or sub-healthy conditions in the clinic, e.g., age-associated frailty [15], preclinical [16] or advanced dementia [17], hyperactivity and attention-deficit disorder [18], psychological vulnerability [19], insomnia [20], etc. Such irregularities are difficult to describe using cosinor analysis, but may be quantified using non-parametric approaches such as interdaily stability (IS) and intradaily variability (IV) [21, 22]. However, these non-parametric approaches make no attempt to extract circadian (24 h) or ultradian (<24 h) cyclic components from the data. This may be unfortunate, because a common research method to explore an unknown system, in particular in electronic engineering or materials science, is to stimulate that system with a calibrated test signal at a specific frequency and to investigate the amplitude and phase of the response signal, which in linear systems occurs at the same frequency [23, 24]. Similarly, circadian cycles can be interpreted as the reaction of the variables of a physiological system to the universal forcing of the alternation between day and night. The circadian parameters of mesor, amplitude, period and acrophase are of particular interest to evaluate this response, which may also depend on the specific role these variables play in homeostatic regulation [25, 26, 27, 28]. Additionally, not only the average values of the circadian parameters but also their day-to-day variability gives valuable information to distinguish between healthy and unhealthy populations [20].

Although the extraction of circadian cycles from experimental data can be interpreted as a particular case of time-series analysis, it would appear that chronobiology and time-series analysis have developed along different paths, and whereas several of the specialized time-series techniques have occasionally been applied in rhythmobiology, no studies are available that compare the applicability of a variety of methods or for different variables. One of the objectives of time-series analysis is to decompose continuous data into the sum of simpler components, such as a trend, (approximately) periodic components and fluctuations, and/or to extract one or more specific components, with the presupposition that these individual time-series components may reflect specific physical or physiological phenomena or processes.

Different families of time-series methods exist, differing in the mathematical principles by which this decomposition or extraction is realized. The most common decomposition method is Fourier spectral analysis, where the time series is represented as the sum of sine and cosine periodic functions of different frequencies and amplitudes [29]. A drawback of Fourier spectral analysis is that, strictly speaking, it may only be applied to stationary time series, i.e., where statistical measures such as the average and variance do not change over time, which may be an issue for experimental time series that often have, e.g., dominant trends or irregular behaviour in time. More advanced time-series techniques decompose time series either into more general mathematical functions that do not need to be periodic, or generate the basis in a numerical and data-adaptive way from the data itself, and are intensively used to study nonstationary time series. Examples of the former strategy are wavelet-based methods [30, 31], such as the continuous wavelet transform (CWT) or the discrete wavelet transform (DWT), and derived methods such as nonlinear mode decomposition (NMD) [32, 33]. Examples of the latter strategy are subspace-based methods such as singular spectrum analysis (SSA) [34, 35, 36], and envelope-based methods such as the empirical mode decomposition (EMD) [37, 38] and derived methods such as the ensemble empirical mode decomposition (EEMD) [39] and the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) [40].

The purpose of the present contribution is to explore the application of the aforementioned traditional and more recent time-series techniques to study the circadian cycles of selected time series for a variety of physiological variables. The performance of these time-series techniques will be compared to standard cosinor analysis. The circadian parameters and their day-to-day variability will be discussed and interpreted within the context of homeostatic regulation.

2. Materials and methods

2.1. Experimental time series

The present contribution is a secondary analysis of simultaneous time series of 5 different variables showing the effect of the alternation of day and night on the physiology of a healthy young adult male during 1 week of continuous monitoring. Whereas in previous publications the high-frequency spontaneous fluctuations of these time series were studied from the perspective of homeostatic regulation [25, 27], the objective here is to analyze the circadian cycles. Figure 1 shows actigraphy (act), which measures the number of movements per sample time interval or ‘epoch’, in the present case arbitrarily chosen as 1 min, using the Actigraph wGT3X-BT device; the cardiovascular variables of heart rate (HR), measured in beats per minute (bpm), and systolic blood pressure (BP), measured in millimeters of mercury (mmHg), both registered using the CONTEC ABPM50 device; and the thermoregulatory variables of skin temperature (Tskin) and core temperature (Tcore), measured in degrees Celsius (°C) using Maxim Thermochron DS1922L iButtons (“thermochron8k”). All time series have been resampled at the same rate of 1/(15 min) and all start at midnight. Unless otherwise stated, all time-series analyses have been carried out with Wolfram Mathematica, version 12.0.0.0.

Figure 1 7-day continuous monitoring of a healthy young adult male showing (a) actigraphy (act) expressed as the number of movements per minute (1/min), the cardiovascular variables of (b) heart rate (HR) in beats per minute (bpm) and (c) systolic blood pressure (BP) in millimeters of mercury (mmHg), and the thermoregulatory variables of (d) skin temperature (Tskin) and (e) core temperature (Tcore) in degrees Celsius (°C). The sample frequency is 1/(15 min) for all variables. Vertical gridlines indicate midnight. Also shown is the approximately 24 h circadian cycle, estimated by cosinor analysis, which shows the 7-day average properties (dashed curve), and by SSA, which also reflects the day-to-day variability (continuous curve). 

2.2. Time-series techniques

2.2.1. Cosinor

The traditional method to study the periodic aspects of circadian rhythms is cosinor analysis, see [13, 14]. The cosinor approach is based on regression techniques and is applicable to equidistant or non-equidistant time series x(n) of N discrete data points,

$$x(n) = \{x_1, \ldots, x_N\}. \tag{1}$$

The procedure consists of fitting a periodic function y(t) to time series x(n),

$$y(t) = M + A\cos(\omega t + \phi), \tag{2}$$

where the angular frequency ω = 2πf = 2π/T corresponds to the period T ≈ 24 h of the circadian cycle. Once a value for T has been fixed, minimizing the squared residual errors e_n² = (x_n − y_n)² over all data points allows one to find values for the rhythm-adjusted mean or mesor M, the amplitude A and the phase φ. Because φ specifies the fraction of the cycle covered up to t = 0, this parameter does not give any physiological information. Instead, a more interesting variable is the acrophase φ₀, which can be defined as the time of day where the circadian cycle reaches its maximum, with respect to a fixed moment in time that is the same for all subjects, e.g. taking midnight as a reference, and which can be expressed in hours and minutes (hh:mm) or, alternatively, as an angle (taking into account the relation 360° = 24 h) relative to this reference time. The values M, T, A and φ₀ obtained by cosinor analysis represent the average values of the circadian parameters over the whole duration of the considered data, which are 7 successive days in the present contribution. Algorithms to calculate cosinor have been published in Python for actigraphy [41] and for biological variables in general [42].
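As an illustration, the cosinor fit of Eq. (2) can be linearized and solved by ordinary least squares. The following Python sketch uses synthetic data (all parameter values are illustrative, not from the study) to recover mesor, amplitude and acrophase from a simulated week of measurements:

```python
import numpy as np

def cosinor(x, t, T=24.0):
    """Least-squares fit of y(t) = M + A cos(w t + phi) with fixed period T.

    Linearized as y = M + B cos(w t) + C sin(w t); then A = sqrt(B^2 + C^2)
    and phi = atan2(-C, B).
    """
    w = 2 * np.pi / T
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    (M, B, C), *_ = np.linalg.lstsq(X, x, rcond=None)
    A = np.hypot(B, C)
    phi = np.arctan2(-C, B)
    acrophase = (-phi / w) % T        # time of the daily maximum, midnight reference
    return M, A, phi, acrophase

# Synthetic week of data: mesor 70, amplitude 10, acrophase at 15:00
rng = np.random.default_rng(0)
t = np.arange(0, 7 * 24, 0.25)        # 7 days at 15-min sampling
x = 70 + 10 * np.cos(2 * np.pi / 24 * (t - 15)) + rng.normal(0, 1, t.size)
M, A, phi, acro = cosinor(x, t)       # M ≈ 70, A ≈ 10, acro ≈ 15 h
```

Fixing T keeps the problem linear in M, B and C; fitting T as well would require nonlinear least squares.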

2.2.2. Frequency domain filter (FDF)

Also called a spectral-domain filter, Fourier-domain filter or FFT-domain filter, or - more informally - a Fourier filter, Fourier-transform filter or FFT filter. Filtering is one of the most common forms of signal processing; its goal is to remove specific frequencies or frequency bands, and/or to improve the magnitude, phase or group delay in some part(s) of the spectrum of a signal [43]. Mathematically, a filter corresponds to the convolution of a signal x(t) with the impulse response g(t) of the filter to obtain a filtered signal c(t),

$$c(t) = x(t) * g(t) = \int_{-\infty}^{\infty} x(\tau)\, g(t-\tau)\, d\tau, \tag{3}$$

but this shift-and-multiply procedure may be very time-consuming for longer signals. In the Fourier domain, or frequency domain, the convolution reduces to a much simpler and efficient point-to-point multiplication [44],

$$\hat{c}(\omega) = \hat{x}(\omega)\, \hat{g}(\omega), \tag{4}$$

over a certain angular frequency range ω, and where x̂(ω) indicates the Fourier transform of x(t),

$$\hat{x}(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x(t)\, e^{i\omega t}\, dt, \tag{5}$$

and similarly ĉ(ω) and ĝ(ω) are the transforms of c(t) and g(t). The Fourier transform itself is also a convolution and can be realized by highly optimized techniques, such as the fast Fourier transform (FFT) [29]. The minimal essence of the FDF consists in first converting the signal to the frequency domain, then zeroing out all unwanted frequencies ω, i.e., setting ĝ(ω) = 0, whereas desired frequencies are permitted to pass in an unaltered way, i.e., ĝ(ω) = 1, and finally applying the inverse Fourier transform to obtain the filtered signal. This procedure readily allows one to construct any low-pass, high-pass, band-pass or band-stop filter. One limitation of the Fourier transform is that it reflects the average frequency content of the whole time series, such that it may be inadequate for non-stationary time series where the statistical properties vary over time. Another particularity is that using rectangular filter windows with abrupt frequency cut-offs as described above may cause artificial high-frequency “ripples” or “ringing” in the filter output, also called the Gibbs phenomenon, such that it is often advised to use specific filter windows that interpolate smoothly between 0 and 1 [29, 43, 44]. Apart from signal analysis, the FDF is also popular in digital image analysis [45].

FDF appears to have been applied to extract circadian cycles from experimental data only once [46]. Other types of digital filter have occasionally been applied, such as Butterworth filters [47], finite impulse response filters (FIR) [48], Kalman filters [49, 50], adaptive notch filters [51], particle filters [52, 53], Bayesian filters [54] and waveform filters [55]. Here, an FDF bandpass filter is used. In order to capture the circadian cycle at ω ≈ 7 (in units of number of oscillations during 7 successive days) and the day-to-day variability, a simple rectangular filter window is applied in the arbitrary range 0 ≤ ω ≤ 12, excluding the ultradian oscillations which may be expected to start to appear near ω ≈ 14 and where ω = 0 corresponds to the constant DC or mesor term.
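The minimal rectangular-window FDF described above can be sketched in a few lines of Python (numpy assumed; the pass band 0 ≤ ω ≤ 12 mirrors the 7-day, 15-min setting of the paper, but the signal is synthetic):

```python
import numpy as np

def fdf_bandpass(x, k_low, k_high):
    """Rectangular frequency-domain filter, Eq. (4): keep Fourier modes
    k_low..k_high (in oscillations per record), zero the rest, invert."""
    X = np.fft.rfft(x)
    g = np.zeros_like(X)            # g_hat(w) = 0 for unwanted modes
    g[k_low:k_high + 1] = 1.0       # g_hat(w) = 1 in the pass band
    return np.fft.irfft(X * g, n=len(x))

t = np.arange(0, 7 * 24, 0.25)      # 7 days at 15-min sampling (672 samples)
x = 5 + 3 * np.cos(2 * np.pi * t / 24) + 0.5 * np.cos(2 * np.pi * t / 6)
# The circadian cycle sits at k = 7 oscillations per 7-day record; the 6 h
# ultradian component at k = 28 is rejected by the 0 <= k <= 12 pass band.
circadian = fdf_bandpass(x, 0, 12)
```

Because the two components complete an integer number of cycles over the record, the rectangular window here separates them exactly; on real data, smooth windows avoid the Gibbs ringing mentioned above.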

2.2.3. Wavelet transforms (WT)

Wavelet analysis is an example of time-frequency analysis, i.e., the time and frequency domains are studied simultaneously, such that this approach is particularly well suited for the study of non-stationary processes [30, 31]. Two versions of wavelet analysis exist: (i) the continuous wavelet transform (CWT), which tends to be used more in scientific research to analyze complex signals and is superior from the viewpoint of visualizing results for individual signals, but has the drawback of using a non-orthonormal basis, such that the CWT expresses redundant information and the values of the wavelet coefficients are correlated; (ii) the discrete wavelet transform (DWT), which is preferred for solving technical real-life problems and for which orthonormal bases are available that allow a signal decomposition in an efficient and exact way, with considerable advantages such as computational speed, a simpler procedure for the inverse transform and application to multiple signals [30, 31]. One important problem when using the CWT or DWT is the choice of an appropriate mother wavelet or analyzing wavelet, which depends on the type of signal studied and/or the study objective, and where an inadequate choice may compromise the results.

The continuous wavelet transform (CWT) [30, 31] is performed by convolution of the signal x(t) with the two-parameter wavelet function ψ_{s,τ}(t),

$$W(s,\tau) = \int_{-\infty}^{\infty} x(t)\, \psi_{s,\tau}^{*}(t)\, dt, \tag{6}$$

where the asterisk denotes the complex conjugate and the wavelet function is obtained from the mother wavelet or analyzing wavelet,

$$\psi_{s,\tau}(t) = \frac{1}{\sqrt{s}}\, \psi_0\!\left(\frac{t-\tau}{s}\right), \tag{7}$$

where the parameter t specifies the wavelet location on the time axis, τ expresses a shift or translation, and s corresponds to an expansion (dilation) of the width of the wavelet in the time domain and represents the time scale of the wavelet transform. Often, the notion of “period” T is considered instead of the “time scale” s, since it is more suitable in many studies, but one has to be very careful because the identity T = s is correct only for special choices of the mother wavelet and its parameters. The main difference between Eqs. (5) and (6) is that the Fourier transform x̂(ω) represents the average contribution of the angular frequency ω over the whole time series, whereas the wavelet transform W(s,t) preserves information with respect to the timing t of contributions at specific scales s. Generally, the Morlet mother wavelet is regarded to offer an optimal time-scale (frequency) resolution but, under certain conditions, the Lognormal wavelet outperforms the Morlet wavelet in terms of time-frequency resolution, and the Lognormal wavelet is also in line with the logarithmic frequency resolution of the CWT [32, 56]. Of particular interest to describe approximately periodic phenomena, such as circadian cycles, is the ridge curve, which runs along the local maxima of the absolute value of W(s,t) with respect to s (for each fixed t) and thereby indicates the instantaneous frequency and analytic amplitude of the signal at each point in time. Extracting that ridge corresponds to a filtering process of the signal x(t), similar to Eq. (4), but now based on the wavelet transform W(s,t) instead of the Fourier transform x̂(ω), such that time localization is conserved. The CWT has been applied to analyze circadian cycles, especially for variables that can be expected to evolve smoothly and continuously in time because of restrictions imposed by physiological processes, such as body temperature [57].
In the present contribution, the freely available Multiscale Oscillatory Dynamics Analysis (MODA) toolkit for Matlab and Python from Lancaster University [32, 58, 59] was used to calculate the CWT with the Lognormal wavelet and to extract the ridge around period T = 24 h.
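For illustration, a bare-bones CWT and ridge extraction can also be coded directly. The sketch below substitutes the common Morlet mother wavelet for the Lognormal wavelet used with MODA, and applies it to a synthetic 24 h cosine; all names and values are illustrative:

```python
import numpy as np

def morlet_cwt(x, dt, periods, w0=6.0):
    """CWT of Eq. (6) with a Morlet mother wavelet (a stand-in for the
    Lognormal wavelet used in the paper)."""
    n = len(x)
    t = (np.arange(n) - n // 2) * dt                  # wavelet support, centred on 0
    # Morlet scale corresponding to a given Fourier period
    scales = np.asarray(periods) * (w0 + np.sqrt(2 + w0**2)) / (4 * np.pi)
    W = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        psi = np.pi**-0.25 * np.exp(1j * w0 * t / s - (t / s)**2 / 2) / np.sqrt(s)
        # cross-correlation of x with the conjugate wavelet implements Eq. (6)
        W[i] = np.convolve(x, np.conj(psi[::-1]), mode='same') * dt
    return W

dt = 0.25                                             # hours (15-min sampling)
t = np.arange(0, 7 * 24, dt)
x = np.cos(2 * np.pi * t / 24)                        # pure 24 h cycle
periods = np.linspace(18, 30, 25)
W = morlet_cwt(x, dt, periods)
ridge = periods[np.abs(W).argmax(axis=0)]             # ≈ 24 h away from the edges
```

The ridge is taken naïvely as the period of maximum |W| at each time point; MODA's ridge extraction additionally enforces continuity of the curve in time.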

The discrete wavelet transform (DWT) [30, 31] uses a logarithmic discretization of the mother wavelet function of Eq. (7),

$$\psi_{m,n}(t) = \frac{1}{\sqrt{s_0^m}}\, \psi\!\left(\frac{t - n\tau_0 s_0^m}{s_0^m}\right), \tag{8}$$

which may constitute an orthonormal basis, and where commonly the discrete wavelet parameters are chosen as s_0 = 2 and τ_0 = 1, such that the dilation parameter m specifies the scale and the translation parameter n indicates timing. The DWT also considers a scaling function or father wavelet φ_{m,n}, which obeys the same discretization relation as Eq. (8). The functions ψ_{m,n} and φ_{m,n} are convolved with the signal x(t), similar to Eq. (6) for the CWT, and serve as highpass and lowpass filters, respectively, where

$$T_{m,n} = \int_{-\infty}^{\infty} x(t)\, \psi_{m,n}(t)\, dt$$

are called detail coefficients or wavelet coefficients, and

$$S_{m,n} = \int_{-\infty}^{\infty} x(t)\, \phi_{m,n}(t)\, dt$$

are approximation coefficients. The inverse transformations,

$$d_m(t) = \sum_{n=-\infty}^{\infty} T_{m,n}\, \psi_{m,n}(t), \tag{9}$$

$$x_m(t) = \sum_{n=-\infty}^{\infty} S_{m,n}\, \phi_{m,n}(t), \tag{10}$$

are called the signal details and the smooth or continuous approximation of the signal at scale m, respectively. The DWT works in a recursive way, decomposing the signal x_m(t) at scale m into the sum of a lowpass-filtered part x_{m+1}(t) and a highpass-filtered part d_{m+1}(t),

$$x_m(t) = x_{m+1}(t) + d_{m+1}(t), \tag{11}$$

allowing to reconstruct the original signal as a multiresolution series expansion,

$$x(t) = x_{m_0}(t) + \sum_{m=1}^{m_0} d_m(t), \tag{12}$$

which is the sum of all the signal details and a single residual continuous approximation. In the case of a power-of-two logarithmic scaling as presented here, the maximum number of signal details into which the signal can be decomposed is m_0 = log_2(N), where N is the length of the time series. The DWT has been applied to analyze circadian cycles, especially for variables that contain relatively sharp discontinuities, jumping from zero to relatively high values, such as physical activity and in particular actigraphy data, for which it has been found that the Daubechies 4-tap mother wavelet identifies discontinuities in the first derivative of a signal well [60]. For the purpose of this contribution, several DWT mother wavelets have been explored. The Haar (Daubechies 1-tap) mother wavelet produced the best goodness-of-fit for all variables, but the resulting circadian cycles contained contributions with frequencies f > 1/(24 h). Higher-order mother wavelets eliminated these ultradian contributions but tended to have a lower goodness-of-fit. The Daubechies 4-tap wavelet was found to be a good trade-off between these two opposing limitations for all variables considered here, and all results will be presented using this specific mother wavelet. Matlab and Mathematica have integrated algorithms to calculate the DWT, and algorithms are available for Python as well [61].
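The recursive splitting of Eqs. (11)-(12) can be illustrated with the simplest (Haar) case, where the lowpass filter is a pairwise average and the highpass filter a pairwise difference; this is a sketch on synthetic data, not the Daubechies 4-tap analysis used for the results:

```python
import numpy as np

def haar_mra(x, levels):
    """Haar multiresolution analysis: return details d_1..d_levels and the
    final smooth approximation, all at full length, so that
    x = approx + sum(details), as in Eq. (12)."""
    x = np.asarray(x, dtype=float)
    assert len(x) % 2 ** levels == 0, "length must be divisible by 2^levels"
    details, prev = [], x
    for m in range(1, levels + 1):
        block = 2 ** m
        smooth = x.reshape(-1, block).mean(axis=1)
        approx = np.repeat(smooth, block)   # x_m(t): piecewise-constant approximation
        details.append(prev - approx)       # d_m(t) = x_{m-1}(t) - x_m(t), Eq. (11)
        prev = approx
    return details, prev

# toy signal: sinusoid (period 64 samples) plus a weak linear trend
x = np.sin(2 * np.pi * np.arange(256) / 64) + 0.1 * np.arange(256) / 256
details, approx = haar_mra(x, 4)
```

At the 15-min sampling of the paper, level m averages over blocks of 2^m × 15 min, so the circadian scale is reached after roughly 7 levels.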

2.2.4. Singular Spectrum Analysis (SSA)

SSA is an example of a subspace or phase-space method and is closely related to embedding and attractor reconstruction of dynamical systems [34]. SSA has been discussed in detail in a number of textbooks [34, 35, 36, 62]. A standard and very extensive reference is [63], whereas a short and very accessible introduction can be found in [64]. Unlike Fourier or wavelet analysis, which express a signal as a superposition of predefined functions, SSA is considered to be data adaptive, or model and user independent, because the basis functions are generated from the data itself. SSA can be explained as a 3-step process. First, the univariate signal x(n) of length N is embedded into a so-called trajectory matrix X by sliding a window of length L over the data in unitary steps, such that the dimension of the matrix is K × L, where K = N − L + 1. This matrix may be interpreted as a multivariate or multi-channel signal [32, 33] or as the underlying phase space of the time series [34]. By construction, the matrix obeys a Hankel symmetry, i.e., the upsloping diagonals consist of identical elements. Second, the matrix decomposition method of singular value decomposition (SVD) is applied to express X as a sum of elementary or rank-1 matrices X_k or - similarly - to express the original phase space as a superposition of “sub phase-spaces” [65],

$$X = \sum_{k=1}^{r} \sigma_k X_k. \tag{13}$$

Here, σ_1 ≥ σ_2 ≥ … ≥ σ_r are the ordered singular values, where r ≤ min(K, L) = rank(X) is the matrix rank of the trajectory matrix. SVD is similar to an approach with correlation matrices or principal component analysis (PCA) [66], hence the alternative name of “single-channel PCA” for SSA [32, 33]. Third, the inverse of step 1 is applied, converting the matrices X_k back into so-called reconstructed time-series components g_k(n), where in general the matrices X_k are not Hankel, such that care needs to be taken to average over the upsloping diagonals and to apply weights to compensate for the difference in length of these diagonals [64]. Finally, the original time series can be decomposed as a sum over reconstructed components,

$$x(n) = \sum_{k=1}^{r} \sigma_k\, g_k(n), \tag{14}$$

where λ_k = σ_k² corresponds to the partial variance of the component g_k(n) and Σ_{k=1}^{r} λ_k = Var(x) is the total variance of the time series x(n). Not all g_k(n) correspond to independent components; instead, they should be “grouped” into “modes” together with other g_l(n), l = 1, …, r, that have similar partial variance and/or average frequency, according to the corresponding elementary matrices X_k and X_l, k, l = 1, …, r, that together approximate the Hankel symmetry. In particular, approximately periodic behaviour in a time series will be described by 2 components g_k(n) that share the same partial variance and average frequency, and that together determine the phase, similar to the mathematical description of periodicity, A cos(ωt + φ) = B cos(ωt) + C sin(ωt), where the phase can be expressed either explicitly as φ (left-hand expression) or through a weighted sum of sin(ωt) and cos(ωt) functions (right-hand expression). SSA has been applied to the analysis of circadian cycles on only a few occasions [20, 46, 67, 68]. Because of the ordering of the partial variances, and given that the trend and the circadian cycle constitute the dominant features of the time series, it has been found that g_1 corresponds to the mesor and g_2 + g_3 to the circadian cycle [20, 46, 68]. A Python code for SSA analysis is available for the particular variable of actigraphy [41] and a general algorithm is available in R [69]. It is suggested to choose the length of the embedding window L as a multiple of the (average) periodicity of the data, i.e. L = mT, where m is an integer and T the period, such that an obvious choice for the L parameter is L = T = 24 h = 1440 min, which is the value used here as well [20, 46, 68].
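The three SSA steps (embedding, SVD, diagonal averaging) can be sketched in numpy as follows; the toy signal and the short window of L = 24 samples are for illustration, whereas the paper uses L = 1440 min:

```python
import numpy as np

def ssa_decompose(x, L):
    """Basic SSA: embed x into the trajectory matrix (step 1), take the SVD
    (step 2), and Hankelize each rank-1 term back into a time series by
    anti-diagonal averaging (step 3); sigma_k is absorbed into g_k here."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[j:j + L] for j in range(K)])   # X[i, j] = x[i + j]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for k in range(len(s)):
        Xk = s[k] * np.outer(U[:, k], Vt[k])
        # average over anti-diagonals i + j = n (np.mean handles the weights)
        g = np.array([Xk[::-1].diagonal(n - (L - 1)).mean() for n in range(N)])
        comps.append(g)
    return np.array(comps)

# a pure sinusoid has a rank-2 trajectory matrix: two components suffice
x = np.cos(2 * np.pi * np.arange(96) / 24)    # period of 24 samples
comps = ssa_decompose(x, L=24)
circadian = comps[0] + comps[1]               # grouping of the oscillatory pair
```

The paired components comps[0] and comps[1] illustrate the grouping rule described above: together they carry the phase of the oscillation, just as the B cos(ωt) + C sin(ωt) decomposition does.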

2.2.5. Empirical Mode Decomposition (EMD) and related methods

Similar to SSA, EMD is a data-adaptive method, but its approach is empirical instead of analytical and focuses on the upper and lower envelopes of the time series [37, 38]. The goal of this method is to decompose the original signal into a collection of intrinsic mode functions (IMFs), which are simple oscillatory modes with meaningful instantaneous frequencies, and a residual trend. This requirement is enforced by the following conditions: (1) the number of zero crossings and the number of local extrema of the function must differ by at most one, and (2) the “local mean” of the function is zero. The sifting process by which the signal is decomposed is the following. First, the local maxima are identified and connected by a cubic spline, constructing the upper envelope. This procedure is repeated with the local minima to construct the lower envelope. Then, the average of these envelopes is designated as the local mean of the signal, m_1, which can be used as a reference that separates the lower-frequency oscillations in the signal (the part included in the local mean) from the highest-frequency oscillations (the oscillations around the local mean). The difference between the signal and m_1 is the first component h_1, i.e. x(t) − m_1 = h_1. By subtracting the local mean, the high (local) frequency oscillations can be separated from the rest of the signal. However, this subtraction can create new local extrema, foiling the requirements set for an IMF. To actually recover the highest-frequency IMF component, the sifting procedure is applied again and again, until some stopping criterion is fulfilled and a sufficiently pure IMF results. In the second sifting process, h_1 is treated as the data, and h_1 − m_{11} = h_{11}. In general,

$$h_{1(k-1)} - m_{1k} = h_{1k}, \tag{15}$$

which can be repeated k times until h_{1k} ≡ c_1 is an IMF. Subtracting the first IMF from the signal, the first residue r_1 = x(t) − c_1 is obtained. This can be repeated on all subsequent r_j, with as a result r_1 − c_2 = r_2, …, r_{n−1} − c_n = r_n. The sifting process is stopped when the residue r_n becomes a monotonic function, such that no further IMFs can be extracted. The n IMFs then constitute a decomposition of the original signal,

$$x(t) = \sum_{i=1}^{n} c_i + r_n. \tag{16}$$

EMD separates the different frequency scales of the signal into separate IMFs, but it is not guaranteed that (when analyzing data from some natural process) each IMF represents a physical time scale of the process. Often, ranges of IMFs need to be added together to reconstruct the information pertaining to a single natural time scale [39], constituting the so-called mode-mixing problem, and some IMF components may represent the properties of measurement noise instead of the underlying physical process. To assist in selecting the IMFs with a physical meaning, a statistical significance test has been proposed which compares the IMFs against a null hypothesis of white noise [70]. The EMD has only rarely been applied before to analyze circadian, ultradian and infradian rhythms [16, 71].
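A deliberately naïve sketch of the sifting procedure may clarify the algorithm (fixed number of sifting passes, no mirroring of extrema at the edges, simplistic stopping rules); note that the reconstruction of Eq. (16) holds exactly by construction:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift(x, t, n_sift=10):
    """Extract one oscillatory component by repeatedly subtracting the mean
    of the cubic-spline upper and lower envelopes (fixed number of passes)."""
    h = x.copy()
    for _ in range(n_sift):
        maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if len(maxima) < 4 or len(minima) < 4:   # too few extrema for envelopes
            break
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        h = h - (upper + lower) / 2              # subtract the local mean m
    return h

def emd(x, t, max_imfs=8):
    """Decompose x into IMFs c_i plus a residual r_n, Eq. (16)."""
    imfs, r = [], x.copy()
    for _ in range(max_imfs):
        c = sift(r, t)
        if np.allclose(c, r):                    # sifting changed nothing: r is the trend
            break
        imfs.append(c)
        r = r - c
    return imfs, r

t = np.linspace(0, 10, 500)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 0.5 * t)
imfs, r = emd(x, t)          # x == sum(imfs) + r by construction
```

Production implementations such as libeemd add extremum mirroring at the boundaries and principled stopping criteria, which this sketch omits.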

In order to solve the mode-mixing problem to a great extent and extract more robust and physically meaningful IMFs, a variant of EMD is used, called ensemble empirical mode decomposition (EEMD) [39], which performs EMD over an ensemble of copies of the signal plus Gaussian white noise. The EEMD has been applied before to analyze circadian and infradian rhythms [72, 73]. However, the EEMD does not allow a complete reconstruction as in Eq. (16), since the original signal cannot be exactly recovered by adding together its EEMD components. In order to overcome this situation, a second variant called the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) is used [40, 74]; this procedure adds a particular noise at each stage and achieves a complete decomposition with no reconstruction error. Apparently, CEEMDAN has never been applied to analyze circadian rhythms.

All calculations for EMD, EEMD and CEEMDAN in this contribution were carried out with the free Python package libeemd [75].

2.2.6. Nonlinear mode decomposition

NMD is another data-adaptive method, but in contrast to SSA and EMD it focuses on obtaining time-series components that have a more obvious physical meaning [32, 33]. It is based on the combination of time-frequency analysis [32, 76], surrogate data tests [77] and the idea of harmonic identification [78]. Given that experimental signals are rarely purely sinusoidal, NMD considers the full nonlinear modes (NMs) in the signal, which are defined as the sum of all components that correspond to the same activity,

$$c(t) = A(t)\, v(\phi(t)) = A(t) \sum_{h} a_h \cos(h\phi(t) + \phi_h), \tag{17}$$

where A(t) and φ(t) are the instantaneous amplitude and phase, respectively, and v(φ(t)) is some periodic function, called the “wave-shape” function. The components of the NM are referred to as harmonics, with the h-th harmonic being represented by a term a_h cos(hφ(t) + φ_h). It is assumed that a signal can be represented as a sum of NMs of the form (17) plus some noise η(t):

$$s(t) = \sum_{i} c_i(t) + \eta(t). \tag{18}$$

To do this, four steps are necessary:

  1. Extract the fundamental harmonic of a nonlinear mode (NM) accurately from the signal’s time-frequency representation (TFR), in this case, using the same Lognormal wavelet as in CWT.

  2. Find candidates for all its possible harmonics, based on its properties.

  3. Identify the true harmonics (i.e., corresponding to the same mode), and confirm by surrogate testing.

  4. Reconstruct the full NM by summing together all the true harmonics; subtract it from the signal, and iterate the procedure on the residual until a preset stopping criterion is met.

NMD has been applied on only a few occasions to analyze circadian cycles [68, 79].

2.3. Goodness of fit

The coefficient of determination or goodness of fit R² is an important parameter to evaluate the quality of the extracted circadian rhythms. R² compares the variance of the residual errors e_n around the estimate of the circadian cycle y_n to the variance of the time series x(n) around its average value x̄,

$$R^2 = 1 - \frac{\mathrm{Var}(e)}{\mathrm{Var}(x)} = 1 - \frac{\sum_{n=1}^{N} (x_n - y_n)^2}{\sum_{n=1}^{N} (x_n - \bar{x})^2}. \tag{19}$$
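In code, Eq. (19) is essentially a one-liner:

```python
import numpy as np

def r_squared(x, y):
    """Goodness of fit R^2, Eq. (19): residual variance of x around the
    extracted circadian cycle y, relative to the variance of x around its mean."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return 1.0 - np.sum((x - y) ** 2) / np.sum((x - x.mean()) ** 2)
```

A perfect fit gives R² = 1, whereas fitting only the mean of the data gives R² = 0; R² can become negative for a fit worse than the mean.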

2.4. Correlation coefficients

In the following, correlation coefficients are used to compare the circadian cycles extracted by the different methods. The Pearson correlation coefficient is a measure of the linear correlation between two variables. It takes values between +1 and −1, where +1 is total positive linear correlation, 0 is no linear correlation, and −1 is total negative linear correlation. The Pearson correlation coefficient r_{x,y} between time series x and y is defined as

$$r_{x,y} = \frac{\mathrm{cov}(x,y)}{\sigma_x \sigma_y} = \frac{\sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{N} (x_i - \bar{x})^2}\, \sqrt{\sum_{i=1}^{N} (y_i - \bar{y})^2}}, \tag{20}$$

where cov(x, y) is the covariance between time series x and y, and σ_x and σ_y are the corresponding standard deviations. The Spearman correlation coefficient ρ_{x,y} assesses monotonic relationships (whether linear or not) and is defined as the Pearson correlation coefficient between the ranks rk_x and rk_y of the corresponding time series,

ρ_{x,y} = cov(rk_x, rk_y)/(σ_{rk_x} σ_{rk_y}) (21)

If there are no repeated data values, a perfect Spearman correlation of +1 or -1 occurs when each of the variables is a perfect monotone function of the other.
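Both coefficients are available in standard libraries; a minimal sketch using scipy.stats on synthetic data (the monotonic but nonlinear relation between the two series is an illustrative assumption):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(500))        # a random-walk-like series
y = x ** 3 + 5.0 * rng.standard_normal(500)    # monotonic, nonlinear relation

r, _ = stats.pearsonr(x, y)     # linear correlation, Eq. (20)
rho, _ = stats.spearmanr(x, y)  # rank (monotonic) correlation, Eq. (21)
# For a monotonic but nonlinear relation, rho is close to 1 even
# when the linear coefficient r is smaller.
```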

3. Results

3.1. Circadian parameters

Approximately 24-h circadian cycles were extracted from the continuous data of all variables considered: actigraphy (act), heart rate (HR), blood pressure (BP), skin temperature (Tskin) and core temperature (Tcore); see Fig. 1 for a plot of these cycles as extracted by cosinor analysis and SSA. Whereas by construction cosinor analysis represents the circadian cycle as a periodic function with constant circadian parameters that reflect the average behaviour over the whole period considered, 7 successive days in the present case, the other methods show a circadian cycle that varies over time and allow daily estimates of the circadian parameters to be extracted, e.g., {A}={A1,A2,…,A7} for the amplitude, and similarly for the other parameters. All methods agree rather well on the week-average values of the circadian parameters, mean({T}), mean({M}), mean({A}) and mean({ϕ0}); see Fig. 2 for the specific variable of actigraphy. Results are similar for the other variables of heart rate, blood pressure and core and skin temperature. In particular, the average values of the circadian parameters according to each method tend to lie within the error bars of the standard deviations SD({T}), SD({M}), SD({A}) and SD({ϕ0}) as evaluated by all other methods. These standard deviations can be interpreted as a quantification of the day-to-day variability. There are large differences in the evaluation of the day-to-day variability by the different methods: some methods systematically give large estimates (EMD, EEMD, CEEMDAN), whereas others consistently give much smaller results (CWT, DWT, NMD).
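Daily estimates of the circadian parameters and their day-to-day variability can be obtained along the following lines. This is a simplified sketch; the function name and the max/min-based estimators of amplitude and acrophase are illustrative, not necessarily the exact procedure of this study:

```python
import numpy as np

def daily_parameters(cycle, samples_per_day=24 * 60):
    """Illustrative daily estimates of mesor M, amplitude A and acrophase
    phi_0 (hour of the daily maximum) from an extracted circadian cycle
    sampled once per minute; the day-to-day variability is the standard
    deviation of the daily values."""
    days = np.reshape(cycle, (-1, samples_per_day))        # one row per day
    M = days.mean(axis=1)                                  # daily mesor
    A = 0.5 * (days.max(axis=1) - days.min(axis=1))        # daily amplitude
    phi0 = days.argmax(axis=1) / samples_per_day * 24.0    # acrophase (h)
    return {"M": (M.mean(), M.std()),
            "A": (A.mean(), A.std()),
            "phi0": (phi0.mean(), phi0.std())}
```

Applied to a 7-day extracted cycle, each entry gives the week-average value and the day-to-day standard deviation of the corresponding parameter.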

Figure 2 Circadian parameters for actigraphy. a) period T, b) mesor M, c) amplitude A and d) acrophase ϕ 0. Week-average values are shown and, where possible, also error bars with the standard deviation of the day-to-day variability. Also the mean value for each parameter over all methods is indicated (horizontal lines). 

3.2. Goodness-of-fit of circadian cycles

Table I shows the estimates of the coefficient of determination R² by the different methods considered for the circadian cycles of all variables. An ANOVA test indicates that the between-group variance of R², i.e., the variance between different variables, is larger than the within-group variance, i.e., the variance of R² values between different methods for the same variable (F ratio = 1255 with p ≈ 10⁻⁴¹), and post-hoc tests with Bonferroni, Tukey, Duncan and Student-Newman-Keuls corrections all indicate that differences in R² values between different variables are statistically significant. Therefore, R² depends primarily on the type of variable and only secondarily on the method applied, see Fig. 3a).
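The grouping underlying this comparison can be reproduced from the R² values of Table I, e.g., with scipy's one-way ANOVA. The exact F ratio depends on the grouping and correction conventions used, so only its order of magnitude should be compared with the value quoted in the text:

```python
from scipy import stats

# R^2 values (%) from Table I, grouped per variable (9 methods each,
# in the order cosinor, FDF, CWT, DWT, SSA, NMD, EMD, EEMD, CEEMDAN).
act   = [16.0, 20.5, 17.8, 19.9, 21.3, 16.7, 14.6, 27.2, 20.5]
hr    = [38.7, 48.6, 39.3, 44.3, 49.4, 43.3, 42.5, 51.7, 53.1]
bp    = [50.2, 57.4, 51.9, 56.4, 57.4, 53.3, 52.7, 66.3, 45.1]
tskin = [29.5, 41.9, 31.4, 35.1, 42.7, 28.9, 24.0, 49.9, 33.8]
tcore = [69.3, 75.2, 68.4, 74.9, 76.7, 74.9, 76.0, 72.0, 80.1]

# One-way ANOVA: is the between-variable variance larger than the
# within-variable (i.e., between-method) variance?
F, p = stats.f_oneway(act, hr, bp, tskin, tcore)
```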

Table I Coefficient of determination or goodness-of-fit parameter R 2 of Eq. (19) shown in percentage, for the circadian cycle estimated by different methods for all variables. 

R 2 (%)
act (1/min) HR (bpm) BP (mmHg) Tskin (°C) Tcore (°C)
Cosinor 16.0 38.7 50.2 29.5 69.3
FDF 20.5 48.6 57.4 41.9 75.2
CWT 17.8 39.3 51.9 31.4 68.4
DWT 19.9 44.3 56.4 35.1 74.9
SSA 21.3 49.4 57.4 42.7 76.7
NMD 16.7 43.3 53.3 28.9 74.9
EMD 14.6 42.5 52.7 24.0 76.0
EEMD 27.2 51.7 66.3 49.9 72.0
CEEMDAN 20.5 53.1 45.1 33.8 80.1

Figure 3 Selected parameters to characterize circadian cycles of actigraphy (act), heart rate (HR), blood pressure (BP), skin temperature (Tskin) and core temperature (Tcore): a) coefficient of determination R 2 and b) modulability A/M. The graphs illustrate that the parameter values depend primarily on the variable and much less on the specific method used to calculate the circadian parameters. A/M is shown in logarithmic scale to more clearly show differences in the rather small values for core and skin temperature. Variables are separated according to behavioural, cardiovascular and thermoregulatory mechanisms (vertical gridlines). 

For specific variables, the other methods tend to improve the description of the circadian cycle by up to 10-20% compared to cosinor analysis. Exceptions are the EMD analysis of actigraphy and skin temperature, and CEEMDAN for blood pressure, where the description is up to 5% worse than for cosinor analysis, possibly due to the mode-mixing problem typical of these methods. CWT and NMD only slightly improve on the cosinor description, whereas DWT gives an average performance compared to the other methods. Maximal R² values are obtained with EEMD for the particular variables of actigraphy, blood pressure and skin temperature, and with CEEMDAN for heart rate and core temperature. On the other hand, only SSA and FDF give a consistently above-average description for all variables.

3.3. Spectral content of circadian cycles

Figure 4 shows the spectral content, i.e., the Fourier power spectra, of the circadian cycles as found by the different methods for actigraphy. Similar results are obtained for the other variables of heart rate, blood pressure and core and skin temperature. By construction, circadian cycles described by the cosinor analysis of Eq. (2) should correspond to a spectrum with a single peak at the circadian frequency of f ≈ 7 (cycles/7 days), but because of the phenomenon of spectral leakage small power contributions are also present at neighbouring frequencies [80]. Comparing the spectra of the other methods to the spectrum of cosinor analysis gives a measure of how the different estimates of the circadian cycle are constituted of different combinations of lower and higher frequencies. Spectra of filter-based methods such as FDF and DWT cut off abruptly and are exactly zero outside a specific bandpass range. Spectra are rather narrow for DWT, NMD and CWT, reflecting more regular circadian cycles. The spectrum for EMD, on the other hand, is very wide, corresponding to a more irregular estimate of the circadian cycle. Spectra for SSA, EEMD and CEEMDAN have intermediate widths, indicating a description of the circadian cycles that may be moderately irregular. EMD, EEMD and CEEMDAN also tend to have frequency contributions that are not contained within the original signal.
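Spectral leakage [80] is easy to reproduce numerically: a sinusoid that completes an integer number of cycles over the record concentrates all its power in a single Fourier bin, whereas a slightly detuned sinusoid leaks power into neighbouring bins. A minimal sketch on synthetic data (not the actigraphy series):

```python
import numpy as np

n = 7 * 24 * 60                              # 7 days at 1-min sampling
t = np.arange(n)
exact   = np.cos(2 * np.pi * 7.0 * t / n)    # exactly 7 cycles per record
detuned = np.cos(2 * np.pi * 7.3 * t / n)    # non-integer number of cycles

P_exact = np.abs(np.fft.rfft(exact)) ** 2
P_detuned = np.abs(np.fft.rfft(detuned)) ** 2

# Fraction of total power in the circadian bin (7 cycles / 7 days):
frac_exact = P_exact[7] / P_exact.sum()      # essentially all the power
frac_detuned = P_detuned[7] / P_detuned.sum()  # part leaks to neighbours
```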

Figure 4 Fourier power spectra of time series of actigraphy (green shaded curves), circadian cycles as extracted by different methods (black continuous curves), and compared to standard cosinor analysis (black dashed curve). All spectra are presented in semilogarithmic scale. Angular frequencies ω are shown in units of number of cycles for 7 consecutive days. 

3.4. Correlation between circadian cycles extracted by different methods

Table II shows the Spearman correlation coefficients between the circadian cycles as estimated by the different methods applied to actigraphy data. Similar results are obtained for the other variables of heart rate, blood pressure and core and skin temperature. The correlation coefficients are rather large, mostly in the range 0.75-0.95, which is unsurprising because it is always the same circadian process that is described, merely observed from the perspectives of different methods. For all variables, DWT and SSA are the methods whose circadian cycles always correlate above average with those of the other methods. CWT and NMD also tend to correlate above average with the other methods. Methods whose extracted cycles tend to correlate below average with those of the other methods are EMD, EEMD and CEEMDAN.

Table II Spearman’s rank correlation coefficient ρ of Eq. (21) for circadian cycles estimated by different methods for actigraphy. 

ρ
Cosinor FDF CWT DWT SSA NMD EMD EEMD CEEMDAN
Cosinor 1 0.876 0.936 0.936 0.938 0.950 0.814 0.767 0.822
FDF 0.876 1 0.930 0.945 0.950 0.908 0.745 0.852 0.893
CWT 0.936 0.930 1 0.971 0.979 0.949 0.865 0.867 0.938
DWT 0.936 0.945 0.971 1 0.982 0.971 0.789 0.868 0.895
SSA 0.938 0.950 0.979 0.982 1 0.960 0.813 0.874 0.911
NMD 0.950 0.908 0.949 0.971 0.960 1 0.753 0.837 0.877
EMD 0.814 0.745 0.865 0.789 0.813 0.753 1 0.729 0.812
EEMD 0.767 0.852 0.867 0.868 0.874 0.837 0.729 1 0.895
CEEMDAN 0.822 0.893 0.938 0.895 0.911 0.877 0.812 0.895 1

4. Discussion

4.1. Time-series description of circadian cycles

Compared to standard cosinor analysis, the other methods in general improve the goodness-of-fit R² of the circadian cycle. Comparing all methods, SSA, EEMD, FDF and CEEMDAN perform above average, DWT on average, and NMD, EMD and CWT below average. In particular, the good performance of FDF is surprising, given that it is one of the simplest filter techniques available and that here a crude rectangular filter window was applied. This good performance may be explained by the broad bandpass frequency range that was chosen. An alternative approach in the literature is to describe the circadian cycle by the fundamental frequency of ≈ 1/24 h together with its higher harmonics [17]. The NMD description is also based on harmonics, but appears to underperform in comparison with the other methods. A possible reason is that all frequency-based methods utilized for rhythm detection suffer from a lack of specificity [71, 81], such that harmonics may be due to ultradian rhythms or non-stationarities and may not be appropriate to describe the circadian cycle. Whereas in the case of FDF and other digital filters the frequency range needs to be defined by the user and may be arbitrary, methods such as SSA, EEMD and CEEMDAN generate their time-series components and the corresponding frequency ranges in a data-adaptive way. These frequency ranges are moderately wide, and appear to maximize R² in comparison with either narrower (cosinor, CWT, DWT and NMD) or wider frequency ranges (EMD), see Fig. 4. The circadian cycles derived by EEMD and CEEMDAN correlate less well with the cycles extracted by other methods; it is therefore possible that these EMD-based techniques capture more specific features of the experimental data which are not accessible to other methods, and it seems that EEMD and CEEMDAN are more applicable to some variables than to others.
In contrast, circadian cycles described by SSA correlate above average with the results from other methods and therefore appear to capture general properties of the experimental data. Also, SSA appears to give a good description of circadian cycles independently of the specific variable studied, perhaps because the circadian cycle may be a limit cycle [82] or a chaotic attractor [83], which subspace or phase-space methods such as SSA may detect very well. Overall, it appears that data-adaptive methods are particularly suited to extract the circadian cycle from experimental data. In this section, the goodness-of-fit obtained by specific methods was emphasized because it is hypothesized that larger R² values will make it easier to statistically distinguish between healthy and unhealthy populations in clinical applications of circadian cycles.

4.2. Circadian parameters and day-to-day variability

All methods considered agree rather well on the average values of the circadian parameters. These average values are mostly within the error bars representing the day-to-day variability according to each method. Ref. [20] presented evidence that not only the average values of the circadian parameters but also their day-to-day variability contribute to distinguishing between healthy and unhealthy populations. The present contribution demonstrates that one of the main differences between the various methods is their quantification of the day-to-day variability. It is not clear what part of this variability is due to the data itself (intrinsic) and what part may be artificially produced by the specific method applied (extrinsic). Here, it is hypothesized that if a method is consistent in its quantification of day-to-day variability, and the extrinsic part of the variability is similar for different populations, then differences in this variability between the populations must be due to physiological effects. If this condition is fulfilled, then day-to-day variability may indeed be applied for diagnostic purposes, as proposed before [20].

4.3. Homeostatic interpretation of circadian parameters

The range of R² for the circadian cycles of all variables considered here is very wide, 15-80%, see Table I, and in the following this wide range will be interpreted. Refs. [25, 26, 27, 28, 84, 85, 86] proposed that physiological variables that play different roles in homeostatic regulation exhibit different time-series behavior with distinct statistical properties. In particular, it was suggested that regulated variables such as blood pressure and core temperature fluctuate within a narrow range around a specific setpoint, representing Claude Bernard's constant internal environment; on the other hand, effector variables such as heart rate and skin temperature are responsible for Walter Cannon's adaptive responses to internal and external perturbations and are much less restricted in space and time. Therefore, it was proposed that in optimal conditions of youth and health effector variables are more variable and less regular than regulated variables. This paradigm was investigated for the spontaneous fluctuations of physiological variables at rest and in the absence of dominant stimuli. It is unclear whether a similar phenomenology can be expected in the presence of an external forcing such as the alternation of day and night. The coefficient of determination R² as determined by the different methods appears to answer this question affirmatively, see Table I and Fig. 3a): the regulated variables considered in this contribution (blood pressure and core temperature) indeed appear to be associated with more regular time series (higher R² values), and the effector variables (heart rate and skin temperature) with more irregular or variable time series (lower R² values). Actigraphy could be an example of a third category of variables, related to behavior or the conductome, see [87], corresponding to very irregular time series (minimal R² values). 
To investigate this paradigm further, a derived measure A/M is proposed, called modulability for the purpose of this contribution, which expresses the amplitude A as a percentage of the mesor M and is therefore dimensionless, allowing comparisons between different variables. Table III and Fig. 3b) show that, for a specific homeostatic mechanism, modulability appears to be larger for the effector variable than for the corresponding regulated variable. For the thermoregulatory mechanism, A/M ≈ 3% for skin temperature vs. ≈ 1.3% for core temperature. For the cardiovascular mechanism, A/M ≈ 21% for heart rate vs. ≈ 15% for blood pressure. This is reasonable because regulation tries to maintain regulated variables within a very narrow homeostatic range, whereas the range of effector variables is constrained only by what is physiologically possible. Modulability for behavioural variables such as actigraphy is even larger, A/M ≈ 65%, because here there are no comparable physiological limitations that must be respected. It is hypothesized that there may be a relation between this homeostatic time-series paradigm and the phenomena of masking and entrainment in rhythmobiology. In particular, it is suggested that effector variables are more prone to the effects of masking and entrainment than regulated variables, such that the latter may be expected to reflect more easily the “true” cycles of the internal circadian clocks.
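Modulability can be computed from the circadian parameters of any of the methods; a minimal sketch based on a single-component cosinor fit by linear least squares (the function name and the synthetic core-temperature-like values are illustrative):

```python
import numpy as np

def cosinor_modulability(x, t, period=24.0):
    """Fit x(t) ~ M + A*cos(2*pi*t/period + phi_0) by linear least squares
    (single-component cosinor) and return the modulability 100*A/M (%)."""
    w = 2 * np.pi * t / period
    X = np.column_stack([np.ones_like(t), np.cos(w), np.sin(w)])
    M, b, c = np.linalg.lstsq(X, x, rcond=None)[0]
    A = np.hypot(b, c)                  # amplitude from the two quadratures
    return 100.0 * A / M

# Example: a core-temperature-like signal with M = 37 deg C, A = 0.5 deg C,
# for which the modulability is 100 * 0.5 / 37, i.e. about 1.35 %.
t = np.arange(0, 7 * 24, 1 / 60.0)      # 7 days in hours, 1-min sampling
x = 37.0 + 0.5 * np.cos(2 * np.pi * t / 24.0)
```

Because A and M carry the same units, the ratio is dimensionless and can be compared directly across variables, as in Table III.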

Table III Modulability A/M shown in percentage, for the circadian cycles estimated by different time-series techniques for all variables. 

A/M (%)
act (1/min) HR (bpm) BP (mmHg) Tskin (°C) Tcore (°C)
Cosinor 66.9 20.4 15.6 3.07 1.39
FDF 71.4 22.2 16.1 3.21 1.44
CWT 65.2 18.0 13.2 2.64 1.09
DWT 56.7 17.7 13.3 2.76 1.25
SSA 67.8 20.6 15.8 3.07 1.35
NMD 64.8 21.6 14.1 3.41 1.25
EMD 63.0 24.1 15.4 2.75 1.40
EEMD 65.9 21.1 13.7 3.44 1.15
CEEMDAN 61.4 22.5 11.8 1.92 1.23

4.4. Strengths and limitations

The strengths of the present contribution are the inclusion of 5 different variables (actigraphy, heart rate, blood pressure, skin temperature and core temperature) from 3 different mechanisms (behavioural, cardiovascular and thermoregulatory) and 9 different methods (cosinor, FDF, CWT, DWT, SSA, EMD, EEMD, CEEMDAN, NMD) from at least 4 different domains (mathematical model, time-frequency, analytical data-adaptive and empirical data-adaptive). The limitation is obviously that data from only a single subject were included. Therefore, the results of the present contribution are presented as hypotheses which should be rather straightforward to verify and which may inspire further investigations.

5. Conclusion

A good description of circadian cycles appears to depend on including an adequately broad frequency range around the central frequency of 1/24 h, rather than on including harmonics of this fundamental frequency. Recent time-series techniques such as SSA, EEMD and CEEMDAN generate their components and the corresponding frequency content in a data-adaptive way, maximize the goodness-of-fit R², and appear to be particularly suited to extract circadian cycles from experimental data. Whereas EEMD and CEEMDAN may be more applicable to some variables than to others, the applicability of SSA appears to be more independent of the specific variable under study. Another advantage with respect to the traditional cosinor approach is that these time-series techniques allow the day-to-day variability of the circadian parameters to be described in addition to their average values. Finally, several of the circadian parameters were interpreted within the context of homeostatic regulation, and a possible link was proposed with the phenomena of masking and entrainment in rhythmobiology.

Acknowledgements

Financial funding for this work was supplied by the Dirección General de Asuntos del Personal Académico (DGAPA) of the Universidad Nacional Autónoma de México (UNAM) through grant PAPIIT IN110321. We also gratefully acknowledge grants Fronteras 2020/263377 and 2020/610285 from the Consejo Nacional de Humanidades, Ciencias y Tecnologías (CONAHCyT). The authors are thankful to Markus Müller for fruitful discussions.

References

1. U. Schibler and P. Sassone-Corsi, A Web of Circadian Pacemakers, Cell 111 (2002) 919. [ Links ]

2. J. Mohawk, C. Green, and J. S. Takahashi, Central and peripheral circadian clocks in mammals, Annu. Rev. Neurosci. 35 (2012) 445. [ Links ]

3. W. J. Rietveld, D. S. Minors, and J. M. Waterhouse, Circadian Rhythms and Masking: An Overview, Chronobiology International 10 (1993) 306. [ Links ]

4. A. Martinez-Nicolas, et al., Uncovering Different Masking Factors on Wrist Skin Temperature Rhythm in Free-Living Subjects, PLoS ONE 8 (2013) e61142. [ Links ]

5. J. L. Ditty, S. R. Mackey, and C. H. Johnson, Bacterial Circadian Programs (Springer, Heidelberg, 2009). [ Links ]

6. D. Staiger, Plant Circadian Networks Methods and Protocols (Humana Press, New York, 2014). [ Links ]

7. D. Denlinger, J. Giebultowicz, and D. Saunders, Insect timing: Circadian rhythmicity to seasonality (Elsevier, Amsterdam, 2001). [ Links ]

8. J. Aschoff, S. Daan, and G. G. (eds.), Vertebrate Circadian Systems: Structure and Physiology (Springer-Verlag, Heidelberg, Berlin, 1982). [ Links ]

9. D. S. Minors and J. M. Waterhouse, Circadian Rhythms and the Human (Wright PSG, Bristol, 1981). [ Links ]

10. R. Refinetti, Circadian Physiology, 2nd ed. (CRC Press, Boca Raton, 2006). [ Links ]

11. M. L. Gumz, Physiology in Health and Disease (Springer, Heidelberg, 2016). [ Links ]

12. S. M. Jazwinski, V. P. Belancio, and S. M. H. (eds.), Circadian Rhythms and Their Impact on Aging (Springer, Heidelberg, 2017). [ Links ]

13. F. Halberg, Y. L. Tong, and E. A. Johnson, Circadian System Phase - An Aspect of Temporal Morphology; Procedures and Illustrative Examples (Springer, Berlin, Heidelberg, 1967), pp. 20-48. [ Links ]

14. G. Cornelissen, Cosinor-based rhythmometry, Theoretical Biology and Medical Modelling 11 (2014) 1. [ Links ]

15. P. V. d. N. Nóbrega, et al., Sleep and frailty syndrome in elderly residents of long-stay institutions: A cross-sectional study, Geriatrics & Gerontology International 14 (2014) 605. [ Links ]

16. E. S. Musiek, et al., Circadian Rest-Activity Pattern Changes in Aging and Preclinical Alzheimer Disease, JAMA Neurology 75 (2018) 582. [ Links ]

17. K. Suibkitwanchai, et al., Nonparametric time series summary statistics for high-frequency accelerometry data from individuals with advanced dementia, PLOS ONE 15 (2020) [ Links ]

18. K. L. Gamble, et al., Delayed Sleep Timing and Symptoms in Adults With Attention-Deficit/Hyperactivity Disorder: A Controlled Actigraphy Study, Chronobiology International 30 (2013) 598. [ Links ]

19. S. A. Tafoya, et al., Indicators of vulnerability associated with less healthy circadian rhythms in undergraduate medical interns, Chronobiology International 36 (2019) 1782 [ Links ]

20. R. Fossion, et al., Multiscale adaptive analysis of circadian rhythms and intradaily variability: Application to actigraphy time series in acute insomnia subjects, PLoS ONE 12 (2017). [ Links ]

21. W. Witting, et al., Alterations in the circadian rest-activity rhythm in aging and Alzheimer’s disease, Biological Psychiatry 27 (1990) 563. [ Links ]

22. B. S. Gonçalves, et al., Nonparametric methods in actigraphy: An update, Sleep Science 7 (2014) 158 [ Links ]

23. T. Ashish, Modern control design (John Wiley & Sons, Ltd., Chichester, 2002). [ Links ]

24. D. T. Chen, et al., Rheology of Soft Materials, Annual Review of Condensed Matter Physics 1 (2010) 301. [ Links ]

25. R. Fossion, A. L. Rivera, and B. Estañol, A physicist’s view of homeostasis: How time series of continuous monitoring reflect the function of physiological variables in regulatory mechanisms, Physiological Measurement 39 (2018) [ Links ]

26. R. Fossion, et al., Homeostasis from a Time-Series Perspective: An Intuitive Interpretation of the Variability of Physiological Variables, In L. Olivares-Quiroz and O. Resendis-Antonio, eds., Quantitative Models for Microscopic to Macroscopic Biological Macromolecules and Tissues, pp. 87-109 (Springer International Publishing AG, 2018). [ Links ]

27. R. Fossion, A. Sáenz-Burrola, and L. Zapata-Fonseca, On the stability and adaptability of human physiology: Gaussians meet heavy-tailed distributions, INTERdisciplina 8 (2020). [ Links ]

28. R. Fossion, et al., A Time-Series Approach to Assess Physiological and Biomechanical Regulatory Mechanisms, In D. R. Wood, et al., eds., 2019-20 MATRIX Annals, vol. 4, chap. 14 (Springer, 2021). [ Links ]

29. T. Butz, Fourier Transformation for Pedestrians (Springer, Cham, 2015). [ Links ]

30. P. S. Addison, The Illustrated Wavelet Transform Handbook: Introductory Theory and Applications in Science, Engineering, Medicine and Finance (Institute of Physics Publishing (IOP), Bristol, 2002). [ Links ]

31. A. E. Hramov, et al., Wavelets in Neuroscience (Springer-Verlag, Heidelberg, 2015). [ Links ]

32. D. Iatsenko, Nonlinear Mode Decomposition: Theory and Applications (Springer-Verlag, Heidelberg, 2015). [ Links ]

33. D. Iatsenko, P. V. E. McClintock, and A. Stefanovska, Nonlinear mode decomposition: A noise-robust, adaptive decomposition method, Phys. Rev. E 92 (2015) 032916. [ Links ]

34. J. B. Elsner and A. A. Tsonis, Singular Spectrum Analysis: A New Tool in Time Series Analysis (Springer Science, New York, 1996). [ Links ]

35. N. Golyandina, V. Nekrutkin, and A. Zhigljavsky, Analysis of Time Series Structure: SSA and Related Techniques (Chapman & Hall, Boca Raton, 2000). [ Links ]

36. N. Golyandina and A. Zhigljavsky, Singular Spectrum Analysis for Time Series (Springer, Heidelberg, 2013). [ Links ]

37. N. E. Huang, et al., The Empirical Mode Decomposition and the Hilbert Spectrum for Nonlinear and Non- Stationary Time Series Analysis, Proceedings of Royal Society of London A 454 (1998) 903. [ Links ]

38. N. E. Huang, et al., A confidence limit for the empirical mode decomposition and Hilbert spectral analysis, Proceedings of the Royal Society of London A 459 (2003) 2317-2345. [ Links ]

39. Z. Wu and N. E. Huang, Ensemble Empirical Mode Decomposition: A noise-assisted data analysis method, Advances in Adaptive Data Analysis 1 (2009) 1. [ Links ]

40. M. E. Torres, et al., A complete ensemble empirical mode decomposition with adaptive noise, ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings (2011) 4144. [ Links ]

41. G. Hammad, et al., pyActigraphy: Open-source python package for actigraphy data visualization and analysis, PLOS Computational Biology 17 (2021) 1. [ Links ]

42. M. Moškon, CosinorPy: a python package for cosinor-based rhythmometry, BMC Bioinformatics 21 (2020) 485. [ Links ]

43. B. A. Shenoi, Introduction to digital signal processing and filter design (John Wiley & Sons, Ltd., Hoboken, New Jersey, 2006). [ Links ]

44. T. O’Haver, Pragmatic introduction to signal processing (Kindle Direct Publishing, 2021). [ Links ]

45. R. Klette, Concise Computer Vision: An Introduction into Theory and Algorithms (Springer-Verlag, London, 2014). [ Links ]

46. R. Fossion, et al., Quantification of Irregular Rhythms in Chronobiology: A Time-Series Perspective, In M. A. El-Esawi, ed., Circadian Rhythm - Cellular and Molecular Mechanisms, pp. 33-58 (INTECH, 2018). [ Links ]

47. H. Dowse, Maximum entropy spectral analysis for circadian rhythms: theory, history and practice, Journal of Circadian Rhythms 11 (2013) 6. [ Links ]

48. J. V. Jasien, et al., Cyclic Pattern of Intraocular Pressure (IOP) and Transient IOP Fluctuations in Nonhuman Primates Measured with Continuous Wireless Telemetry, Current Eye Research 44 (2019) 1244. [ Links ]

49. L. Z. and G. W., Modeling diurnal hormone profiles by hierarchical state space models, Stat Med. 34 (2015) 3223. [ Links ]

50. S. Y. Sim, et al., Estimation of Circadian Body Temperature Rhythm Based on Heart Rate in Healthy, Ambulatory Subjects, IEEE Journal of Biomedical and Health Informatics 21 (2017) 407. [ Links ]

51. J. Yin, et al., Actigraphy-based parameter tuning process for adaptive notch filter and circadian phase shift estimation, Chronobiology International 37 (2020) 1552 [ Links ]

52. J. Bonarius, C. Papatsimpa, and J.-P. Linnartz, Parameter Estimation in a Model of the Human Circadian Pacemaker Using a Particle Filter, IEEE Transactions on Biomedical Engineering 68 (2021) 1305. [ Links ]

53. C. Mott, et al., Model-Based Human Circadian Phase Estimation Using a Particle Filter, IEEE Transactions on Biomedical Engineering 58 (2011) 1325. [ Links ]

54. D. S. Wickramasuriya and R. T. Faghih, A Cortisol-Based Energy Decoder for Investigation of Fatigue in Hypercortisolism, In 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (2019) pp. 11-14. [ Links ]

55. B. E. Westerhof, et al., Variable day/night bias in 24-h noninvasive finger pressure against intrabrachial artery pressure is removed by waveform filtering and level correction, Journal of hypertension 20 (2002) 1981-1986. [ Links ]

56. P. Boškoski, A. Debenjak, and B. M. Boshkoska, Fast Electrochemical Impedance Spectroscopy: As a Statistical Condition Monitoring Tool (Springer-Verlag, Heidelberg, 2017). [ Links ]

57. T. L. Leise, et al., Wavelet Meets Actogram, Journal of Biological Rhythms 28 (2013) 62. [ Links ]

58. P. T. Clemson, G. Lancaster, and A. Stefanovska, Reconstructing Time-Dependent Dynamics, Proceedings of the IEEE 104 (2016) 223. [ Links ]

59. G. Lancaster, et al., Surrogate data for hypothesis testing of physical systems, Physics Reports 748 (2018) 1. [ Links ]

60. D. B. Percival and A. T. Walden, Wavelet methods for time series analysis (Cambridge University Press, London, 2013). [ Links ]

61. G. R. Lee, et al., PyWavelets: A Python package for wavelet analysis, Journal of Open Source Software 4 (2019) 1237 [ Links ]

62. S. Sanei and H. Hassani, Singular Spectrum Analysis of Biomedical Signals (CRC Press, Boca Raton, 2016). [ Links ]

63. M. Ghil, et al., Singular Spectrum Analysis: Methodology and Comparison, Reviews of Geophysics 40 (2002) 1. [ Links ]

64. H. Hassani, Singular Spectrum Analysis: Methodology and Comparison, Journal of Data Science 5 (2007) 239. [ Links ]

65. R. Fossion, G. T. Vargas, and J.-C. L. Vieyra, Random-matrix spectra as a time series, Physical Review E 88 (2013) 060902(R). [ Links ]

66. R. Fossion, A time-series approach to dynamical systems from classical and quantum worlds, AIP Conf. Proc. 1575 (2014) 89. [ Links ]

67. S. Yamazaki, et al., Rhythmic Properties of the Hamster Suprachiasmatic Nucleus In Vivo, Journal of Neuroscience 18 (1998) 10709, 10.1523/JNEUROSCI.18-24-10709.1998. [ Links ]

68. L. M. García-Iglesias, et al., Quantification of irregular circadian cycles using time-series methods, AIP Conf. Proc. 2090 (2017) 050009. [ Links ]

69. N. Golyandina and A. Korobeynikov, Basic Singular Spectrum Analysis and forecasting with R, Computational Statistics & Data Analysis 71 (2014) 934. [ Links ]

70. Z. Wu and N. E. Huang, A study of the characteristics of white noise using the empirical mode decomposition method, Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 460 (2004) 1597. [ Links ]

71. D. A. Guzmán, et al., The fractal organization of ultradian rhythms in avian behavior, Scientific Reports 7 (2017) 684. [ Links ]

72. Y. Chen and W. Chen, Detection of circaseptan rhythm and the “Monday effect” from long-term pulse rate dynamics, Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2011 (2011) 3780 [ Links ]

73. E. Shimizu, et al., Application of Empirical Mode Decomposition to Mother and Infant Physical Activity, Methods Inf Med 57 (2018) 152 [ Links ]

74. M. A. Colominas, et al., Noise-Assisted EMD Methods in Action, Advances in Adaptive Data Analysis 04 (2012) 1250025 [ Links ]

75. P. J. J. Luukko, J. Helske, and E. Räsänen, Introducing libeemd: a program package for performing the ensemble empirical mode decomposition, Computational Statistics 31 (2015) 545. [ Links ]

76. D. Iatsenko, P. McClintock, and A. Stefanovska, On the extraction of instantaneous frequencies from ridges in time-frequency representations of signals, Signal Processing 125 (2013). [ Links ]

77. J. Theiler, et al., Testing for nonlinearity in time series: the method of surrogate data, Physica D: Nonlinear Phenomena 58 (1992) 77. [ Links ]

78. L.W. Sheppard, A. Stefanovska, and P. V. McClintock, Detecting the harmonics of oscillations with time-variable frequencies, Physical Review E - Statistical, Nonlinear, and Soft Matter Physics 83 (2011) 1. [ Links ]

79. T. L. Leise, Analysis of Nonstationary Time Series for Biological Rhythms Research, Journal of Biological Rhythms 32 (2017) 187. [ Links ]

80. D. Lyon, The Discrete Fourier Transform, Part 4: Spectral Leakage, Journal of Object Technology 8 (2009) 23. [ Links ]

81. T. L. Leise, Wavelet analysis of circadian and ultradian behavioral rhythms, J. Circadian Rhythms 11 (2013) 5. [ Links ]

82. J.-C. Leloup, D. Gonze, and A. Goldbeter, Limit Cycle Models for Circadian Rhythms Based on Transcriptional Regulation in Drosophila and Neurospora, Journal of Biological Rhythms 14 (1999) 433. [ Links ]

83. A. L. Lloyd and D. Lloyd, Hypothesis: the central oscillator of the circadian clock is a controlled chaotic attractor, Biosystems 29 (1993) 77. [ Links ]

84. Y. Nakazato, et al., Estimation of homeostatic dysregulation and frailty using biomarker variability: a principal component analysis of hemodialysis patients, Scientific Reports 10 (2020) 10314 [ Links ]

85. J. L. Rector, et al., Dynamical indicators of resilience from physiological time series in geriatric inpatients: Lessons learned, Experimental Gerontology 149 (2021) 111341. [ Links ]

86. S. Wainio-Theberge, et al., Variability and task-responsiveness of electrophysiological dynamics: Scale-free stability and oscillatory flexibility, NeuroImage 256 (2022) 119245. [ Links ]

87. C. Stephens, “Ome” Sweet “ome”: From the Genome to the Conductome, In D. R Wood, et al., eds., 2019-20 MATRIX Annals, vol. 4, chap. 16 (Springer, 2021). [ Links ]

88. R. Polikar, The Wavelet Tutorial: The Engineer’s Ultimate Guide to Wavelet Analysis (2nd ed.), (Glassboro, New Jersey 2021). [ Links ]

89. I. Dolu and N. O. Nahcivan, Feasibility of the duration of actigraphy data to illustrate circadian rhythm among cognitively intact older people in nursing home: cosinor analysis, Sleep and Biological Rhythms 18 (2020) 59. [ Links ]

90. O. Minaeva, et al., Level and timing of physical activity during normal daily life in depressed and non-depressed individuals, Translational Psychiatry 10 (2020) 11. [ Links ]

91. J. E. Stone, et al., Application of a Limit-Cycle Oscillator Model for Prediction of Circadian Phase in Rotating Night Shift Workers, Scientific Reports 9 (2019) 11032. [ Links ]

92. T. L. Leise and M. E. Harrington, Wavelet-based time series analysis of circadian rhythms, Journal of Biological Rhythms 26 (2011) 454. [ Links ]

93. I. Daubechies, Ten Lectures on Wavelets (Society for Industrial and Applied Mathematics, Philadelphia, 1992). [ Links ]

94. N. E. Huang and Z. Wu [ Links ]

95. M. Dätig and T. Schlurmann [ Links ]

96. H. Modell, et al., A physiologist’s view of homeostasis, Advances in Physiology Education 39 (2015) 259. [ Links ]

97. A. J. E. Seely and P. T. Macklem, Complex systems and the technology of variability analysis, Critical Care 8 (2004) R367. [ Links ]

98. D. Iatsenko, P. V. McClintock, and A. Stefanovska, Linear and synchrosqueezed time-frequency representations revisited: Overview, standards of use, resolution, reconstruction, concentration, and algorithms, Digital Signal Processing: A Review Journal 42 (2015) 1. [ Links ]

99. K. A. Thomas and R. L. Burr, Circadian research in mothers and infants: how many days of actigraphy data are needed to fit cosinor parameters?, J. Nurs Meas 16 (2008) 201. [ Links ]

100. E. Bedrosian, A product theorem for Hilbert transforms, Proceedings of the IEEE 51 (1963) 868. [ Links ]

101. Y.-C. Chen, M.-Y. Cheng, and H.-T. Wu, Non-parametric and adaptive modelling of dynamic periodicity and trend with heteroscedastic and dependent errors, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 76 (2014) 651. [ Links ]

102. T. O’Haver, A Pragmatic Introduction to Signal Processing (2014). [ Links ]

103. T. C. O’Haver, An introduction to signal processing in chemical measurement, Journal of Chemical Education 67 (1990). [ Links ]

104. D. Johnson, Fundamentals of Electrical Engineering I (2008). [ Links ]

105. T. Butz, Fourier Transformation for Pedestrians (Springer, 2006). [ Links ]

106. O. Marques, Practical Image and Video Processing Using MATLAB (John Wiley & Sons, Inc., Hoboken, 2011). [ Links ]

107. J. Newman, G. Lancaster, and A. Stefanovska, Multiscale Oscillatory Dynamics Analysis User Manual (2017). [ Links ]

Received: October 01, 2022; Accepted: May 20, 2023

This is an open-access article distributed under the terms of the Creative Commons Attribution License.