Contaduría y administración

Print version ISSN 0186-1042

Contad. Adm vol.63 no.4 Ciudad de México Oct./Dec. 2018

https://doi.org/10.22201/fca.24488410e.2018.1341 

Articles

A hybrid alpha-stable model development for high frequency trading markets

José Antonio Climent Hernández1  * 

Luis Fernando Hoyos-Reyes1 

Marissa R. Martínez-Preece1 

1Universidad Autónoma Metropolitana, México.


Abstract

Business activities require obtaining, organizing, and managing information from large amounts of data. In hedge funds, short selling, and derivatives valuation, agents change their strategies to improve profits, and therefore to increase their chances of remaining in the market, as a result of finding more accurate methods to process ever larger volumes of information, considering that information is not evenly distributed among market participants. In this paper, a hybrid three-stage model is formulated consisting of: a high frequency market model based on a non-stationary compound Poisson process, a multilayer perceptron trained by backpropagation and, finally, estimators based on alpha-stable distributions, as an initial overview toward developing a high frequency trading market operating system.

Keywords: α-stable processes; non-stationary compound Poisson processes; multilayer perceptron; high frequency markets; Big Data

JEL classification: C14; C46; C61; D81; G12; G24


Introduction

The capacity to produce data is much greater than the capacity to analyze it and extract useful information. In other words, given its large volume, data cannot be captured, stored, managed, or analyzed with traditional processes or techniques, so new technologies must be created to make better use of it.

Sources that generate data include sensors, intelligent devices, social media, Internet users, smartphones, and computer applications, among others. For example, a commercial aircraft generates 10 Terabytes (1 Terabyte = 10^12 bytes) every 30 minutes of flight through its different sensors (Oracle, 2012). These data-generating sources are continuously connected and exchange information with other devices. The data created in this manner can become useful if analyzed using the appropriate algorithms and technology (Kuznetsov et al., 2011). The term Big Data began to be used in the context of information management and administration of extremely large databases, such as those managed by Google, Yahoo, Amazon, or Facebook.

Data are considered ‘big’ in terms of volume, speed, variety, and variability (V4) (Zikopoulos et al., 2013; Vivísimo, 2012). Regarding volume, Zettabytes (1 ZB = 10^21 bytes) are generally used instead of Terabytes (TB). Regarding the other dimensions, information changes constantly (streaming rather than batch), data may be unstructured or semi-structured, and different applications may require different access methods, security protocols, and mappings.

Several studies have focused on different aspects of high frequency transactions. Labadie and Lehalle (2012) study alternative risk measures for the optimization of trading algorithms. Colliard and Foucault (2012) analyze the relationship between the spread and the probability of executing limit orders, concluding that HFT benefits market orders, since it reduces their transaction costs, while Menkveld (2013) characterizes HFT through a cross-market strategy involving the market under study and a small, fast-growing market, and states that inventory losses may be incurred that are offset by gains in the bid-ask spread.

It is becoming increasingly common to find real-time forecasts of prices and consumption, as well as systems that improve the management of this information. For example, Google Trends improves the predictive capacity of high frequency economic indicators through autoregression models. Another example is the analysis, carried out between 1991 and 2007, of the behavior of individual mortgages and loans and of defaults by type of credit. Prior to the subprime crisis, the results showed that the counties with the greatest credit constraints were those where the number of loans granted and housing prices increased the most. After the crisis, in those same counties, real estate prices fell the most and the number of defaults increased the most. Twenty-four million loans were used to analyze the impact of monetary policy on the risk-taking of the financial institutions involved.

In hedge funds, short selling, and derivatives valuation, brokers change their strategies to improve their profits and be able to stay in the market, as a result of finding more accurate methods to process increasing volumes of information, with the understanding that the information is not distributed evenly among market participants.

Large volumes of data provide benefits because access to them provides more information that translates into a more robust decision-making process. Considering the above, high frequency operators are constantly analyzing methods to obtain more accurate and manageable data faster, even at millisecond scales.

In financial markets, the proper use of information provides advantages because efficient and frictionless markets are hard to find, and in most cases information plays a crucial role. One such case is price discovery, where new information reaching the market influences pricing. This occurs in a very marked way in derivatives markets: new information is incorporated so quickly that these markets are even considered leading indicators of the prices of the underlying goods or instruments. A large number of techniques have been used to take advantage of the relationship or integration that may exist between different markets for price discovery; for example, Yan and Zivot (2007) use co-integration techniques for this purpose. In this paper, however, the use of large volumes of information is oriented not toward price discovery but toward the analysis of information behavior for predictive purposes.

Among the main characteristics of High Frequency Trading (HFT), it is worth mentioning that dealers are anonymous and, although transactions are continuously accumulated, it is not known how many transactions each individual operator contributes; only the total volumes corresponding to each price are observed. For regulatory purposes, stock markets keep one identifier per trader and per order. Online markets have the following characteristics:

  1. Several suppliers and operators with limit orders.

  2. Several consumers and operators with market orders.

  3. Operators can go from being suppliers to consumers and vice versa.

  4. There are no dedicated dealers.

Considering the financial context in general and the characteristics of online markets mentioned above, the proposed methodology is a non-traditional hybrid model with three parts: a market model for HFT that uses non-stationary compound Poisson processes; a multilayer perceptron trained with backpropagation; and, finally, estimators based on alpha-stable distributions. The objective of this work is to formulate a three-stage mathematical model, under the assumption that it will be used in high frequency trading, to serve as the core of a predictive algorithm for HFT decision making; since financial markets are highly volatile, the model requires parameters that are monitored frequently and adjusted when necessary.

To achieve the above, this work is organized as follows: the next section presents the model for high frequency transactions using non-stationary compound Poisson processes with a discontinuous intensity measure. The third section presents the neural network model, a multilayer perceptron (MLP) with four outputs, which identifies the parameters of the α-stable distribution that must be updated. The fourth section discusses self-similar processes, illustrating the need to adjust the α-stable estimators over time with a predictive example for 2017 of the US dollar and euro parities computed with fixed parameters. The last section presents the conclusions.

The model for HFT markets

The number of transactions in a given period is a counting process; specifically, it is modeled as a Poisson process whose arrival rate is a function of time. This allows adequate modeling of speculation in markets where traders can generate a considerable number of positions and transactions in a specific product, at different times and with different frequencies, and this process constitutes the first stage of the proposed hybrid model.

As explained above, in the case of online markets, investors are anonymous and the real-time data are aggregated. All that is observed are the total volumes at each price; it is not known how many bids or assets each trader adds.

To model the volume, the empirical distribution of transaction sizes $\{X_i\}$ (in some predetermined monetary unit) is used, and thus the total volume of operations over a time horizon $t$ is $Z_t = \sum_{n=0}^{N_t} X_n$.

Definition 2.1. A stochastic process $\{N_t\}_{t\ge 0}$ is a counting process if:

  1. $N_t \ge 0$.

  2. $N_t$ takes integer values.

  3. If $s < t$, then $N_s \le N_t$.

  4. For $s < t$, $N_t - N_s$ is the number of events that occurred in the interval $(s, t]$.

Definition 2.2. The counting process $\{N_t\}_{t\ge 0}$ is a non-stationary Poisson process with intensity function $\lambda(t) > 0$, $t \ge 0$, if:

  1. $N_0 = 0$.

  2. $\{N_t\}_{t\ge 0}$ has independent increments.

  3. $P(N_{t+h} - N_t \ge 2) = o(h)$.

  4. $P(N_{t+h} - N_t = 1) = \lambda(t) h + o(h)$.

In other words,

$a(t) := \int_0^t \lambda(s)\, ds$, (1)

is the intensity measure of $\{N_t\}_{t\ge 0}$.

Theorem 2.3. Let $\{N_t\}_{t\ge 0}$ be a non-stationary Poisson process and $t, s \ge 0$; then $N_{t+s} - N_t$ is Poisson distributed with expectation $a(t+s) - a(t)$.

The proof can be found in Hoyos-Reyes et al. (2011), as can that of Proposition 2.4.

Theorem 2.3 implies that $E(N_t) = E(N_t - N_0) = a(t)$.

Proposition 2.4. $a(t)$ is a non-decreasing, right-continuous function.

Definition 2.5. The inverse of the intensity measure is

$a^{-1}(t) := \sup\{s \mid a(s) \le t\}$. (2)

It is observed that $a^{-1}$ is right continuous. If $a$ is continuous, $a^{-1}$ is increasing and

$a \circ a^{-1}(t) = t$, $t < a(\infty)$. (3)

Observation 2.6. Let $N$ be a non-stationary Poisson process with continuous intensity measure $a$ such that $a(\infty) = \infty$. Since $a^{-1}$ is increasing, $\underline{N} := N \circ a^{-1}$ has independent increments. Theorem 2.3 implies that $\underline{N}_t - \underline{N}_s = N_{a^{-1}(t)} - N_{a^{-1}(s)}$ is Poisson distributed with expectation:

$a(a^{-1}(t)) - a(a^{-1}(s)) = t - s$. (4)

Therefore, the counting process $\underline{N}$ is a stationary Poisson process with $\lambda = 1$.
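Observation 2.6 also suggests a simulation recipe: generate a unit-rate Poisson process and map its arrival times through $a^{-1}$. Below is a minimal Python sketch, assuming a hypothetical continuous intensity measure with a closed-form inverse; it illustrates the time change only and is not a calibrated model.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical intensity measure a(t) = 100 t^2, so a_inv(u) = sqrt(u / 100);
# in the model, a(t) would be the integral of the calibrated lambda(s)
a = lambda t: 100.0 * t ** 2
a_inv = lambda u: np.sqrt(u / 100.0)

T = 1.0
n = rng.poisson(a(T))                                    # number of arrivals on [0, T], mean a(T)
unit_rate_points = np.sort(rng.uniform(0.0, a(T), n))    # arrivals of a rate-1 Poisson process on [0, a(T)]
arrival_times = a_inv(unit_rate_points)                  # mapped through a^{-1}: non-stationary arrivals on [0, T]
print(len(arrival_times), "arrivals; E(N_T) =", a(T))
```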

By Proposition 2.4, $a$ is non-decreasing and right continuous, so the left limit

$a(t^-) := \lim_{s \uparrow t} a(s)$ (5)

exists for every $t$. Assume that at a particular point $t$, $a(t^-) \ne a(t)$, and let $\alpha := a(t) - a(t^-)$. Then the number of arrivals $N_t - N_{t^-}$ has, by Theorem 2.3, expectation $\alpha$. Applying Definition 2.2, in particular items 3 and 4, $N_t - N_{t^-}$ is binary, 0 or 1, thus:

$E(N_t - N_{t^-}) = 0 \cdot P(N_t - N_{t^-} = 0) + 1 \cdot P(N_t - N_{t^-} = 1) = P(N_t - N_{t^-} = 1) = a(t) - a(t^-) = \alpha$. (6)

Point $t$ can be thought of as a moment at which an arrival is scheduled: it occurs with probability $\alpha$ and, with the complementary probability $1-\alpha$, does not occur. If $a$ has jumps of magnitude $a_1, a_2, \ldots$ at fixed times $t_1, t_2, \ldots$, then, independently of the previous arrivals, there is an arrival exactly at $t_i$ with probability $a_i$.

Observation 2.7. A non-stationary Poisson process $N_t$ can be interpreted as the sum of two counting processes,

$N_t = N_t^f + N_t^c$, $t \ge 0$, (7)

where the jump times of $N_t^f$ are fixed: they are the discontinuity points of $a$, and the probability that a jump occurs at the fixed time $t$ is $a(t) - a(t^-)$. If $a^f(t)$ is defined as the sum of all the jumps of $a$ in $[0, t]$, that is:

$a^f(t) := \sum_{s \le t} \big( a(s) - a(s^-) \big)$, (8)

then

$a^c(t) = a(t) - a^f(t)$, $t \ge 0$, (9)

is a continuous, non-decreasing function and the second component in decomposition (7) is a non-stationary Poisson process with intensity measure $a^c$. This allows formulating algorithms (Hoyos-Reyes et al., 2011) to compute the following estimates through Monte Carlo methods (a simulation sketch follows the list below):

  1. $P(Z_t > \theta_{inf})$ represents the probability that the total volume of operations exceeds the minimum threshold $\theta_{inf}$.

  2. $P(Z_t < \theta_{sup})$ represents the probability that the total volume of operations is below the maximum threshold $\theta_{sup}$.

  3. $E(Z_t)$ represents the expectation of the process.
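A minimal Monte Carlo sketch of these three estimators in Python follows. The non-stationary Poisson arrivals are simulated by thinning; the intensity function lam(t), its bound LAM_MAX, the thresholds, and the stand-in empirical transaction sizes x_emp are hypothetical placeholders rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def lam(t):
    # hypothetical intraday intensity (transactions per unit time); replace with a calibrated lambda(t)
    return 50.0 + 30.0 * np.sin(2.0 * np.pi * t)

LAM_MAX = 80.0                        # upper bound on lam(t) over [0, T], required by thinning
T = 1.0                               # time horizon
THETA_INF, THETA_SUP = 40.0, 120.0    # hypothetical volume thresholds
x_emp = rng.lognormal(mean=0.0, sigma=0.5, size=5_000)   # stand-in for the empirical sizes {X_i}

def simulate_Zt(T):
    """One realization of Z_T: sum of the X_n over the N_T arrivals (non-stationary compound Poisson)."""
    t, z = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / LAM_MAX)      # candidate arrival from a rate-LAM_MAX Poisson process
        if t > T:
            return z
        if rng.uniform() < lam(t) / LAM_MAX:     # thinning: keep the arrival with probability lam(t)/LAM_MAX
            z += rng.choice(x_emp)               # transaction size drawn from the empirical distribution

sims = np.array([simulate_Zt(T) for _ in range(10_000)])
print("P(Z_t > theta_inf) ~", (sims > THETA_INF).mean())
print("P(Z_t < theta_sup) ~", (sims < THETA_SUP).mean())
print("E(Z_t)             ~", sims.mean())
```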

The use of a compound Poisson process has an additional advantage: if the times between transactions do not follow an exponential distribution, the model can be generalized to a renewal process with the appropriate distribution for the inter-transaction times.
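As a sketch of this renewal generalization, the fragment below replaces the exponential inter-transaction times with Weibull-distributed ones (a purely illustrative choice) while keeping empirical transaction sizes, in the spirit of the previous sketch; the shape and scale values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_Zt_renewal(T, x_emp, shape=0.7, scale=0.02):
    """Total volume up to T when inter-transaction times follow a (hypothetical) Weibull renewal process."""
    t, z = 0.0, 0.0
    while True:
        t += scale * rng.weibull(shape)   # non-exponential waiting time between transactions
        if t > T:
            return z
        z += rng.choice(x_emp)            # transaction size from the empirical sample

x_emp = rng.lognormal(mean=0.0, sigma=0.5, size=5_000)   # stand-in empirical transaction sizes
print(simulate_Zt_renewal(1.0, x_emp))
```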

Multilayer perceptron

The multilayer perceptron (MLP) is a well-known and widely used neural network architecture; here it takes as inputs the three estimators from the previous section: $\hat P(Z_t > \theta_{inf})$, $\hat P(Z_t < \theta_{sup})$, and $\hat E(Z_t)$. Networks of this type can generalize and produce a result quickly, even for situations that did not occur during the training stage; these properties are important and consistent with the V4 characteristics (volume, speed, variety, and variability) of the data. The idea behind using the perceptron is that it allows working with updated estimators, unlike fixed-parameter estimators, which cause the model to lose precision and, after a certain period of use, produce significant errors. The perceptron also avoids the disadvantages of dynamic estimators, which must be updated at a previously specified frequency, making the algorithm inefficient, a situation incompatible with the nature of high frequency markets. Since it can be trained, the perceptron recognizes which parameter or parameters require updating per unit of time, so only the estimators that need it are modified; this avoids the loss of time incurred with dynamic estimators and makes the model more precise than one with fixed estimators.

The structure is simple and consists of three types of layers: an input layer, hidden layers, and an output layer. The activation functions can be the hyperbolic tangent, the sigmoid, or the step function. The input layer has three inputs, one for each calculated estimator.

According to Hecht-Nielsen (1990), the number of neurons in a hidden layer need not exceed twice the number of inputs.

The theorem formulated by Hornik, Stinchcombe, and White (1989) states that networks with two hidden layers can represent functions of any shape, which makes it redundant to use networks with more than two hidden layers.

Therefore, the MLP architecture consists of an input layer of 3 neurons, two hidden layers of 3 neurons each, and an output layer of 4 neurons (one for each parameter of the α-stable distribution: $\alpha_t$, $\beta_t$, $\gamma_t$, $\delta_t$).

Regarding the sample size for the training database, Baum and Haussler (1989) suggest that the number of training examples be approximately equal to the number of weights in the network multiplied by the inverse of the error, where generally ε = 0.1 is the error used.
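As a rough illustration of this rule (counting biases among the trainable weights, which is an assumption about how the rule is applied): MLP(3,3,3,4) has (3·3+3) + (3·3+3) + (3·4+4) = 40 weights, so with ε = 0.1 it suggests on the order of 40/0.1 = 400 training examples.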

The training algorithm for MLP(3,3,3,4) is backpropagation, which minimizes the mean square error (MSE) between the generated outputs and the real outputs.
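A minimal sketch of this stage is shown below in Python. Using PyTorch is an implementation choice made here, not something specified by the paper, and the training pairs (estimator triples and 0/1 update targets) are random placeholders for illustration only.

```python
import torch
from torch import nn

# MLP(3,3,3,4): 3 inputs (the three Monte Carlo estimators), two hidden layers of 3 neurons,
# and 4 outputs (update indicators for alpha_t, beta_t, gamma_t, delta_t)
mlp = nn.Sequential(
    nn.Linear(3, 3), nn.Tanh(),
    nn.Linear(3, 3), nn.Tanh(),
    nn.Linear(3, 4), nn.Sigmoid(),   # outputs in (0, 1), thresholded to 0/1 update flags
)

# hypothetical training set: rows of [P(Z_t > theta_inf), P(Z_t < theta_sup), E(Z_t)] (scaled)
# and target vectors such as (1, 0, 0, 0) meaning "update alpha_t only"
X = torch.rand(400, 3)
Y = (torch.rand(400, 4) > 0.5).float()

optimizer = torch.optim.SGD(mlp.parameters(), lr=0.1)
loss_fn = nn.MSELoss()               # backpropagation minimizes the MSE of the outputs

for epoch in range(1000):
    optimizer.zero_grad()
    loss = loss_fn(mlp(X), Y)
    loss.backward()                  # error backpropagation
    optimizer.step()

update_flags = (mlp(X[:1]) > 0.5).int()   # e.g. tensor([[1, 0, 0, 0]]) -> update alpha_t only
```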

In this way, only the estimators that require it are updated. Formally, the output indicates which parameter or parameters of the α-stable distribution must be updated; for example, (1,0,0,0) requires updating only $\alpha_t$, while (1,1,1,1) requires updating all the parameters of the distribution.

α-stable distributions

Using parameters estimated with α-stable distributions makes it possible to detect whether the processes are self-similar and anti-persistent, which is consistent with short-term memory, mean reversion, negative correlation, and high variation. These characteristics make α-stable distributions ideal for markets with turbulence and high volatility, as is often the case in financial markets.

Definition 4.1. (Self-similar process). A process $X_t$ is self-similar with exponent $H > 0$ if, for every $a \in (0, \infty)$, the finite-dimensional distributions of $X_{at}$ are identical to the finite-dimensional distributions of $a^H X_t$:

$(X_{at_1}, \ldots, X_{at_n}) \overset{d}{=} (a^H X_{t_1}, \ldots, a^H X_{t_n})$ (10)

Climent et al. (2016) propose that the relation between the stability parameter and the self-similarity exponent generates an index H that allows inferring the risk of random α-stable events and indicates whether the yields are anti-persistent, independent, or persistent, representing movements with pink, white, or black noise, respectively. These characteristics can be related to instantaneous period series for real-time instrument valuation, which gives Big Data useful applications in financial engineering, risk management, and the valuation of derivative products in high frequency trading (HFT) markets. The authors analyze the yields of the exchange rate parities of the US dollar, Canadian dollar, euro, and yen. They estimate the basic statistics and the α-stable parameters, carry out the Kolmogorov-Smirnov and Anderson-Darling goodness of fit tests, estimate the self-similarity exponents, ruling out that the parity series are multifractional, and estimate the confidence intervals of the exchange rate parities. They conclude that the estimated α-stable distributions are more efficient than the Gaussian distribution for quantifying market risks, because the latter is a particular case of the α-stable distributions, that the series are self-similar, and that the yields of the parities are anti-persistent and as such present short-term memory, mean reversion, and negative correlation, with high risk in the short and medium term.

Rodríguez (2014) uses the estimation of the stability parameter of the α-stable distributions and of the self-similarity exponent to explore the violation of the a priori assumptions of Gaussian distribution and independence; identifies leptokurtic characteristics in the FIX exchange rate; estimates the Hurst exponent and rejects the independence hypothesis in 80% of the periods; estimates the stability parameter; and concludes that, through an index, the modeling of financial series is improved.

The previous section mentioned that the dynamics of the parameters are fundamental to obtain adequate quantifications of financial risks. The self-similarity exponents of the US dollar and euro exchange rates were estimated through the generalized Hurst exponent (GHE), as proposed by Climent et al. (2016), using daily FIX exchange rate parity data obtained from the Banco de México (Banxico) website from January 2nd, 2014, to November 30th, 2016, and it is concluded that the parities are anti-persistent. The estimators obtained present expected positive yields according to the average and the location parameter, with a positive tendency but with mean reversion, that is, $\alpha H < 1$.

The parameter estimates show that the domains of attraction are α-stable; the self-similarity exponents calculated through the GHE indicate that the processes are self-similar and anti-persistent, so they present short-term memory, mean reversion, negative correlation, and high variation, with elevated risk in the short and medium term because they are related to turbulence processes (pink noise), consistent with the results obtained by Climent et al. (2016).

Updating the α-stable parameters is essential to obtain adequate estimates of the financial risks involved. Graph 1 shows a 99% confidence interval with fixed α-stable parameters for the US dollar and euro exchange rate parities for the period from November 30th, 2016, to December 25th, 2017, using the α-stable parameters estimated with the yields for the period from January 2nd, 2014, to November 30th, 2016, shown in Table 1, Table 2, and Table 3:

Figure 1 MLP(3,3,3,4) 

Table 1: Self-similarity exponents and stability parameters. 

Parity   Min      GHE(1)   Max
USD      0.5058   0.5202   0.5588
Euro     0.4881   0.5082   0.5413

Parity   Min      α        Max
USD      1.6095   1.7181   1.8267
Euro     1.5942   1.7043   1.8144

Source: Own elaboration with data from Banco de México.

Table 2: Exchange rate parities and risk-free interest rates 

Variable                        USD       Euro
M0 (exchange rate parity)       20.5155   21.8418
i (domestic risk-free rate)     0.0517    0.0517
r (foreign risk-free rate)      0.0070    0.0125

Source: Own elaboration with data from Banco de México, the Federal Reserve, and www.google.com.mx.

Table 3: Estimation of the α-stable parameters of the parities 

Parity α β γ δ
USD 1.7181 0.0447 0.00408471 0.000585076
Euro 1.7043 0.0101 0.00485289 0.000194362

Source: Own elaboration with data from Banco de México and the STABLE program.

Table 1 shows the self-similarity exponents through the GHE(q): the exponent for q = 1 is reported, and the minimum and maximum are obtained through the regressions for t = 5, ..., 19; the stability parameters are shown with 95% confidence intervals.
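The GHE(q) estimation just described can be sketched in a few lines of Python. This is a generic scaling-of-moments regression and is assumed to match the procedure of Climent et al. (2016) only in outline; the input series is a synthetic stand-in, not the FIX data.

```python
import numpy as np

def ghe(x, q=1.0, taus=range(5, 20)):
    """Generalized Hurst exponent H(q) from the scaling E|X(t+tau) - X(t)|^q ~ tau^(q*H(q))."""
    x = np.asarray(x, dtype=float)
    log_moments = [np.log(np.mean(np.abs(x[tau:] - x[:-tau]) ** q)) for tau in taus]
    slope, _ = np.polyfit(np.log(list(taus)), log_moments, 1)   # regression of log-moments on log-lags
    return slope / q

# usage on a hypothetical parity series (a stand-in for the daily FIX levels)
rng = np.random.default_rng(1)
series = np.cumsum(rng.standard_normal(750))
print("GHE(1) =", ghe(series, q=1.0))
```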

The estimates of the self-similarity exponents obtained by the GHE(1) method are compared with the limit between anti-persistence and persistence of α-stable processes; the pair (α, H) indicates whether the process is anti-persistent, independent, or persistent, and in both cases the US dollar and euro parities are anti-persistent.

The α-stable parameters presented in Table 1 are consistent with the results obtained in the studies by Dostoglou and Rachev (1999), Čížek et al. (2005), Scalas and Kim (2006), Climent-Hernández and Venegas-Martínez (2013), Climent-Hernández and Cruz-Matú (2017), and Climent-Hernández et al. (2016).

Table 2 shows the exchange rate parities as of November 30th, 2016, and the domestic and foreign risk-free interest rates with which the 99% confidence intervals are calculated for the period from November 30th, 2016, to November 30th, 2017.

Table 3 shows the estimation of the α-stable parameters for the exchange rate parities of the US dollar and euro using the maximum likelihood method.
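As an illustration of this maximum likelihood step, the sketch below fits α-stable parameters to a synthetic return sample with scipy's levy_stable.fit. The sample is a stand-in for the FIX return series, the parameterization convention is scipy's rather than necessarily that of the STABLE program, and the optimization can be slow.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(5)

# synthetic daily yields standing in for the 2014-2016 FIX return series
sample = levy_stable.rvs(1.7, 0.05, loc=0.0006, scale=0.004, size=500, random_state=rng)

# maximum likelihood estimation; loc plays the role of delta and scale that of gamma
# (up to the parameterization convention), and the numerical MLE may take a while
alpha, beta, loc, scale = levy_stable.fit(sample)
print(alpha, beta, scale, loc)
```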

Figure 2 shows the 99% α-stable confidence intervals for the period from November 30th, 2016, to November 30th, 2017, where it is visible that the euro is more volatile than the US dollar. Also presented are the averages of ten thousand simulations with the α-stable parameters for the exchange rate parities mentioned above, where it is observed that the positive asymmetry models the depreciation of the national currency against the US dollar and the euro.

Source: Own elaboration.

Figure 2 α-stable confidence intervals at 99% 
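A minimal Python sketch of the simulation behind such intervals is given below. It uses scipy's levy_stable with the Table 3 parameters for the US dollar parity and the Table 2 spot level; the 252-day horizon, the accumulation of log-returns, and the parameterization convention are assumptions not stated in the text, so its output is only indicative.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(2)

# alpha-stable parameters for the USD parity from Table 3 and the spot level M0 from Table 2;
# matching scipy's parameterization to the STABLE program's is assumed, not verified here
alpha, beta, gamma, delta = 1.7181, 0.0447, 0.00408471, 0.000585076
m0 = 20.5155
n_sims, horizon = 10_000, 252          # ten thousand simulations, roughly one year of daily steps

# simulate daily alpha-stable log-returns and accumulate them into parity paths
r = levy_stable.rvs(alpha, beta, loc=delta, scale=gamma, size=(n_sims, horizon), random_state=rng)
paths = m0 * np.exp(np.cumsum(r, axis=1))

# point-wise 99% band and mean path across simulations
lower, upper = np.percentile(paths, [0.5, 99.5], axis=0)
mean_path = paths.mean(axis=0)
print("terminal band:", lower[-1], "-", upper[-1], "; terminal mean:", mean_path[-1])
```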

Climent et al. (2016) indicate that α-stable distributions are more suitable for modeling financial series with accumulations of high volatility, that is, extreme values that are more frequent than the Gaussian distribution indicates and that have a greater financial and economic impact on the probable income statements derived from the yields. They satisfy the generalized central limit theorem, of which the Gaussian distribution is a particular case that cannot adequately model extreme values and asymmetry. Furthermore, α-stable distributions allow obtaining more accurate confidence interval estimates for financial engineering and risk management projects, using the relationship between the stability parameter and the self-similarity exponent.

The estimations of the α-stable parameters and of the self-similarity exponents make it possible to infer financial risks efficiently and adequately, so it is important to be certain that the α-stable domains of attraction are stationary; the hybrid model proposed here allows reviewing them through the perceptron, something that can be validated in future research.

Conclusions

As the amount of digital data increases and feeds the different Big Data systems, it is necessary to create systems with the ability to make estimates and approximations to determine, with a certain degree of precision, the amount of information handled in different periods. In this sense, it is also very useful to create standards and formats that allow the storage, computation, and analysis of information handled by different organizations based on specific needs. The model proposed here meets the need for a more efficient model that detects what parameters need to be adjusted and when.

In order to analyze and make decisions in high frequency markets, it is necessary to develop and implement scalable analysis algorithms for distributed and fault-tolerant systems. Real-time use and the intelligent and comprehensive exploitation of Big Data are recent approaches to scenario analysis and decision-making in the studied context.

The proposed hybrid model facilitates the systematization of information to obtain significant results for decision-making. It is a versatile model: its application is not limited to the foreign exchange market but, as explained above, extends to all types of financial markets and to the valuation of derivatives. The great advantage of the proposed hybrid model is that it avoids wasting resources, since it can detect the parameters that need to be modified and make the necessary adjustments. This characteristic is its main contribution and is also what differentiates it from other models used for the management of large volumes of information.

As mentioned above, the formulation of the model contributes a hybrid methodology for the management of large volumes of information in markets that have traditionally been characterized by a low level of disclosure of their methodologies, as is the case of markets where high frequency transactions (HFT) are handled, thus opening the discussion on the formulation and application of new methodologies. This is a hybrid model in more than one way: on the one hand, stochastic processes with non-stationary behavior in the volume of transactions are used; on the other, neural networks allow qualitative results to be obtained, which can be exploited in predictive models of alpha-stable distributions with neural updating of parameters. In this manner, this theoretical formulation serves as a basis for the creation of empirical research lines to implement the proposed model.

REFERENCES

Baum, E. B., & Haussler, D. (1989). What size net gives valid generalization? Neural Computation.

Banco de México (2016). Información estadística. Available at: http://www.banxico.org.mx/SieInternet/consultarDirectorioInternetAction.do?sector=6&accion=consultarCuadro&idCuadro=CF307&locale=es. Accessed: 1/12/2016.

Čížek, P., Härdle, W., & Weron, R. (2005). Stable distributions. In Statistical Tools for Finance and Insurance (pp. 21-44). Berlin: Springer. http://dx.doi.org/10.1007/3-540-27395-6_1

Climent-Hernández, J. A., & Venegas-Martínez, F. (2013). Valuación de opciones sobre subyacentes con rendimientos α-estables. Contaduría y Administración, 58(4), 119-150. https://doi.org/10.1016/s0186-1042(13)71236-1

Climent-Hernández, J. A., & Cruz-Matú, C. (2017). Valuación de un producto estructurado de compra sobre el SX5E cuando la incertidumbre de los rendimientos está modelada con procesos log-estables. Contaduría y Administración, 62(4), 1136-1159. https://doi.org/10.1016/j.cya.2017.06.004

Climent-Hernández, J. A., Rodríguez-Benavides, D., & Hoyos-Reyes, L. F. (2016). Los procesos α-estables y su relación con el exponente de auto-similitud: paridades de los tipos de cambio Dólar americano, Dólar canadiense, Euro y Yen. Contaduría y Administración, forthcoming. https://doi.org/10.1016/j.cya.2017.09.003

Colliard, J.-E., & Foucault, T. (2012). Trading fees and efficiency in limit order markets. The Review of Financial Studies, 25(11), 3389-3421. https://doi.org/10.1093/rfs/hhs089

Dostoglou, S., & Rachev, S. T. (1999). Stable distributions and term structure of interest rates. Mathematical and Computer Modelling, 29(10), 57-60. https://doi.org/10.1016/s0895-7177(99)00092-8

Hecht-Nielsen, R. (1990). Neurocomputing. Addison-Wesley Publishing Company.

Hornik, K., Stinchcombe, M., & White, H. (1989). Multilayer feedforward networks are universal approximators. Neural Networks, 2(5), 359-366. https://doi.org/10.1016/0893-6080(89)90020-8

Hoyos-Reyes, L. F., Martínez-Preece, M., & López-Herrera, F. (2011). Estimación de la probabilidad de ruina en tiempo finito bajo un proceso de Poisson no estacionario con medida de intensidad discontinua. In Administración de Riesgos, Volumen III: Modelos y entorno financiero (pp. 231-244). Serie Estudios, UAM-Azcapotzalco. ISBN 978-607-477-570-9. México.

Kuznetsov, A., Kyprianou, A. E., Pardo, J. C., & van Schaik, K. (2011). A Wiener-Hopf Monte Carlo simulation technique for Lévy processes. The Annals of Applied Probability, 21(6), 2171-2190. https://doi.org/10.1214/10-aap746

Labadie, M., & Lehalle, C.-A. (2012). Optimal starting times, stopping times and risk measures for algorithmic trading. <hal-00705056>. Available at: https://hal.archives-ouvertes.fr/hal-00705056/. Accessed: 24/02/2017.

Menkveld, A. J. (2013). High frequency trading and the new market makers. Journal of Financial Markets, 16(4), 712-740. https://doi.org/10.1016/j.finmar.2013.06.006

Oracle (2012). Oracle Information Architecture: An Architect's Guide to Big Data. http://www.oracle.com/technetwork/topics/entarch/articles/oea-big-data-guide-1522052.pdf. Accessed: 11/04/2013.

Rodríguez Aguilar, R. (2014). El coeficiente de Hurst y el parámetro α-estable para el análisis de series financieras: Aplicación al mercado cambiario mexicano. Contaduría y Administración, 59(1), 149-173. https://doi.org/10.1016/s0186-1042(14)71247-1

Scalas, E., & Kim, K. (2006). The art of fitting financial time series with Lévy stable distributions. Munich Personal RePEc Archive, (336), 1-17. https://mpra.ub.uni-muenchen.de/336/. Accessed: 8/2/2012.

Shen, J., & Yu, Y. (2014). Styled algorithmic trading and the MV-MVP style. Available at SSRN 2507002.

Vivísimo (2012). Optimizing Big Data. http://www.fstsummit.com/media/whitepapers/2012/Vivisimoi.Optimizing_Big_Data.pdf. Accessed: 11/04/2013.

Yan, B., & Zivot, E. (2007). The dynamics of price discovery. AFA 2005 Philadelphia Meetings. Available at SSRN: https://ssrn.com/abstract=617161. Accessed: 23/02/2017. http://dx.doi.org/10.2139/ssrn.617161

Zikopoulos, P. C., deRoos, D., Parasuraman, K., Deutsch, T., Corrigan, D., & Giles, J. (2013). Harness the Power of Big Data: The IBM Big Data Platform. McGraw-Hill.

Peer review under the responsibility of Universidad Nacional Autónoma de México

Received: December 14, 2016; Accepted: March 06, 2017

*Corresponding author. E-mail: jach@azc.uam.mx (J. A. Climent Hernández)

Creative Commons License. This is an open access article published under a Creative Commons license.