Markov Chain Monte Carlo Method for Estimating Implied Volatility in Option Pricing

Using market prices of covered European call options, an Independence Metropolis-Hastings sampler algorithm is proposed for estimating the implied volatility in option pricing. The algorithm has an acceptance criterion that facilitates accurate approximation of this volatility from an independent proposal path in the Black-Scholes model, given a finite set of observations from the stock market. Assuming the underlying asset indeed follows a geometric Brownian motion, an inverted version of the Black-Scholes model is used to approximate the implied volatility, which is not directly observable in the real market; the Black-Scholes model itself assumes the volatility to be constant. Moreover, it is demonstrated that the implied volatility from the options market tends to overstate or understate the actual expectation of the market. Three-month covered European call option data from 30 different stock companies, acquired from Optionistics.com, were used to estimate the implied volatility. The method accurately approximates the actual expectation of the market, with low standard errors ranging between 0.0035 and 0.0275.


Introduction of Problem
The revolution in trading and valuing financial securities started around the mid-1970s. In Kwon & Lee (2011), the Black-Scholes theory of option pricing and the partial differential equation describing these option prices is given as

∂V/∂t + (1/2) σ² S² ∂²V/∂S² + r S ∂V/∂S − r V = 0,   (1)

where V(S, t) is the option price, S the underlying asset price, r the risk-free rate, and σ the volatility function; the volatility function carries a superior understanding of the underlying stochastic process of option values. However, the volatility function is not directly observable from option values. If the volatility is constant, (1) becomes the classical Black-Scholes model. In the real market, however, volatility changes over time (Franks & Schwartz, 1991; Heynen, 1994). Volatility measures changes in asset prices, and in this research volatility is the standard deviation of returns. It is important to estimate it precisely in portfolio management, risk measurement, asset pricing, and fiscal policy. The estimation of volatility has been an essential research theme of modern financial markets. To better estimate the volatility function of a real market, one takes the volatility to be time dependent, that is, one considers σ(t) rather than a constant σ (Egger & Engl, 2005; Egger, Hein & Hofmann, 2006). A few studies on volatility functions that are spatially dependent, that is σ = σ(X), have also appeared in recent years (Deng, Yu & Yang, 2008; Jiang & Tao, 2001).

Aims
Accurately estimating the implied volatility function allows for a clearer insight into the behavior of option values in the market. However, this implied volatility function is not directly obtainable from option values in the real financial market. The implied volatility used in the financial market tends to overstate or understate the actual expectation of the market in nearly all long-term cases. The estimation of implied volatility has been a critical research area of the modern financial market. In this research, the volatility function of a financial market is estimated by taking the volatility to be a time-dependent function σ(t), instead of the constant σ of the Black-Scholes model. Hence the aims of this research are:
• To estimate the time-dependent implied volatility using options data.
• To estimate the standard errors of this MCMC method.
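Since the first aim rests on inverting the Black-Scholes formula numerically, the following Python sketch shows one common way to recover an implied volatility from a single observed call price by bisection. This is a generic illustration, not the paper's method: the function names and the bracketing interval [1e-6, 5] are assumptions made here for the example.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, T, sigma):
    # Black-Scholes price of a European call option
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, r, T, lo=1e-6, hi=5.0, tol=1e-8):
    # Bisection works because bs_call is strictly increasing in sigma;
    # the bracket [lo, hi] is an illustrative assumption.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, r, T, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Because the Black-Scholes call price is monotone in σ (vega is positive), the bisection converges to the unique implied volatility inside the bracket.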

Related Work
Ball and Roma (1994) examine alternative methods for option pricing when the underlying security's volatility is stochastic. They use power series methods, which they show to be quite accurate and easy to implement for alternative volatility specifications. They conclude that these solution methods are only applicable when the correlation between innovations in security prices and volatility is zero; the estimation problem in implementing stochastic volatility pricing therefore remains a promising focus for further studies. Chang & Chang (1996) investigate the implication that the arrival of volatility is stochastic in calendar time and becomes stationary in information time. They outline the steps in changing from calendar time to information time and show that their simulations may outperform the Black-Scholes model in pricing current options, so that their information-time valuation approach is potentially a viable alternative to the calendar-time norm. Nevertheless, they point out that the pricing of stochastic volatility in asset valuation remains a difficult but increasingly important problem. Bouchouev and Isakov (1999) use information from the market prices of financial derivatives to recover an unobserved local volatility function for the underlying stochastic process by means of a systematic approximation. Using the dual equation, they give a broad mathematical formulation of this inverse problem, demonstrating its uniqueness and stability by reviewing different approaches to its numerical solution. They assume that the local volatility function is Hölder continuous, and stress that stability plays an essential part in robust calibration of the pricing model. By choosing a suitable volatility parameterization, the market prices on a given day can be predicted well with a finite number of parameters.
Optimization-based numerical methods such as entropy minimization, dynamic programming, the discrepancy method, least-discrepancy estimation, and smoothest-volatility approaches have been used for determining the local volatility of option prices. Nevertheless, these problems are under-determined, and the optimal local volatility is taken to be the one closest to some subjectively chosen estimate. Preliminary experiments show that relative performance depends on the particular market conditions, the time horizon, and the desired trade-off between accuracy, stability, and computational efficiency; a systematic comparative analysis of these methods is left for further research. Borovkov & Novikov (2002) describe a modest approach to estimating expectations of a specific form for pricing vanilla options. The expectation is estimated by simply integrating a moment generating function against a certain weight; for instruments such as barrier options, only one additional integration is needed. Their method appears to precede all others (and, apart from Monte Carlo simulation, to be the first) in pricing discretely monitored exotic options under a general Lévy process. In this case, however, they assume that the volatility is known, which makes the setting similar to the Black-Scholes model and therefore does not capture a real market situation. Crépey (2003) concedes that estimating volatility is a major challenge in the financial market. Whereas historical estimates of volatility rely on observations of the time series of the option value, implied or calibration estimates rely on the expectations of the trading agents reflected in the prices of the traded option products derived from the stock.
Adopting the strategy examined by Lagnado & Osher (1997), Crépey applies Tikhonov regularization to the inverse problem of calibrating the local volatility in a generalized Black-Scholes model to observed vanilla option values. Previous approaches took a numerical and empirical perspective on Tikhonov regularization; this paper builds a thorough theoretical platform for the inverse problem in a partial differential equation setting. Risk managers can then assess risk exposure and compute hedge ratios consistently with the market using the calibrated local volatility function. Two problem cases are treated: first, the matching of observed option prices on the actual, hence finite, set of pairs (K, T), referred to as the discrete calibration problem; and second, the matching over all (K, T) such that T ≥ t0, K > 0, referred to as the continuous calibration problem. These problems are ill-posed because the solution depends on the data in an unstable way. The paper first establishes W^{1,2}_p estimates for the Black-Scholes and Dupire equations with measurable ingredients, and then applies Tikhonov regularization for ill-posed nonlinear inverse problems to obtain the general results available in the theory. It leaves as an open question whether the application to American option prices can be extended to calibration from European option prices, and whether the continuity assumption in the theory is necessary. Hein and Hofmann (2003) analyze the inverse problem of calibrating a purely time-dependent volatility function from a term structure of option prices, understood as an ill-posed nonlinear operator equation between spaces of continuous and power-integrable functions over a finite interval.
Posing the problem with the Black-Scholes model as the forward operator, their focus is on ill-posedness, the effect of data smoothing, and no-arbitrage conditions. The forward operator of the inverse problem (IP) is analyzed in two ways: the IP is decomposed into an inner linear convolution operator, leading to a differentiation problem that is ill-posed in a global way, and an outer nonlinear Nemytskii operator given by a Black-Scholes function, leading to an ill-posedness effect, localized at small times, arising from the inversion of the outer operator. They point out that many discussions of regularization approaches for stable solutions of an IP have been researched in detail without analyzing the ill-posedness phenomena of such problems in practice. Furthermore, time-dependent volatility functions combined with maturity-dependent families of option prices had not attracted much interest, though the model setting is sometimes limited. They attempt to fill this gap in the literature by analyzing ill-posed situations and additional conditions enforcing well-posed subproblems relating maturity-dependent option prices and volatility functions in spaces of continuous and power-integrable functions on a finite time interval. They also give an in-depth insight into the effect of data smoothing, no-arbitrage conditions, and the characteristics of at-the-money options. None of these phenomena becomes tractable if one considers price-dependent volatilities and strike-dependent option prices. They use Tikhonov regularization in L2, with the regularization parameter chosen by Hansen's L-curve criterion. In their conclusion, they observe that, because of the completely different problem structure, the numerical analysis used in their paper could not be generalized to the case of calibrating price-dependent volatility functions.
The observed ill-posedness effects also affect the prospects of the practical problem of fitting volatility smiles in general. Papanicolaou et al. (2003), following Borovkov & Novikov (2002), show that, in a regime of separation of time scales between the main observed process and the volatility-driving process, asymptotic methods are efficient in capturing the effects of random volatility through simple robust corrections to constant-volatility formulas. Insight from partial differential equations shows that this strategy corresponds to a singular perturbation analysis. They intended to handle the non-smoothness of the payoff inherent to option pricing; however, this failed in the case of call options, which is particularly important since the calibration of models relies on these instruments, and they left these kinds of singularities open for further studies. Broadie and Detemple (2004) survey the literature on option pricing from its origin to the present, emphasizing recent trends and developments in methodology and modeling. They pinpoint essential observations in option pricing, noting that not only do implied volatility curves depend systematically on the strike price (K) and the maturity time (T), but this dependence also varies in an unusual way with the passage of calendar time. To address these empirical observations, the analysis describes volatility as a changing process rather than a constant. They expand on the major numerical approaches and group them into four categories, namely: i. Transform and asymptotic expansion methods, in which noteworthy advances have recently been made. Fourier, Laplace, and generalized transform methods have been applied to stochastic volatility (SV) models, the valuation of Asian options, and many other option pricing quantities.
Asymptotic expansion and singular perturbation techniques have proved particularly valuable in producing analytical formulas and approximations for SV models. ii. Lattice methods use discrete-time, discrete-state approximations to stochastic differential equations (SDEs) to compute financial prices. Lattice approaches were first proposed in Parkinson (1977) and Black & Scholes (1973). Lattice techniques are easy to explain and implement, and have been described in practically every textbook on financial assets. The triangular lattice avoids the spatial boundary conditions needed for finite difference techniques. These features make lattice methods attractive for educational purposes and for computing financial prices in simpler models. iii. Finite difference methods give numerical solutions to the underlying pricing partial differential equations (PDEs), and are normally the method of choice for complicated models and securities. iv. Monte Carlo simulation is a natural method for computing means. A Monte Carlo method consists of three fundamental steps: the first step is to generate n random paths of the underlying state variables; the next is to compute the corresponding n discounted option payoffs; the last step is to average the outcomes to estimate the mean, and usually a standard error of the estimate is also computed.
Tree and Monte Carlo methods are based on the probabilistic problem formulation, whereas finite difference methods are based on the PDE formulation.
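The three Monte Carlo steps described above (simulate n paths, discount the payoffs, average and report a standard error) can be sketched as follows. This is a generic illustration under geometric Brownian motion, not the algorithm proposed in this paper; the function name and parameters are assumptions made for the example.

```python
import math
import random

def mc_european_call(S0, K, r, sigma, T, n_paths, seed=0):
    """Monte Carlo price of a European call under geometric Brownian motion.

    Step 1: simulate n terminal asset prices; step 2: compute the
    discounted payoffs; step 3: average them and report a standard error.
    """
    rng = random.Random(seed)
    disc = math.exp(-r * T)
    total = 0.0
    total_sq = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        # Exact GBM terminal value under the risk-neutral measure
        ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        payoff = disc * max(ST - K, 0.0)
        total += payoff
        total_sq += payoff * payoff
    mean = total / n_paths
    var_of_mean = (total_sq / n_paths - mean * mean) / n_paths
    return mean, math.sqrt(var_of_mean)
```

For an at-the-money call with S0 = K = 100, r = 0.02, σ = 0.2, T = 0.25, the estimate converges to the Black-Scholes value of about 4.23 as n grows, with the standard error shrinking like 1/√n.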
The survey concludes with ongoing research directions for these numerical approaches, that is: i. Using path simulation methods when there are non-linearities in the financial SDEs.
ii. Computational advancement through variance reduction techniques.
iii. Extending Monte Carlo methods to the valuation of American options. Egger and Engl (2005) use Tikhonov regularization to identify stable local volatility surfaces σ(S, t) in a Black-Scholes/Dupire equation from market-observed prices of European vanilla options. The strategy gives a stable and convergent technique for volatility estimation by calibrating the model, but they emphasize that, for a sensible reconstruction of the term structure of volatility, additional information must be incorporated. Deng, Yu and Yang (2008) use the Green's function technique in an optimal control framework to address the inverse problem (IP) of determining the implied volatility when the average option price, that is, the value of the option price corresponding to a given strike and all possible maturities from the present date to a chosen maturity time, is known. This is transformed into a terminal control problem, for which the optimal control technique addresses the existence and uniqueness of the minimum of the control functional when the necessary conditions are fulfilled. If the model for option pricing were perfectly correct, the implied volatility would be the same for all options on the same underlying assets with different K and T. Unfortunately, this is not the case: implied volatilities vary with K and T, which is respectively termed the "smile effect" and the "term structure". They accommodate implied volatilities that vary with K by making the volatility depend on the asset price. They can then transform the inverse problem into an optimal control problem, for which the existence and the necessary conditions of the minimum of the control functional are established. However, due to the non-convexity of the optimal control problem, they could not obtain a globally unique solution, but only proved its local uniqueness.
Lu and Yi (2009) introduce the standard Tikhonov method as a regularization scheme for recovering the implied volatility of European option prices under certain conditions, establishing the local uniqueness and stability of the implied volatility in an integral equation obtained from the Dupire equation. They establish the empirical facts that implied volatility decreases with the strike level and increases with time to maturity, which is called the smile effect. The inverse problems used to recover implied volatilities are usually ill-posed, and many difficulties arise from them: solution uniqueness and stability, and the various numerical approaches. Knowing that volatility depends on both time and price, they consider the case in which volatility not only depends on price but is also related to the term structure of the volatility, in order to prove the uniqueness and stability of their algorithm. The proposed regularization scheme, used to solve a Fredholm integral equation of the first kind, does not discuss the choice of the parameter used for the numerical illustration, because the choice of a suitable regularization parameter in this technique is still an open problem. Another shortcoming of the work is that, in the real market, option prices may not follow their assumptions completely, and the noise in the option data may be very large. It is therefore necessary in future research to apply their algorithm to data collected from the real market and draw further conclusions. Rambharat & Brockwell (2010) introduce a sequential Monte Carlo method, within a Markov Chain Monte Carlo framework, for American option pricing under stochastic volatility models. The method does not depend on the volatility being observed, but expresses the optimal decision functions through conditional distributions of the volatility given the observed data, using dynamic programming.
The algorithm generates near-optimal solutions under these conditions by combining the optimal decisions. They use a sequential Monte Carlo particle filtering scheme to make inference on the unseen volatility process at each time interval and, given present and past data of the price process, find a solution of the conditional distribution of the unseen volatility at that time using a dynamic programming problem. They illustrate two variants of the algorithm: the least-squares Monte Carlo algorithm of Longstaff & Schwartz (2001) and a brute-force gridding approach. They demonstrate the algorithm by estimating the posterior distribution of the market price of volatility risk for three equities (Dell Inc., The Walt Disney Company, and Xerox Corporation). They remark that, when working with models that may not permit exact simulation, first- and higher-order discretization techniques need to be resorted to in order to facilitate option-pricing methods. The main limitation of the sequential Monte Carlo algorithm is its heavy computational demand.
Kwon and Lee (2011) develop a finite difference method to solve the partial integro-differential equation (PIDE) that describes the behavior of option prices under jump-diffusion models. They observe that in the Black-Scholes model the volatility parameter is assumed to be constant; this assumption, however, cannot account for the volatility smiles or skews typically seen in the option markets. Their technique avoids iterations at each time step and has a second-order convergence rate in solving the PIDE under the jump-diffusion model. It also focuses on the construction of linear systems whose coefficient matrices are not dense but tri-diagonal, using three time levels rather than two in an implicit scheme. Their results demonstrate that the pointwise errors decrease and that the errors have a second-order convergence rate; this, however, they did not prove, and left for further research.
De Cezaro et al. (2012) develop the theoretical part of the practical problem of determining the volatility of European call options from market-observed prices. This is a nonlinear and ill-posed problem whose solution requires regularization.
They use Tikhonov regularization by means of a convex regularizing functional, extending the quadratic regularization that has appeared in previous works in the inverse problems literature. They address the problem from the viewpoint of convex analysis techniques and Bregman distances. On the theoretical side, their results show better convergence rates and allow convergence in spaces different from those of the quadratic regularization technique. The Bregman distance, together with suitable regularization functionals (e.g., Kullback-Leibler), with the noise level tending to zero, yields the stability and convergence of the regularized solution. One advantage of this method is that it requires weaker conditions than those in previous literature. The paper also recommends gathering more information on the measurement data µδ, and remarks that further research, such as the connection to risk measures, should be explored. Finally, a numerical implementation of their work with actual market data is also left open for further studies. Trong et al. (2014) construct an elementary regularization to recover the implied volatility σ = σ(t) from European option price data using the Black-Scholes formula, giving a precise formula for the regularization parameter and an error evaluation, thereby improving the results obtained by Hein and Hofmann (2003) and Kramer and Richter (2008). In doing so, the question of how to formulate the regularization scheme is considered by eliminating the ill-conditioning effect through truncating the small values of the function. The choice of the regularization parameter is also considered.
The size of the deviation between the real and regularized functions is assessed by examining the inner problem (finding the implied volatility σ(t) from the underlying asset S(t) by recovering a derivative from its integral) and the outer problem (the ill-conditioning of the scheme in the case of at-the-money options, handled by regularizing the problem) proposed in Hein and Hofmann (2003). They justify that the outer problem is optimal by obtaining smaller estimates for the solution.
Wang, Yang & Zeng (2014) point out that the estimation of implied volatility is a typical PDE inverse problem, for which they propose a total variation model, TV-L, for identifying the implied volatility. They find the optimal volatility function by minimizing a cost functional measuring the discrepancy, and use the adjoint method to obtain the exact value of the gradient for the minimization procedure. In summary, the TV-L model is used to recover the implied volatility under the framework of the Black-Scholes model, and this is accomplished by solving an optimal control problem using regularization methods. Trong et al. (2016) propose a residual technique as a regularization instrument for the inverse problem of finding the time-dependent volatility in option pricing. They state that volatility is not constant, as often assumed, and that the market is concerned with problems of identifying a non-constant volatility. They face the problem of calibrating the implied volatility σ(t, X) from a fair value of the option, which is an inverse problem in option valuation and is not well-posed. Methods such as Green's functions, mollification, and Tikhonov minimization, among others, have been used to regularize the problem in previous literature. They first examine the problem of consistency, that is, of finding an operator R such that a positive function δ satisfying the continuity and inverse properties of the regularization scheme exists. They then examine the convergence rate, which establishes the error between the regularized and the real solution. This, however, they could not fully achieve; they did not obtain a good rate of convergence, but compared their results with those of previous works and concluded that the logarithmic rate in their research is acceptable.
The problem is further decomposed into an outer and an inner one, for which they construct a regularization scheme for the outer problem and then recover a derivative from its integral in the inner problem, thereby proving the instability of the problem. They conclude by constructing a regularization scheme that does not use a prior constant, as they are able to justify their first solution to the problem of consistency. Nevertheless, they could not justify their second problem to obtain optimal results for the rate of convergence, and only gave an analysis of the convergence rate.

Independence Metropolis-Hastings (IMH) Sampler
The IMH algorithm is a special case of the M-H algorithm. It differs from the M-H algorithm in that new proposed values for the Markov chain are completely independent of the previous value in the chain. In the IMH algorithm, the candidate draw Φ_t^(i+1) is drawn from a proposal distribution r(Φ_t^(i+1)) that is independent of the previous Markov state Φ_t^(i); hence the name "Independence Metropolis-Hastings sampler". Although the candidate draw Φ_t^(i+1) is sampled independently of the previous state, the sequence (Φ_t^(i))_{i=1}^{M} is not independent, since the acceptance probability depends on the previous draw. The technique still yields the desired result, despite this independence property, through the acceptance probability applied to each new draw.
In the case of option pricing, the financial market observes the time series of asset prices X = (X_1, ..., X_T), option prices N = (N_1, ..., N_T), and continuously compounded returns V. The joint posterior is p(µ, σ | X, N, V), and the Hammersley-Clifford theorem implies that the conditionals p(µ | σ, X, N, V) and p(σ | µ, X, N, V) completely characterize this joint posterior. Proceeding in this way, the technique yields draws from the joint posterior, which clearly demonstrates the strength of the IMH method. The conditional density p(σ_t | µ, V, X, N) can be evaluated but not directly sampled, because of the nonlinearity with which σ_t enters the Black-Scholes option valuation, and it is bounded in terms of the underlying asset price X_t. The tail behavior of π(σ_t) is controlled by the price likelihood, and the IMH chain converges geometrically.
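The acceptance step described above can be sketched in Python as a generic Independence Metropolis-Hastings routine for a scalar volatility parameter. The posterior and proposal are passed in as functions, since the exact conditional p(σ_t | µ, V, X, N) depends on the Black-Scholes likelihood; all names here are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def imh_implied_vol(log_post, proposal_sample, proposal_logpdf,
                    n_draws, seed=0):
    """Independence Metropolis-Hastings sketch for a volatility parameter.

    log_post:         log of the (unnormalized) posterior of sigma
    proposal_sample:  draws a candidate sigma, independent of the chain state
    proposal_logpdf:  log density of the proposal distribution
    """
    rng = random.Random(seed)
    sigma = proposal_sample(rng)          # initialize from the proposal
    draws = []
    for _ in range(n_draws):
        cand = proposal_sample(rng)       # candidate ignores current state
        # IMH acceptance ratio: importance weights of candidate vs current
        log_alpha = (log_post(cand) - proposal_logpdf(cand)) \
                  - (log_post(sigma) - proposal_logpdf(sigma))
        if math.log(rng.random()) < min(0.0, log_alpha):
            sigma = cand                  # accept the candidate
        draws.append(sigma)               # otherwise keep the current state
    return draws
```

For example, with a toy Gaussian log-posterior centered at σ = 0.3 and a Uniform(0.05, 1) proposal (whose constant log density cancels in the ratio), the post-burn-in draws average close to 0.3, illustrating how the acceptance probability corrects for the state-independent proposal.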

Data
In this research, 30 covered European call options acquired from Optionistics.com were used as market option data. The data cover 18th November, 2016 to 17th February, 2017, a three-month expiration window, with strike prices ranging from 2.00 to 17.50. The data set records the historical and implied volatilities of the options as well as the bias in these values.

Results
Below are the implied volatility graphs of the 30 option data sets from November 2016 to February 2017.

Discussion
The plot in Figure 1 clearly depicts the fact that the implied volatility tends to deviate from the actual expected move of the real market. If the implied volatility understates the actual move of the market, premium buying is advantageous; if it overstates the actual move, premium selling is advantageous. Option spreads are positions created by option traders by purchasing and selling an equal number of option contracts, with different strikes, on the same underlying security. In this situation, option spreads are used to reduce overall risk by guaranteeing that profits and losses are limited to a range.
A plot of the historical volatility and the volatility approximated by the Independence Metropolis-Hastings sampler algorithm (IMH volatility) is shown in Figure 2; the difference between the two is negligible.
The various volatilities are plotted together against the strike prices in Figure 3. This clearly shows that the Independence Metropolis-Hastings sampler algorithm approximates the market expectation more accurately than the implied volatility. This is because the algorithm updates the volatility of the stock price over time, thereby smoothing out the jumps that may occur in the real market.
Also, the larger the standard error relative to the size of the estimate, the less reliable the estimate. The Independence Metropolis-Hastings algorithm shows low standard errors for estimating the implied volatility, and therefore provides a better approximation to the market expectation, with standard errors between 0.0035 and 0.0275.
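For reference, a standard error of this kind is computed from the retained MCMC draws. The naive formula below ignores autocorrelation in the chain (and so understates the true uncertainty); it is only a minimal sketch of how such a figure is obtained, with names assumed for the example.

```python
import math

def mcmc_mean_and_se(draws, burn_in=0):
    """Posterior mean of MCMC draws and a naive standard error.

    The s/sqrt(n) standard error treats the draws as independent,
    which a Markov chain's draws are not; effective-sample-size
    corrections would be needed for a rigorous figure.
    """
    kept = draws[burn_in:]
    n = len(kept)
    mean = sum(kept) / n
    var = sum((x - mean) ** 2 for x in kept) / (n - 1)  # sample variance
    return mean, math.sqrt(var / n)
```

A small standard error relative to the estimate, as in the 0.0035 to 0.0275 range reported here, indicates that the posterior mean is tightly determined by the draws.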

Conclusion
The Black-Scholes model assumes a constant volatility over time, while traders know that volatility is far from constant and try to anticipate its trends and levels. Implied volatility therefore incorporates the expectations of the market trading in the option and its underlying assets. Although volatility is not observable in the market, other factors, for example a sudden political event or surprising news regarding a particular stock, also drive prices away from their theoretically expected values. These factors cannot be measured and can affect both option prices and the accuracy of price modeling.
In the absence of these external factors, the Independence Metropolis-Hastings sampler algorithm can precisely estimate, with low standard errors between 0.0035 and 0.0275, the market expectations of stock and option prices, given all the other inputs of the Black-Scholes model. With these data at hand, one can use this algorithm to accurately approximate the implied volatility of an available stock option.