Mathematical finance, also known as quantitative finance and financial mathematics, is a field of applied mathematics concerned with mathematical modeling of financial markets.
In general, there exist two separate branches of finance that require advanced quantitative techniques: derivatives pricing on the one hand, and risk and portfolio management on the other.^{[1]} Mathematical finance overlaps heavily with the fields of computational finance and financial engineering. The latter focuses on applications and modeling, often with the help of stochastic asset models, while the former focuses, in addition to analysis, on building tools of implementation for the models. Also related is quantitative investing, which relies on statistical and numerical models (and lately machine learning), as opposed to traditional fundamental analysis, when managing portfolios.
French mathematician Louis Bachelier's doctoral thesis, defended in 1900, is considered the first scholarly work on mathematical finance. But mathematical finance emerged as a discipline in the 1970s, following the work of Fischer Black, Myron Scholes and Robert Merton on option pricing theory. Mathematical investing originated from the research of mathematician Edward Thorp, who used statistical methods to first invent card counting in blackjack and then applied its principles to modern systematic investing.^{[2]}
The subject has a close relationship with the discipline of financial economics, which is concerned with much of the underlying theory that is involved in financial mathematics. While trained economists use complex economic models built on observed empirical relationships, mathematical finance analysis will derive and extend the mathematical or numerical models without necessarily establishing a link to financial theory, taking observed market prices as input. See: Valuation of options; Financial modeling; Asset pricing. The fundamental theorem of arbitrage-free pricing is one of the key theorems in mathematical finance, while the Black–Scholes equation and formula are amongst the key results.^{[3]} Though mathematical finance models may generate a profit in the short run, this type of modeling is often in conflict with a central tenet of modern macroeconomics, the Lucas critique (or rational expectations), which holds that observed relationships may not be structural in nature and thus may not be possible to exploit for public policy or for profit unless we have identified relationships using causal analysis and econometrics.^{[4]} Mathematical finance models therefore do not incorporate complex elements of human psychology that are critical to modeling modern macroeconomic movements, such as the self-fulfilling panic that motivates bank runs.
Today many universities offer degree and research programs in mathematical finance.
There are two separate branches of finance that require advanced quantitative techniques: derivatives pricing, and risk and portfolio management. One of the main differences is that they use different probabilities, such as the risk-neutral probability (or arbitrage-pricing probability), denoted by "Q", and the actual (or actuarial) probability, denoted by "P".
Goal: "extrapolate the present"
Environment: risk-neutral probability ℚ
Processes: continuous-time martingales
Dimension: low
Tools: Itô calculus, PDEs
Challenges: calibration
Business: sell-side
Main article: Riskneutral measure 
Further information: Black–Scholes model, Brownian model of financial markets, Martingale pricing, and Quantitative analysis (finance) § History 
The goal of derivatives pricing is to determine the fair price of a given security in terms of more liquid securities whose price is determined by the law of supply and demand. The meaning of "fair" depends, of course, on whether one considers buying or selling the security. Examples of securities being priced are plain vanilla options, exotic options, convertible bonds, etc.
Once a fair price has been determined, the sell-side trader can make a market on the security. Therefore, derivatives pricing is a complex "extrapolation" exercise to define the current market value of a security, which is then used by the sell-side community. Quantitative derivatives pricing was initiated by Louis Bachelier in The Theory of Speculation ("Théorie de la spéculation", published 1900), with the introduction of the most basic and most influential of processes, the Brownian motion, and its applications to the pricing of options.^{[5]}^{[6]} The Brownian motion is derived using the Langevin equation and the discrete random walk.^{[7]} Bachelier modeled the time series of changes in the logarithm of stock prices as a random walk in which the short-term changes had a finite variance. This causes longer-term changes to follow a Gaussian distribution.^{[8]}
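Bachelier's observation can be sketched numerically. The following minimal simulation (illustrative only; the step count and per-step volatility are arbitrary assumptions) models a random walk in the logarithm of the price and confirms that the variance of the long-horizon change grows linearly with the number of finite-variance short-term steps, consistent with a Gaussian limit:

```python
import random

random.seed(0)

def simulate_log_price_walk(n_steps, sigma=0.01):
    """Random walk in the log of the price: each short-term change is an
    i.i.d. increment with finite variance, so by the central limit theorem
    long-horizon changes are approximately Gaussian."""
    log_price = 0.0
    for _ in range(n_steps):
        log_price += random.gauss(0.0, sigma)
    return log_price

# The variance of the change over n steps should be close to n * sigma^2,
# i.e. the typical size of a move grows like the square root of time.
samples = [simulate_log_price_walk(252) for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(round(var, 4))
```

Here 252 steps stand in for one year of daily moves; the sample variance comes out close to the theoretical 252 × 0.01² = 0.0252.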
The theory remained dormant until Fischer Black and Myron Scholes, along with fundamental contributions by Robert C. Merton, applied the second most influential process, the geometric Brownian motion, to option pricing. For this work, Scholes and Merton were awarded the 1997 Nobel Memorial Prize in Economic Sciences. Black was ineligible for the prize because of his death in 1995.^{[9]}
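The resulting Black–Scholes formula for a European call under geometric Brownian motion can be written in a few lines; the sketch below (the spot, strike, rate and volatility values are hypothetical) uses only the standard normal cumulative distribution function:

```python
import math

def black_scholes_call(S, K, T, r, sigma):
    """Black–Scholes price of a European call on a non-dividend-paying
    stock following a geometric Brownian motion."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    # Standard normal CDF via the error function.
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

# Hypothetical at-the-money call: spot 100, strike 100, one year to expiry,
# 5% risk-free rate, 20% volatility.
price = black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2)
print(round(price, 2))  # about 10.45
```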
The next important step was the fundamental theorem of asset pricing by Harrison and Pliska (1981), according to which the suitably normalized current price P_{0} of a security is arbitrage-free, and thus truly fair, only if there exists a stochastic process P_{t} with constant expected value which describes its future evolution:^{[10]}
P_{0} = E_{0}[P_{t}]     (1)
A process satisfying (1) is called a "martingale". A martingale does not reward risk. Thus the probability measure of the normalized security price process is called "risk-neutral" and is typically denoted by the blackboard font letter "ℚ".
The relationship (1) must hold for all times t: therefore the processes used for derivatives pricing are naturally set in continuous time.
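A Monte Carlo sketch can illustrate the martingale condition (1): under the risk-neutral measure, the drift of a geometric Brownian motion equals the risk-free rate, so the discounted price process has constant expectation. (The geometric Brownian motion and all parameter values here are illustrative assumptions, not part of the theorem itself.)

```python
import math
import random

random.seed(1)

def discounted_price_samples(S0, r, sigma, t, n_paths):
    """Simulate terminal prices of a geometric Brownian motion under the
    risk-neutral measure (drift = risk-free rate r) and discount them
    back to time 0."""
    out = []
    for _ in range(n_paths):
        z = random.gauss(0.0, 1.0)
        S_t = S0 * math.exp((r - 0.5 * sigma ** 2) * t + sigma * math.sqrt(t) * z)
        out.append(math.exp(-r * t) * S_t)
    return out

samples = discounted_price_samples(S0=100.0, r=0.05, sigma=0.2, t=1.0, n_paths=100000)
estimate = sum(samples) / len(samples)
# By the martingale property, the expectation of the discounted price
# is constant, so the estimate should be close to S0 = 100.
print(round(estimate, 1))
```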
The quants who operate in the Q world of derivatives pricing are specialists with deep knowledge of the specific products they model.
Securities are priced individually, and thus the problems in the Q world are low-dimensional in nature. Calibration is one of the main challenges of the Q world: once a continuous-time parametric process has been calibrated to a set of traded securities through a relationship such as (1), a similar relationship is used to define the price of new derivatives.
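Calibration can be illustrated in its simplest form: given one traded option price, recover the single Black–Scholes volatility parameter that reproduces it. The bisection search below is a sketch (the market price and contract terms are hypothetical) and relies on the fact that the call price is strictly increasing in volatility:

```python
import math

def black_scholes_call(S, K, T, r, sigma):
    """Black–Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol(market_price, S, K, T, r, lo=1e-4, hi=3.0, tol=1e-8):
    """Calibrate the one free parameter sigma so that the model reproduces
    the observed market price. Bisection works because the call price is
    monotonically increasing in sigma."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if black_scholes_call(S, K, T, r, mid) < market_price:
            lo = mid  # model price too low: volatility must be higher
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical quoted price of 10.45 for an at-the-money one-year call.
sigma = implied_vol(market_price=10.45, S=100, K=100, T=1.0, r=0.05)
print(round(sigma, 3))  # about 0.2
```

Once calibrated, the same parameter is plugged back into relationship (1) to price derivatives that are not yet traded.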
The main quantitative tools necessary to handle continuous-time Q-processes are Itô's stochastic calculus, simulation and partial differential equations (PDEs).^{[11]}
Goal: "model the future"
Environment: real-world probability ℙ
Processes: discrete-time series
Dimension: large
Tools: multivariate statistics
Challenges: estimation
Business: buy-side
Risk and portfolio management aims at modeling the statistically derived probability distribution of the market prices of all the securities at a given future investment horizon.
This "real" probability distribution of the market prices is typically denoted by the blackboard font letter "ℙ", as opposed to the "risk-neutral" probability "ℚ" used in derivatives pricing. Based on the P distribution, the buy-side community makes decisions on which securities to purchase in order to improve the prospective profit-and-loss profile of their positions considered as a portfolio. Increasingly, elements of this process are automated; see Outline of finance § Quantitative investing for a listing of relevant articles.
For their pioneering work, Markowitz and Sharpe, along with Merton Miller, shared the 1990 Nobel Memorial Prize in Economic Sciences, the first time the prize was awarded for work in finance.
The portfolio-selection work of Markowitz and Sharpe introduced mathematics to investment management. With time, the mathematics has become more sophisticated. Thanks to Robert Merton and Paul Samuelson, one-period models were replaced by continuous-time, Brownian-motion models, and the quadratic utility function implicit in mean–variance optimization was replaced by more general increasing, concave utility functions.^{[12]} Furthermore, in recent years the focus shifted toward estimation risk, i.e., the dangers of incorrectly assuming that advanced time series analysis alone can provide completely accurate estimates of the market parameters.^{[13]} See Financial risk management § Investment management.
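The flavor of Markowitz's mean–variance optimization can be shown in the simplest two-asset case, where the minimum-variance portfolio weights have a closed form. The variances and covariance below are hypothetical inputs, of the kind the P-world quant must estimate from historical data:

```python
def min_variance_weights(var_a, var_b, cov_ab):
    """Closed-form minimum-variance weights for a fully invested two-asset
    portfolio: minimizing w^2*var_a + (1-w)^2*var_b + 2*w*(1-w)*cov_ab
    over w gives w_a = (var_b - cov_ab) / (var_a + var_b - 2*cov_ab)."""
    w_a = (var_b - cov_ab) / (var_a + var_b - 2.0 * cov_ab)
    return w_a, 1.0 - w_a

# Hypothetical annualized estimates: asset A has variance 0.04 (20% vol),
# asset B has variance 0.09 (30% vol), with covariance 0.006.
w_a, w_b = min_variance_weights(var_a=0.04, var_b=0.09, cov_ab=0.006)
port_var = w_a ** 2 * 0.04 + w_b ** 2 * 0.09 + 2 * w_a * w_b * 0.006
print(round(w_a, 3), round(port_var, 4))
```

The resulting portfolio variance is below that of either asset alone, the diversification effect at the heart of the theory.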
Much effort has gone into the study of financial markets and how prices vary with time. Charles Dow, one of the founders of Dow Jones & Company and The Wall Street Journal, enunciated a set of ideas on the subject which are now called Dow Theory. This is the basis of the so-called technical analysis method of attempting to predict future changes. One of the tenets of "technical analysis" is that market trends give an indication of the future, at least in the short term. The claims of the technical analysts are disputed by many academics.^{[citation needed]}
Further information: Financial economics § Challenges and criticism, and Financial engineering § Criticisms 
See also: Financial models with long-tailed distributions and volatility clustering 
Over the years, increasingly sophisticated mathematical models and derivative pricing strategies have been developed, but their credibility was damaged by the financial crisis of 2007–2010. Contemporary practice of mathematical finance has been subjected to criticism from figures within the field, notably by Paul Wilmott and by Nassim Nicholas Taleb, in his book The Black Swan.^{[14]} Taleb claims that the prices of financial assets cannot be characterized by the simple models currently in use, rendering much of current practice at best irrelevant, and, at worst, dangerously misleading. Wilmott and Emanuel Derman published the Financial Modelers' Manifesto in January 2009,^{[15]} which addresses some of the most serious concerns. Bodies such as the Institute for New Economic Thinking are now attempting to develop new theories and methods.^{[16]}
In general, modeling the changes by distributions with finite variance is, increasingly, said to be inappropriate.^{[17]} In the 1960s it was discovered by Benoit Mandelbrot that changes in prices do not follow a Gaussian distribution, but are rather modeled better by Lévy alpha-stable distributions.^{[18]} The scale of change, or volatility, depends on the length of the time interval to a power a bit more than 1/2. Large changes up or down are more likely than what one would calculate using a Gaussian distribution with an estimated standard deviation. However, this observation does not by itself resolve the difficulty: heavy-tailed models make parametrization much harder and risk control less reliable.^{[14]}
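The point can be made concrete by comparing tail probabilities. The sketch below contrasts the exact Gaussian probability of a four-standard-deviation move with a Monte Carlo estimate for a Student-t distribution with three degrees of freedom, used here purely as a simple heavy-tailed stand-in (a t distribution has finite variance for more than two degrees of freedom, unlike the non-Gaussian stable laws discussed above, but already exhibits much fatter tails than the Gaussian):

```python
import math
import random

random.seed(2)

def gaussian_tail(k):
    """Exact P(|Z| > k) for a standard Gaussian Z."""
    return math.erfc(k / math.sqrt(2.0))

def student_t_tail_mc(k, df=3, n=200000):
    """Monte Carlo estimate of P(|T| > k standard deviations) for a
    Student-t variable T with df degrees of freedom, sampled as a
    standard normal divided by sqrt(chi-squared / df)."""
    std = math.sqrt(df / (df - 2))  # t has variance df/(df-2) for df > 2
    hits = 0
    for _ in range(n):
        z = random.gauss(0.0, 1.0)
        chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
        t = z / math.sqrt(chi2 / df)
        if abs(t) > k * std:
            hits += 1
    return hits / n

g = gaussian_tail(4.0)      # about 6.3e-05: a "4-sigma" day is nearly impossible
t = student_t_tail_mc(4.0)  # orders of magnitude more likely under fat tails
print(g, t)
```

A Gaussian with the same standard deviation thus drastically underestimates the frequency of extreme moves, which is exactly the failure mode Mandelbrot identified.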