

In mathematical analysis and its applications, a **function of several real variables** or **real multivariate function** is a function with more than one argument, with all arguments being real variables. This concept extends the idea of a function of a real variable to several variables. The "input" variables take real values, while the "output", also called the "value of the function", may be real or complex. However, the study of complex-valued functions can easily be reduced to that of real-valued functions, by considering the real and imaginary parts of the complex function; therefore, unless explicitly specified, only real-valued functions will be considered in this article.

The domain of a function of *n* variables is the subset of **R**^{n} for which the function is defined. As usual, the domain of a function of several real variables is supposed to contain a nonempty open subset of **R**^{n}.

A **real-valued function of n real variables** is a function that takes as input *n* real numbers, commonly represented by the variables *x*_{1}, *x*_{2}, …, *x _{n}*, and produces another real number, the *value* of the function, commonly denoted *f*(*x*_{1}, *x*_{2}, …, *x _{n}*).

Some functions are defined for all real values of the variables (one says that they are everywhere defined), but some other functions are defined only if the values of the variables are taken in a subset *X* of **R**^{n}, the domain of the function, which is always supposed to contain a nonempty open subset of **R**^{n}. In other words, a real-valued function of *n* real variables is a function

such that its domain *X* is a subset of **R**^{n} that contains a nonempty open set.

An element of *X* being an *n*-tuple (*x*_{1}, *x*_{2}, …, *x _{n}*) (usually delimited by parentheses), the general notation for denoting functions would be *f*(*x*_{1}, *x*_{2}, …, *x _{n}*).

It is also common to abbreviate the *n*-tuple (*x*_{1}, *x*_{2}, …, *x _{n}*) by using a notation similar to that for vectors, like boldface **x**, and to write *f*(**x**).

A simple example of a function in two variables could be:

*V*(*A*, *h*) = *Ah*/3,

which is the volume *V* of a cone with base area *A* and height *h* measured perpendicularly from the base. The domain restricts all variables to be positive since lengths and areas must be positive.
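As a minimal numeric sketch of this example (the function name `cone_volume` and the explicit domain check are illustrative, not from the original):

```python
def cone_volume(A: float, h: float) -> float:
    """Volume V = A*h/3 of a cone with base area A and height h.

    The domain restricts both variables to be positive, mirroring the
    restriction described in the text."""
    if A <= 0 or h <= 0:
        raise ValueError("outside the domain: A and h must be positive")
    return A * h / 3.0

# A cone with base area 6 and height 2 has volume 4.
print(cone_volume(6.0, 2.0))  # → 4.0
```

Calling the function outside its domain (say, with a negative height) raises an error rather than returning a meaningless value.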

For an example of a function in two variables:

*z* = *f*(*x*, *y*) = *ax* + *by*,

where *a* and *b* are real non-zero constants. Using the three-dimensional Cartesian coordinate system, where the *xy* plane is the domain **R**^{2} and the *z* axis is the codomain **R**, one can visualize the graph as a two-dimensional plane, with a slope of *a* in the positive *x* direction and a slope of *b* in the positive *y* direction. The function is well-defined at all points (*x*, *y*) in **R**^{2}. The previous example can be extended easily to higher dimensions:

*z* = *f*(*x*_{1}, *x*_{2}, …, *x _{p}*) = *a*_{1}*x*_{1} + *a*_{2}*x*_{2} + ⋯ + *a _{p}x _{p}*,

for *p* non-zero real constants *a*_{1}, *a*_{2}, …, *a _{p}*, which describes a hyperplane in **R**^{*p* + 1}.

The Euclidean norm:

*f*(**x**) = ‖**x**‖ = √(*x*_{1}² + *x*_{2}² + ⋯ + *x _{n}*²)

is also a function of *n* variables which is everywhere defined, while its reciprocal

*g*(**x**) = 1/‖**x**‖

is defined only for **x** ≠ (0, 0, …, 0).
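Both examples can be sketched directly in Python; `euclidean_norm` and `reciprocal_norm` are illustrative names:

```python
import math

def euclidean_norm(x):
    """Everywhere-defined function of n variables: the Euclidean norm."""
    return math.sqrt(sum(t * t for t in x))

def reciprocal_norm(x):
    """Defined only for x != (0, 0, ..., 0)."""
    n = euclidean_norm(x)
    if n == 0.0:
        raise ValueError("outside the domain: x is the origin")
    return 1.0 / n

print(euclidean_norm((3.0, 4.0)))   # → 5.0
print(reciprocal_norm((3.0, 4.0)))  # → 0.2
```

Note that `euclidean_norm` accepts a tuple of any length, matching the fact that the norm is defined for every *n*.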

For a non-linear example function in two variables:

which takes in all points in *X*, a disk of radius √8 "punctured" at the origin (*x*, *y*) = (0, 0) in the plane **R**^{2}, and returns a point in **R**. The domain excludes the origin (*x*, *y*) = (0, 0); if it were included, *f* would be ill-defined at that point. Using a 3D Cartesian coordinate system with the *xy*-plane as the domain **R**^{2} and the *z* axis as the codomain **R**, the image can be visualized as a curved surface.

The function can be evaluated at the point (*x*, *y*) = (2, √3) in *X*:

However, the function could not be evaluated at, say

since these values of *x* and *y* do not satisfy the domain's rule.
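The exact formula for *f* is lost in this copy of the text, so the following sketch uses a hypothetical stand-in with the same domain, the disk of radius √8 punctured at the origin; the formula f(x, y) = 1/(x² + y²) is assumed purely for illustration:

```python
import math

def f(x: float, y: float) -> float:
    """Hypothetical stand-in: f(x, y) = 1/(x^2 + y^2) on the punctured
    disk 0 < x^2 + y^2 <= 8; the exact function in the text differs."""
    r2 = x * x + y * y
    if r2 == 0.0 or r2 > 8.0:
        raise ValueError("(x, y) is outside the domain X")
    return 1.0 / r2

# (2, sqrt(3)) lies in X, since 2^2 + 3 = 7 <= 8 and is not the origin.
print(f(2.0, math.sqrt(3.0)))   # ≈ 1/7
```

Evaluating at a point with *x*² + *y*² > 8, or at the origin, raises an error because the domain's rule is not satisfied.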

The **image** of a function *f*(*x*_{1}, *x*_{2}, …, *x _{n}*) is the set of all values of *f* when the *n*-tuple (*x*_{1}, *x*_{2}, …, *x _{n}*) runs in the whole domain of *f*.

The preimage of a given real number *c* is called a level set. It is the set of the solutions of the equation *f*(*x*_{1}, *x*_{2}, …, *x*_{n}) = *c*.


The domain of a function of several real variables is a subset of **R**^{n} that is sometimes, but not always, explicitly defined. In fact, if one restricts the domain *X* of a function *f* to a subset *Y* ⊂ *X*, one gets formally a different function, the *restriction* of *f* to *Y*, which is denoted *f*|_{*Y*}. In practice, it is often (but not always) harmless to identify *f* and *f*|_{*Y*}, and to omit the restrictor |_{*Y*}.

Conversely, it is sometimes possible to enlarge naturally the domain of a given function, for example by continuity or by analytic continuation.

Moreover, many functions are defined in such a way that it is difficult to specify their domain explicitly. For example, given a function *f*, it may be difficult to specify the domain of the function *g* defined by *g*(**x**) = 1/*f*(**x**). If *f* is a multivariate polynomial (which has **R**^{n} as its domain), it is even difficult to test whether the domain of *g* is also **R**^{n}. This is equivalent to testing whether the polynomial is everywhere nonzero, that is, always positive or always negative, which is the object of an active research area (see Positive polynomial).

The usual operations of arithmetic on the reals may be extended to real-valued functions of several real variables in the following way:

- For every real number *r*, the constant function (*x*_{1}, …, *x _{n}*) ↦ *r* is everywhere defined.
- For every real number *r* and every function *f*, the function *rf* : (*x*_{1}, …, *x _{n}*) ↦ *rf*(*x*_{1}, …, *x _{n}*) has the same domain as *f* (or is everywhere defined if *r* = 0).
- If *f* and *g* are two functions of respective domains *X* and *Y* such that *X* ∩ *Y* contains a nonempty open subset of **R**^{n}, then *f* + *g* and *f* ⋅ *g* are functions that have a domain containing *X* ∩ *Y*.

It follows that the functions of *n* variables that are everywhere defined and the functions of *n* variables that are defined in some neighbourhood of a given point both form commutative algebras over the reals (**R**-algebras). This is a prototypical example of a function space.
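The pointwise operations above can be sketched with Python closures; representing each function as a callable makes the definitions concrete (the helper names are illustrative):

```python
def fn_scale(r, f):
    """r*f has the same domain as f (everywhere defined if r == 0)."""
    return lambda *x: r * f(*x)

def fn_add(f, g):
    """f + g is defined (at least) on the intersection of the domains."""
    return lambda *x: f(*x) + g(*x)

def fn_mul(f, g):
    """f * g is likewise defined on the intersection of the domains."""
    return lambda *x: f(*x) * g(*x)

f = lambda x, y: x + y
g = lambda x, y: x * y
h = fn_add(fn_scale(2.0, f), g)   # h(x, y) = 2(x + y) + xy
print(h(1.0, 3.0))                # → 11.0
```

Here the domains are implicit (wherever `f` and `g` both evaluate without error), which mirrors the text's convention that the sum and product are defined at least on *X* ∩ *Y*.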

One may similarly define the quotient 1/*f* : (*x*_{1}, …, *x _{n}*) ↦ 1/*f*(*x*_{1}, …, *x _{n}*), which is a function only if the set of the points (*x*_{1}, …, *x _{n}*) in the domain of *f* such that *f*(*x*_{1}, …, *x _{n}*) ≠ 0 contains an open subset of **R**^{n}. This constraint implies that the above two algebras are not fields.

One can easily obtain a function in one real variable by giving a constant value to all but one of the variables. For example, if (*a*_{1}, …, *a _{n}*) is a point of the interior of the domain of the function *f*, one can fix the values of *x*_{2}, …, *x _{n}* to *a*_{2}, …, *a _{n}* respectively, to get the univariable function

*x* ↦ *f*(*x*, *a*_{2}, …, *a _{n}*),

whose domain contains an interval centered at *a*_{1}. This function may also be viewed as the restriction of the function *f* to the line defined by the equations *x _{i}* = *a _{i}* for *i* = 2, …, *n*.

Other univariable functions may be defined by restricting *f* to any line passing through (*a*_{1}, …, *a _{n}*). These are the functions

*x* ↦ *f*(*a*_{1} + *c*_{1}*x*, *a*_{2} + *c*_{2}*x*, …, *a _{n}* + *c _{n}x*),

where the *c _{i}* are real numbers that are not all zero.

In the next section, we will show that, if the multivariable function is continuous, so are all these univariable functions, but the converse is not necessarily true.

Until the second part of the 19th century, only continuous functions were considered by mathematicians. At that time, the notion of continuity was elaborated for functions of one or several real variables, a rather long time before the formal definition of a topological space and of a continuous map between topological spaces. As continuous functions of several real variables are ubiquitous in mathematics, it is worthwhile to define this notion without reference to the general notion of continuous maps between topological spaces.

To define continuity, it is useful to consider the distance function of **R**^{n}, which is an everywhere defined function of 2*n* real variables:

*d*(**x**, **y**) = *d*(*x*_{1}, …, *x _{n}*, *y*_{1}, …, *y _{n}*) = √((*x*_{1} − *y*_{1})² + ⋯ + (*x _{n}* − *y _{n}*)²).

A function *f* is **continuous** at a point **a** = (*a*_{1}, …, *a _{n}*) of the interior of its domain if, for every positive real number *ε*, there is a positive real number *δ* such that |*f*(**x**) − *f*(**a**)| < *ε* for all **x** such that *d*(**x**, **a**) < *δ*. A function is continuous if it is continuous at every point of its domain.

If a function is continuous at **a**, then all the univariate functions that are obtained by fixing all the variables except one at the corresponding value *a _{i}* are continuous at **a**. The converse is false: it can happen that all these univariate functions are continuous while *f* itself is not continuous at **a**. As an example, consider the function *f* such that *f*(0, 0) = 0, and is otherwise defined by

*f*(*x*, *y*) = *x*²*y*/(*x*⁴ + *y*²).

The functions *x* ↦ *f*(*x*, 0) and *y* ↦ *f*(0, *y*) are both constant and equal to zero, and are therefore continuous. The function *f* is not continuous at (0, 0), because, for any *ε* < 1/2, taking *y* = *x*² ≠ 0 gives *f*(*x*, *y*) = 1/2 > *ε*, however small |*x*| is. Although not continuous, this function has the further property that all the univariate functions obtained by restricting it to a line passing through (0, 0) are continuous. In fact, we have

*f*(*x*, *λx*) = *λx*/(*x*² + *λ*²)

for *λ* ≠ 0.
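The quoted values are consistent with the standard counterexample f(x, y) = x²y/(x⁴ + y²), f(0, 0) = 0 (an assumption here, since the defining display is missing in this copy: this formula reproduces both f = 1/2 along y = x² and f(x, λx) = λx/(x² + λ²)). A numerical check:

```python
def f(x: float, y: float) -> float:
    """f(x, y) = x^2*y / (x^4 + y^2), with f(0, 0) = 0: continuous along
    every line through the origin, yet not continuous at the origin."""
    if x == 0.0 and y == 0.0:
        return 0.0
    return (x * x * y) / (x ** 4 + y * y)

# Along any line y = lam*x, the values tend to 0 as x -> 0 ...
print(f(1e-6, 3.0e-6))    # tiny, close to 0
# ... but along the parabola y = x^2, f is identically 1/2.
print(f(1e-6, 1e-12))     # ≈ 0.5
```

So arbitrarily close to the origin the function takes the value 1/2, which is why no choice of *δ* can work for *ε* < 1/2.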

The limit at a point of a real-valued function of several real variables is defined as follows.^{[1]} Let **a** = (*a*_{1}, *a*_{2}, …, *a _{n}*) be a point in the topological closure of the domain *X* of the function *f*. The function *f* has a limit *L* when **x** tends toward **a**, denoted

*L* = lim_{**x** → **a**} *f*(**x**),

if the following condition is satisfied:
For every positive real number *ε* > 0, there is a positive real number *δ* > 0 such that

|*f*(**x**) − *L*| < *ε*

for all **x** in the domain such that

*d*(**x**, **a**) < *δ*.

If the limit exists, it is unique. If **a** is in the interior of the domain, the limit exists if and only if the function is continuous at **a**, and in this case the limit equals *f*(**a**).

When **a** is in the boundary of the domain of *f*, and if *f* has a limit at **a**, the latter limit allows one to extend the domain of *f* by continuity to **a**.

A symmetric function is a function *f* that is unchanged when two variables *x _{i}* and *x _{j}* are interchanged:

*f*(…, *x _{i}*, …, *x _{j}*, …) = *f*(…, *x _{j}*, …, *x _{i}*, …),

where *i* and *j* are each one of 1, 2, …, *n*. For example,

*f*(*x*, *y*, *z*, *t*) = *t*² − *x*² − *y*² − *z*²

is symmetric in *x*, *y*, *z*, since interchanging any pair of *x*, *y*, *z* leaves *f* unchanged, but it is not symmetric in all of *x*, *y*, *z*, *t*, since interchanging *t* with *x* or *y* or *z* gives a different function.

Suppose the functions

*ξ*_{1} = *ξ*_{1}(*x*_{1}, *x*_{2}, …, *x _{n}*), *ξ*_{2} = *ξ*_{2}(*x*_{1}, *x*_{2}, …, *x _{n}*), …, *ξ _{m}* = *ξ _{m}*(*x*_{1}, *x*_{2}, …, *x _{n}*),

or more compactly **ξ** = **ξ**(**x**), are all defined on a domain *X*. Then a function *ζ* of the functions **ξ**(**x**),

*ζ* = *f*(*ξ*_{1}, *ξ*_{2}, …, *ξ _{m}*) = *f*(**ξ**(**x**)),

is a **function composition** defined on *X*,^{[2]} in other terms the mapping

**x** ↦ *f*(**ξ**(**x**)).

Note that the numbers *m* and *n* do not need to be equal.

For example, the function

defined everywhere on **R**^{2} can be rewritten by introducing

which is also everywhere defined in **R**^{3} to obtain

Function composition can be used to simplify functions, which is useful for carrying out multiple integrals and solving partial differential equations.
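A generic sketch of such a composition, with *m* = 2 inner functions of *n* = 3 variables (all function names and formulas below are illustrative, not the lost example above):

```python
import math

# Inner functions xi_1, xi_2 : R^3 -> R
xi1 = lambda x, y, z: x + y
xi2 = lambda x, y, z: y * z

# Outer function f : R^2 -> R
f = lambda u, v: math.hypot(u, v)

def zeta(x, y, z):
    """The composition zeta(x, y, z) = f(xi1(x, y, z), xi2(x, y, z)),
    defined on all of R^3; note that m = 2 and n = 3 are not equal."""
    return f(xi1(x, y, z), xi2(x, y, z))

print(zeta(1.0, 2.0, 2.0))   # hypot(3, 4) → 5.0
```

Substituting intermediate variables like `xi1` and `xi2` is exactly the simplification step used when changing variables in multiple integrals or partial differential equations.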

Elementary calculus is the calculus of real-valued functions of one real variable, and the principal ideas of differentiation and integration of such functions can be extended to functions of more than one real variable; this extension is multivariable calculus.

Main article: Partial derivative

Partial derivatives can be defined with respect to each variable:

∂*f*/∂*x*_{1}, ∂*f*/∂*x*_{2}, …, ∂*f*/∂*x _{n}*.

Partial derivatives themselves are functions, each of which represents the rate of change of *f* parallel to one of the *x*_{1}, *x*_{2}, …, *x _{n}* axes at all points in the domain (if the derivatives exist and are continuous—see also below). A first derivative is positive if the function increases along the direction of the relevant axis, negative if it decreases, and zero if there is no increase or decrease. Evaluating a partial derivative at a particular point in the domain gives the rate of change of the function at that point in the direction parallel to a particular axis, a real number.
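A partial derivative can be approximated numerically by a central finite difference in the corresponding coordinate; the sample function and step size below are arbitrary choices:

```python
def partial(f, i, x, h=1e-6):
    """Central-difference approximation of the i-th partial derivative
    of f at the point x (a tuple of n real numbers)."""
    xp = list(x); xm = list(x)
    xp[i] += h
    xm[i] -= h
    return (f(*xp) - f(*xm)) / (2.0 * h)

f = lambda x, y: x * x * y + y          # df/dx = 2xy, df/dy = x^2 + 1
print(partial(f, 0, (3.0, 2.0)))        # ≈ 12
print(partial(f, 1, (3.0, 2.0)))        # ≈ 10
```

Evaluating `partial(f, i, a)` at a specific point `a` returns a single real number, the rate of change of *f* parallel to the *x _{i}* axis at that point, exactly as described above.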

For real-valued functions of a real variable, *y* = *f*(*x*), the ordinary derivative d*y*/d*x* is geometrically the slope of the tangent line to the curve *y* = *f*(*x*) at each point in the domain. Partial derivatives extend this idea to the tangent hyperplanes of the graph of the function.

The second order partial derivatives can be calculated for every pair of variables:

∂²*f*/(∂*x _{i}* ∂*x _{j}*), for *i*, *j* = 1, 2, …, *n*.

Geometrically, they are related to the local curvature of the function's graph at all points in the domain. At any point where the function is well-defined, it could be increasing along some axes, decreasing along others, and neither increasing nor decreasing along yet others.

This leads to a variety of possible stationary points: global or local maxima, global or local minima, and saddle points—the multidimensional analogue of inflection points for real functions of one real variable. The Hessian matrix is a matrix of all the second order partial derivatives, which are used to investigate the stationary points of the function, important for mathematical optimization.

In general, partial derivatives of higher order *p* take the form

∂^{*p*}*f*/(∂*x*_{1}^{*p*_{1}} ∂*x*_{2}^{*p*_{2}} ⋯ ∂*x _{n}*^{*p _{n}*}),

where *p*_{1}, *p*_{2}, …, *p _{n}* are each integers between 0 and *p* such that *p*_{1} + *p*_{2} + ⋯ + *p _{n}* = *p*.

The number of possible partial derivatives increases with *p*, although some mixed partial derivatives (those with respect to more than one variable) are superfluous, because of the symmetry of second order partial derivatives. This reduces the number of partial derivatives to calculate for some *p*.
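When mixed partials commute, an order-*p* derivative is determined only by how many times each of the *n* variables is differentiated, so the number of distinct derivatives is the multiset coefficient C(*n* + *p* − 1, *p*). A sketch of this count:

```python
from math import comb

def num_distinct_partials(n: int, p: int) -> int:
    """Number of distinct order-p partial derivatives in n variables,
    assuming mixed partials commute (symmetry of second derivatives):
    choose p differentiations from n variables, with repetition."""
    return comb(n + p - 1, p)

print(num_distinct_partials(2, 2))  # → 3: f_xx, f_xy (= f_yx), f_yy
print(num_distinct_partials(3, 2))  # → 6
```

Without the symmetry there would be *n*^{*p*} formally different derivatives (4 and 9 in these two cases), which shows how much calculation the symmetry saves.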

Main article: Differentiable function

A function *f*(**x**) is **differentiable** at a point **a** if there exist *n* real numbers *b*_{1}, *b*_{2}, …, *b _{n}* such that

*f*(**x**) = *f*(**a**) + *b*_{1}(*x*_{1} − *a*_{1}) + ⋯ + *b _{n}*(*x _{n}* − *a _{n}*) + ‖**x** − **a**‖ *α*(**x**),

where *α*(**x**) → 0 as **x** → **a**. This means that if *f* is differentiable at a point **a**, then

∂*f*/∂*x _{i}*(**a**) = *b _{i}*

for *i* = 1, 2, …, *n*, which can be found from the definitions of the individual partial derivatives, so the partial derivatives of *f* exist.

Assuming an *n*-dimensional analogue of a rectangular Cartesian coordinate system, these partial derivatives can be used to form a vectorial linear differential operator, called the gradient (also known as "nabla" or "del") in this coordinate system:

∇ = (∂/∂*x*_{1}, ∂/∂*x*_{2}, …, ∂/∂*x _{n}*),

used extensively in vector calculus, because it is useful for constructing other differential operators and compactly formulating theorems in vector calculus.

Then substituting the gradient ∇*f* (evaluated at **x** = **a**) with a slight rearrangement gives:

*f*(**x**) ≈ *f*(**a**) + ∇*f*(**a**) ⋅ (**x** − **a**),

where ⋅ denotes the dot product. This equation represents the best linear approximation of the function *f* at all points **x** within a neighborhood of **a**. For infinitesimal changes in *f* and **x** as **x** → **a**:

d*f* = ∇*f*(**a**) ⋅ d**x** = (∂*f*/∂*x*_{1}) d*x*_{1} + ⋯ + (∂*f*/∂*x _{n}*) d*x _{n}*,

which is defined as the **total differential**, or simply **differential**, of *f*, at **a**. This expression corresponds to the total infinitesimal change of *f*, obtained by adding all the infinitesimal changes of *f* in all the *x _{i}* directions.

Geometrically, ∇*f* is perpendicular to the level sets of *f*, given by *f*(**x**) = *c* for some constant *c*. Along a level set the differential vanishes:

d*f* = ∇*f* ⋅ d**x** = 0,

in which d**x** is an infinitesimal change in **x** along the level set; since d*f* is zero there, ∇*f* must be perpendicular to d**x**.
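The gradient and the resulting best linear approximation can be checked numerically with finite differences (the sample function is an arbitrary choice):

```python
def gradient(f, a, h=1e-6):
    """Finite-difference approximation of grad f at the point a."""
    g = []
    for i in range(len(a)):
        ap = list(a); am = list(a)
        ap[i] += h
        am[i] -= h
        g.append((f(*ap) - f(*am)) / (2.0 * h))
    return g

def linear_approx(f, a, x):
    """Best linear approximation f(a) + grad f(a) . (x - a)."""
    g = gradient(f, a)
    return f(*a) + sum(gi * (xi - ai) for gi, xi, ai in zip(g, x, a))

f = lambda x, y: x * x + 3.0 * y
a = (1.0, 2.0)
print(linear_approx(f, a, (1.1, 2.1)))  # ≈ 7.5, close to f(1.1, 2.1) = 7.51
```

As **x** approaches **a** the error of the approximation shrinks faster than ‖**x** − **a**‖, which is exactly the defining property of differentiability above.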

In arbitrary curvilinear coordinate systems in *n* dimensions, the explicit expression for the gradient would not be so simple: there would be scale factors in terms of the metric tensor for that coordinate system. For the case used throughout this article, the metric is just the Kronecker delta, and the scale factors are all 1.

If all first order partial derivatives evaluated at a point **a** in the domain,

∂*f*/∂*x*_{1}(**a**), ∂*f*/∂*x*_{2}(**a**), …, ∂*f*/∂*x _{n}*(**a**),

exist and are continuous for all **a** in the domain, then *f* has differentiability class *C*^{1}. In general, if all partial derivatives of order *p*,

∂^{*p*}*f*/(∂*x*_{1}^{*p*_{1}} ∂*x*_{2}^{*p*_{2}} ⋯ ∂*x _{n}*^{*p _{n}*}),

exist and are continuous, where *p*_{1} + *p*_{2} + ⋯ + *p _{n}* = *p*, for all **a** in the domain, then *f* has differentiability class *C ^{p}*. If *f* is of differentiability class *C*^{∞}, then *f* has continuous partial derivatives of all orders and is called smooth.

Main article: Multiple integration

Definite integration can be extended to multiple integration over the several real variables with the notation

∫_{*R _{n}*} ⋯ ∫_{*R*_{1}} *f*(*x*_{1}, *x*_{2}, …, *x _{n}*) d*x*_{1} ⋯ d*x _{n}*,

where each region *R*_{1}, *R*_{2}, …, *R _{n}* is a subset of or all of the real line,

*R _{i}* ⊆ **R**,

and their Cartesian product gives the region to integrate over as a single set,

*R* = *R*_{1} × *R*_{2} × ⋯ × *R _{n}*,

an *n*-dimensional hypervolume. When evaluated, a definite integral is a real number if the integral converges in the region *R* of integration (the result of a definite integral may diverge to infinity for a given region, in such cases the integral remains ill-defined). The variables are treated as "dummy" or "bound" variables which are substituted for numbers in the process of integration.
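A double integral over a Cartesian product region *R* = *R*_{1} × *R*_{2} can be approximated by a midpoint Riemann sum, as a sketch (the grid resolution is an arbitrary choice):

```python
def double_integral(f, x_range, y_range, n=200):
    """Midpoint Riemann sum for the integral of f over
    R = [x0, x1] x [y0, y1], using an n-by-n grid of cells."""
    (x0, x1), (y0, y1) = x_range, y_range
    hx = (x1 - x0) / n
    hy = (y1 - y0) / n
    total = 0.0
    for i in range(n):
        x = x0 + (i + 0.5) * hx
        for j in range(n):
            y = y0 + (j + 0.5) * hy
            total += f(x, y)
    return total * hx * hy

# Integral of x*y over [0, 1] x [0, 1] is exactly 1/4.
print(double_integral(lambda x, y: x * y, (0.0, 1.0), (0.0, 1.0)))
```

The variables `x` and `y` play the role of the "dummy" or "bound" variables described above: they exist only inside the summation and are replaced by sample points.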

The integral of a real-valued function of a real variable *y* = *f*(*x*) with respect to *x* has a geometric interpretation as the area bounded by the curve *y* = *f*(*x*) and the *x*-axis. Multiple integrals extend the dimensionality of this concept: assuming an *n*-dimensional analogue of a rectangular Cartesian coordinate system, the above definite integral has the geometric interpretation as the *n*-dimensional hypervolume bounded by *f*(**x**) and the *x*_{1}, *x*_{2}, …, *x _{n}* axes, which may be positive, negative, or zero, depending on the function being integrated (if the integral is convergent).

While bounded hypervolume is a useful insight, the more important idea of definite integrals is that they represent total quantities within space. This has significance in applied mathematics and physics: if *f* is some scalar density field and **x** are the position vector coordinates, i.e. *f* is some scalar quantity per unit *n*-dimensional hypervolume, then integrating over the region *R* gives the total amount of the quantity in *R*.

With the definitions of multiple integration and partial derivatives, key theorems can be formulated, including the fundamental theorem of calculus in several real variables (namely Stokes' theorem), integration by parts in several real variables, the symmetry of higher partial derivatives and Taylor's theorem for multivariable functions. Evaluating a mixture of integrals and partial derivatives can be done by using the rule of differentiation under the integral sign.

One can collect a number of functions each of several real variables, say

*y*_{1} = *f*_{1}(*x*_{1}, *x*_{2}, …, *x _{n}*), *y*_{2} = *f*_{2}(*x*_{1}, *x*_{2}, …, *x _{n}*), …, *y _{m}* = *f _{m}*(*x*_{1}, *x*_{2}, …, *x _{n}*),

into an *m*-tuple, or sometimes as a column vector or row vector, respectively:

all treated on the same footing as an *m*-component vector field, using whichever form is convenient. All the above notations have a common compact notation **y** = **f**(**x**).

A real-valued implicit function of several real variables is not written in the form "*y* = *f*(…)". Instead, the mapping is from the space **R**^{*n* + 1} to the zero element in **R** (just the ordinary zero 0), and

*ϕ*(*x*_{1}, *x*_{2}, …, *x _{n}*, *y*) = 0

is an equation in all the variables. Implicit functions are a more general way to represent functions, since if

*y* = *f*(*x*_{1}, *x*_{2}, …, *x _{n}*),

then we can always define

*ϕ*(*x*_{1}, *x*_{2}, …, *x _{n}*, *y*) = *y* − *f*(*x*_{1}, *x*_{2}, …, *x _{n}*) = 0,

but the converse is not always possible, i.e. not all implicit functions have an explicit form.

For example, using interval notation, let

*ϕ*(*x*, *y*, *z*) = *x*²/*a*² + *y*²/*b*² + *z*²/*c*² − 1 = 0, with domain *X* = [−*a*, *a*] × [−*b*, *b*] × [−*c*, *c*].

Choosing a 3-dimensional (3D) Cartesian coordinate system, this function describes the surface of a 3D ellipsoid centered at the origin (*x*, *y*, *z*) = (0, 0, 0) with constant semi-major axes *a*, *b*, *c*, along the positive *x*, *y* and *z* axes respectively. In the case *a* = *b* = *c* = *r*, we have a sphere of radius *r* centered at the origin. Other quadric surface examples which can be described similarly include the hyperboloid and paraboloid; more generally, so can any 2D surface in 3D Euclidean space. The above example can be solved for *x*, *y* or *z*; however it is much tidier to write it in an implicit form.

For a more sophisticated example:

for non-zero real constants *A*, *B*, *C*, *ω*, this function is well-defined for all (*t*, *x*, *y*, *z*), but it cannot be solved explicitly for these variables and written as "*t* =", "*x* =", etc.

The **implicit function theorem** of more than two real variables deals with the continuity and differentiability of the function, as follows.^{[4]} Let *ϕ*(*x*_{1}, *x*_{2}, …, *x _{n}*, *y*) be a continuous function with continuous first order partial derivatives, and let *ϕ* evaluated at a point (**a**, *b*) = (*a*_{1}, *a*_{2}, …, *a _{n}*, *b*) be zero:

*ϕ*(**a**, *b*) = 0,

and let the first partial derivative of *ϕ* with respect to *y* evaluated at (**a**, *b*) be non-zero:

∂*ϕ*/∂*y* (**a**, *b*) ≠ 0.

Then, there is an interval [*y*_{1}, *y*_{2}] containing *b*, and a region *R* containing (**a**, *b*), such that for every **x** in *R* there is exactly one value of *y* in [*y*_{1}, *y*_{2}] satisfying *ϕ*(**x**, *y*) = 0, and *y* is a continuous function of **x** so that *ϕ*(**x**, *y*(**x**)) = 0. The total differentials of the functions are:

d*y* = (∂*y*/∂*x*_{1}) d*x*_{1} + ⋯ + (∂*y*/∂*x _{n}*) d*x _{n}*,

d*ϕ* = (∂*ϕ*/∂*x*_{1}) d*x*_{1} + ⋯ + (∂*ϕ*/∂*x _{n}*) d*x _{n}* + (∂*ϕ*/∂*y*) d*y* = 0.

Substituting d*y* into the latter differential and equating coefficients of the differentials gives the first order partial derivatives of *y* with respect to *x _{i}* in terms of the derivatives of the original function, each as a solution of the linear equation

(∂*ϕ*/∂*x _{i}*) + (∂*ϕ*/∂*y*)(∂*y*/∂*x _{i}*) = 0

for *i* = 1, 2, …, *n*.
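Solving that linear equation gives ∂*y*/∂*x _{i}* = −(∂*ϕ*/∂*x _{i}*)/(∂*ϕ*/∂*y*). A numerical sanity check on the unit circle ϕ(x, y) = x² + y² − 1, whose upper branch is the explicit function y(x) = √(1 − x²) with dy/dx = −x/y:

```python
import math

def phi(x, y):
    """phi(x, y) = x^2 + y^2 - 1; its zero level set is the unit circle."""
    return x * x + y * y - 1.0

def implicit_dy_dx(x, y, h=1e-6):
    """dy/dx = -(dphi/dx)/(dphi/dy), from the linear equation
    (dphi/dx) + (dphi/dy)*(dy/dx) = 0, using central differences."""
    phi_x = (phi(x + h, y) - phi(x - h, y)) / (2.0 * h)
    phi_y = (phi(x, y + h) - phi(x, y - h)) / (2.0 * h)
    return -phi_x / phi_y

x = 0.6
y = math.sqrt(1.0 - x * x)       # the explicit upper branch: y = 0.8
print(implicit_dy_dx(x, y))      # ≈ -x/y = -0.75
```

Note the theorem's hypothesis at work: at (1, 0) we have ∂*ϕ*/∂*y* = 0, and indeed no function *y*(*x*) exists around that point on the circle.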

A complex-valued function of several real variables may be defined by relaxing, in the definition of the real-valued functions, the restriction of the codomain to the real numbers, and allowing complex values.

If *f*(*x*_{1}, …, *x _{n}*) is such a complex valued function, it may be decomposed as

*f*(*x*_{1}, …, *x _{n}*) = *g*(*x*_{1}, …, *x _{n}*) + *ih*(*x*_{1}, …, *x _{n}*),

where *g* and *h* are real-valued functions. In other words, the study of the complex valued functions reduces easily to the study of the pairs of real valued functions.

This reduction works for the general properties. However, for an explicitly given function, such as:

the computation of the real and the imaginary part may be difficult.

Multivariable functions of real variables arise inevitably in engineering and physics, because observable physical quantities are real numbers (with associated units and dimensions), and any one physical quantity will generally depend on a number of other quantities.

Examples in continuum mechanics include the local mass density *ρ* of a mass distribution, a scalar field which depends on the spatial position coordinates (here Cartesian to exemplify), **r** = (*x*, *y*, *z*), and time *t*:

*ρ* = *ρ*(**r**, *t*) = *ρ*(*x*, *y*, *z*, *t*).

Similarly for electric charge density for electrically charged objects, and numerous other scalar potential fields.

Another example is the velocity field, a vector field, which has components of velocity **v** = (*v _{x}*, *v _{y}*, *v _{z}*) that are each multivariable functions of spatial coordinates and time:

**v** = **v**(**r**, *t*) = **v**(*x*, *y*, *z*, *t*).

Similarly for other physical vector fields such as electric fields and magnetic fields, and vector potential fields.

Another important example is the equation of state in thermodynamics, an equation relating pressure *P*, temperature *T*, and volume *V* of a fluid. In general it has an implicit form:

*ϕ*(*P*, *V*, *T*) = 0.

The simplest example is the ideal gas law:

*PV* = *nRT*,

where *n* is the number of moles, constant for a fixed amount of substance, and *R* the gas constant. Much more complicated equations of state have been empirically derived, but they all have the above implicit form.
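The explicit and implicit forms of the ideal gas law can be placed side by side (SI units; the numeric values below are illustrative):

```python
R = 8.314  # gas constant, J/(mol K)

def pressure(n_moles: float, T: float, V: float) -> float:
    """Explicit form P = nRT/V of the ideal gas law."""
    return n_moles * R * T / V

def phi(P: float, V: float, T: float, n_moles: float) -> float:
    """Implicit form phi(P, V, T) = PV - nRT = 0 of the same equation."""
    return P * V - n_moles * R * T

P = pressure(1.0, 300.0, 0.025)   # 1 mol at 300 K in a 25 L volume
print(P)                          # ≈ 99768 Pa, about one atmosphere
print(phi(P, 0.025, 300.0, 1.0))  # ≈ 0: the state satisfies phi = 0
```

Real equations of state are far more complicated, but they are used the same way: a state (P, V, T) is admissible exactly when it makes the implicit function vanish.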

Real-valued functions of several real variables appear pervasively in economics. In the underpinnings of consumer theory, utility is expressed as a function of the amounts of various goods consumed, each amount being an argument of the utility function. The result of maximizing utility is a set of demand functions, each expressing the amount demanded of a particular good as a function of the prices of the various goods and of income or wealth. In producer theory, a firm is usually assumed to maximize profit as a function of the quantities of various goods produced and of the quantities of various factors of production employed. The result of the optimization is a set of demand functions for the various factors of production and a set of supply functions for the various products; each of these functions has as its arguments the prices of the goods and of the factors of production.

Some "physical quantities" may actually be complex valued, such as complex impedance, complex permittivity, complex permeability, and the complex refractive index. These are also functions of real variables, such as frequency or time, as well as temperature.

In two-dimensional fluid mechanics, specifically in the theory of the potential flows used to describe fluid motion in 2D, the complex potential

*F*(*x*, *y*) = *φ*(*x*, *y*) + *iψ*(*x*, *y*)

is a complex valued function of the two spatial coordinates *x* and *y*, and other *real* variables associated with the system. The real part is the velocity potential and the imaginary part is the stream function.

The spherical harmonics occur in physics and engineering as the solution to Laplace's equation, as well as the eigenfunctions of the *z*-component angular momentum operator, which are complex-valued functions of the real-valued spherical polar angles:

*Y _{ℓ}^{m}* = *Y _{ℓ}^{m}*(*θ*, *φ*).

In quantum mechanics, the wavefunction is necessarily complex-valued, but it is a function of *real* spatial coordinates (or momentum components), as well as time *t*:

Ψ = Ψ(**r**, *t*) = Ψ(*x*, *y*, *z*, *t*), Φ = Φ(**p**, *t*) = Φ(*p _{x}*, *p _{y}*, *p _{z}*, *t*),

where each is related to the other by a Fourier transform.