As applied in the field of computer vision, graph cut optimization can be employed to efficiently solve a wide variety of low-level computer vision problems (early vision[1]), such as image smoothing, the stereo correspondence problem, image segmentation, object co-segmentation, and many other computer vision problems that can be formulated in terms of energy minimization. Many of these energy minimization problems can be approximated by solving a maximum flow problem in a graph[2] (and thus, by the max-flow min-cut theorem, define a minimal cut of the graph). Under most formulations of such problems in computer vision, the minimum energy solution corresponds to the maximum a posteriori estimate of a solution. Although many computer vision algorithms involve cutting a graph (e.g., normalized cuts), the term "graph cuts" is applied specifically to those models which employ a max-flow/min-cut optimization (other graph cutting algorithms may be considered as graph partitioning algorithms).
"Binary" problems (such as denoising a binary image) can be solved exactly using this approach; problems where pixels can be labeled with more than two different labels (such as stereo correspondence, or denoising of a grayscale image) cannot be solved exactly, but solutions produced are usually near the global optimum.
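To make the binary case concrete, the sketch below denoises a tiny binary image exactly by building the standard two-terminal graph (a disagreement cost lam per pixel as terminal edges, a smoothness cost beta between 4-connected neighbours) and computing a minimum s-t cut. All names and weights are illustrative; the max-flow routine is a generic augmenting-path (Edmonds-Karp) solver rather than a vision-specialized one.

```python
from collections import defaultdict, deque

def max_flow_min_cut(cap, s, t):
    """Edmonds-Karp max-flow on a residual graph given as cap[u][v].

    Returns (flow value, set of nodes on the source side of the min cut).
    The capacity map is modified in place.
    """
    flow = 0.0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:                      # no path left: cut found
            return flow, set(parent)
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)   # bottleneck capacity
        for u, v in path:
            cap[u][v] -= push
            cap[v][u] += push                    # grow the reverse edge
        flow += push

def denoise(y, lam, beta):
    """Exact MAP denoising of the binary image y by a single min cut.

    Energy: lam per pixel that disagrees with y, plus beta per pair of
    4-connected neighbours with different labels.
    """
    h, w = len(y), len(y[0])
    cap = defaultdict(lambda: defaultdict(float))
    for i in range(h):
        for j in range(w):
            p = (i, j)
            # Unary term as a terminal edge: cutting it costs lam.
            if y[i][j] == 1:
                cap['s'][p] += lam
            else:
                cap[p]['t'] += lam
            # Pairwise term as symmetric edges between neighbours.
            for q in ((i + 1, j), (i, j + 1)):
                if q[0] < h and q[1] < w:
                    cap[p][q] += beta
                    cap[q][p] += beta
    _, source_side = max_flow_min_cut(cap, 's', 't')
    return [[1 if (i, j) in source_side else 0 for j in range(w)]
            for i in range(h)]

noisy = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]
clean = denoise(noisy, lam=1.0, beta=1.0)  # strong smoothing restores the centre pixel
```

With a strong smoothness weight the isolated flipped pixel is restored; with a weak one (e.g. beta = 0.1) the observation is cheaper to keep and is returned unchanged. The labeling read off the cut minimizes the energy exactly, which is the point of the binary case.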
The theory of graph cuts used as an optimization method was first applied in computer vision in the seminal paper by Greig, Porteous and Seheult[3] of Durham University. Allan Seheult and Bruce Porteous were members of Durham's lauded statistics group of the time, led by Julian Besag and Peter Green, with the optimisation expert Margaret Greig notable as the first ever female member of staff of the Durham Mathematical Sciences Department.
In the Bayesian statistical context of smoothing noisy (or corrupted) images, they showed how the maximum a posteriori estimate of a binary image can be obtained exactly by maximizing the flow through an associated image network, involving the introduction of a source and sink. The problem was therefore shown to be efficiently solvable. Prior to this result, approximate techniques such as simulated annealing (as proposed by the Geman brothers),[4] or iterated conditional modes (a type of greedy algorithm suggested by Julian Besag)[5] were used to solve such image smoothing problems.
Although the general k-colour problem remains unsolved for k > 2, the approach of Greig, Porteous and Seheult[3] has turned out[6][7] to have wide applicability in general computer vision problems. Greig, Porteous and Seheult's approaches are often applied iteratively to a sequence of binary problems, usually yielding near-optimal solutions.
In 2011, C. Couprie et al.[8] proposed a general image segmentation framework, called the "Power Watershed", that minimized a real-valued indicator function from [0,1] over a graph, constrained by user seeds (or unary terms) set to 0 or 1, in which the minimization of the indicator function over the graph is optimized with respect to an exponent p. When p = 1, the Power Watershed is optimized by graph cuts, when p = 0 the Power Watershed is optimized by shortest paths, p = 2 is optimized by the random walker algorithm and p = ∞ is optimized by the watershed algorithm. In this way, the Power Watershed may be viewed as a generalization of graph cuts that provides a straightforward connection with other energy optimization segmentation/clustering algorithms.
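The random walker member of this family reduces to solving a sparse linear system: each unseeded node's potential is the weighted average of its neighbours' potentials, and thresholding the solution at 0.5 yields the segmentation. The following sketch (graph, weights, and function name all illustrative, not from the source) solves that system by Gauss-Seidel iteration on a 5-node chain:

```python
def random_walker_1d(weights, seeds, iters=500):
    """Solve the combinatorial Dirichlet problem on a chain graph.

    weights[i] is the edge weight between nodes i and i+1; seeds maps a
    node index to its fixed potential (a user seed, 0 or 1). Unseeded
    potentials converge to the weighted average of their neighbours
    (Gauss-Seidel sweeps over the graph Laplacian system).
    """
    n = len(weights) + 1
    x = [seeds.get(i, 0.5) for i in range(n)]
    for _ in range(iters):
        for i in range(n):
            if i in seeds:
                continue
            num = den = 0.0
            if i > 0:                        # left neighbour
                num += weights[i - 1] * x[i - 1]
                den += weights[i - 1]
            if i < n - 1:                    # right neighbour
                num += weights[i] * x[i + 1]
                den += weights[i]
            x[i] = num / den
    return x

# Chain 0-1-2-3-4 with a weak edge between nodes 2 and 3; seeds fix the
# endpoints. The segmentation boundary falls across the weak edge.
potentials = random_walker_1d([1.0, 1.0, 0.2, 1.0], {0: 1.0, 4: 0.0})
labels = [1 if v > 0.5 else 0 for v in potentials]  # [1, 1, 1, 0, 0]
```

Note the contrast with graph cuts: here the labeling comes from thresholding a real-valued solution of a linear system rather than from a discrete min cut, which is exactly the kind of connection the Power Watershed framework makes explicit.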
where the energy E is composed of two different models (E_color and E_coherence):
E_color — unary term describing the likelihood of each color.
E_coherence — binary term describing the coherence between neighborhood pixels.
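Read directly, this model is a function that scores any candidate labeling: sum the unary colour costs over pixels and add a penalty for each pair of neighbouring pixels that disagree. A minimal sketch, with all names and numbers illustrative rather than taken from the source:

```python
def segmentation_energy(labels, color_cost, neighbor_pairs, coherence_penalty):
    """Total energy of a binary labeling.

    labels: dict pixel -> 0 (background) or 1 (foreground)
    color_cost: dict (pixel, label) -> unary cost, e.g. a negative
        log-likelihood of the pixel's colour under that label's model
    neighbor_pairs: list of (p, q) pixel pairs, e.g. the 4-connected grid
    coherence_penalty: cost per neighbouring pair with different labels
    """
    e_color = sum(color_cost[(p, labels[p])] for p in labels)
    e_coherence = sum(coherence_penalty
                      for p, q in neighbor_pairs
                      if labels[p] != labels[q])
    return e_color + e_coherence

# Three pixels in a row; pixel 0 looks like foreground, pixel 2 like background.
costs = {(0, 1): 0.2, (0, 0): 1.5,
         (1, 1): 1.0, (1, 0): 1.0,
         (2, 1): 1.6, (2, 0): 0.3}
e = segmentation_energy({0: 1, 1: 1, 2: 0}, costs, [(0, 1), (1, 2)], 0.5)
# E_color = 0.2 + 1.0 + 0.3 = 1.5, E_coherence = 0.5 (one label change)
```

The graph cut does not evaluate this function over all labelings; it encodes the unary terms as terminal edge capacities and the coherence terms as neighbour edge capacities, so the minimum cut cost equals the minimum energy.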
Graph cuts methods have become popular alternatives to the level set-based approaches for optimizing the location of a contour (see[9] for an extensive comparison). However, graph cut approaches have been criticized in the literature for several issues:
See also: Graph cut optimization
The Boykov-Kolmogorov algorithm[17] is an efficient way to compute the max-flow for computer vision-related graphs.
The Sim Cut algorithm[18] approximates the minimum graph cut. The algorithm implements a solution by simulation of an electrical network. This is the approach suggested by Cederbaum's maximum flow theorem.[19][20] Acceleration of the algorithm is possible through parallel computing.