In computing, an arithmetic logic unit (ALU) is a combinational digital circuit that performs arithmetic and bitwise operations on integer binary numbers.^{[1]}^{[2]} This is in contrast to a floating-point unit (FPU), which operates on floating-point numbers. It is a fundamental building block of many types of computing circuits, including the central processing unit (CPU) of computers, FPUs, and graphics processing units (GPUs).^{[3]}
The inputs to an ALU are the data to be operated on, called operands, and a code indicating the operation to be performed; the ALU's output is the result of the performed operation. In many designs, the ALU also has status inputs or outputs, or both, which convey information about a previous operation or the current operation, respectively, between the ALU and external status registers.
An ALU has a variety of input and output nets, which are the electrical conductors used to convey digital signals between the ALU and external circuitry. When an ALU is operating, external circuits apply signals to the ALU inputs and, in response, the ALU produces and conveys signals to external circuitry via its outputs.
A basic ALU has three parallel data buses consisting of two input operands (A and B) and a result output (Y). Each data bus is a group of signals that conveys one binary integer number. Typically, the A, B and Y bus widths (the number of signals comprising each bus) are identical and match the native word size of the external circuitry (e.g., the encapsulating CPU or other processor).
The opcode input is a parallel bus that conveys to the ALU an operation selection code, which is an enumerated value that specifies the desired arithmetic or logic operation to be performed by the ALU. The opcode size (its bus width) determines the maximum number of distinct operations the ALU can perform; for example, a four-bit opcode can specify up to sixteen different ALU operations. Generally, an ALU opcode is not the same as a machine language opcode, though in some cases it may be directly encoded as a bit field within a machine language opcode.
The status outputs are various individual signals that convey supplemental information about the result of the current ALU operation. General-purpose ALUs commonly have status signals such as:

Carry-out, which conveys the carry resulting from an addition operation, the borrow resulting from a subtraction operation, or the bit shifted out of a single-bit shift operation.
Zero, which indicates all bits of Y are logic zero.
Negative, which indicates the result of an arithmetic operation is negative.
Overflow, which indicates the result of an arithmetic operation has exceeded the numeric range of Y.
Parity, which indicates whether an even or odd number of bits in Y are logic one.
Upon completion of each ALU operation, the status output signals are usually stored in external registers to make them available for future ALU operations (e.g., to implement multiple-precision arithmetic) or for controlling conditional branching. The collection of bit registers that store the status outputs is often treated as a single, multi-bit register, which is referred to as the "status register" or "condition code register".
The status inputs allow additional information to be made available to the ALU when performing an operation. Typically, this is a single "carry-in" bit that is the stored carry-out from a previous ALU operation.
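The relationship between an ALU operation and its status signals can be illustrated with a minimal software sketch. The following Python model (the function name and flag names are illustrative, not from any particular design) performs an 8-bit add with carry-in and derives the common status outputs described above:

```python
def alu_add(a: int, b: int, carry_in: int = 0) -> tuple[int, dict]:
    """8-bit add with carry-in; returns (result, status flags)."""
    total = a + b + carry_in
    result = total & 0xFF                  # truncate to the 8-bit word size
    status = {
        "carry": int(total > 0xFF),        # carry-out: unsigned overflow
        "zero": int(result == 0),          # all bits of the result are zero
        "negative": int(result >> 7),      # sign bit of the result
        # signed overflow: both operands share a sign the result lacks
        "overflow": int(((a ^ result) & (b ^ result) & 0x80) != 0),
    }
    return result, status

result, status = alu_add(0x80, 0x80)       # (-128) + (-128) in two's complement
# result == 0x00 with carry, zero and overflow flags all set
```

Note that the carry flag reflects unsigned overflow while the overflow flag reflects signed (two's complement) overflow; a real status register reports both because the ALU itself is agnostic to how the operands are interpreted.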
An ALU is a combinational logic circuit, meaning that its outputs will change asynchronously in response to input changes. In normal operation, stable signals are applied to all of the ALU inputs and, when enough time (known as the "propagation delay") has passed for the signals to propagate through the ALU circuitry, the result of the ALU operation appears at the ALU outputs. The external circuitry connected to the ALU is responsible for ensuring the stability of ALU input signals throughout the operation, and for allowing sufficient time for the signals to propagate through the ALU before sampling the ALU result.
In general, external circuitry controls an ALU by applying signals to its inputs. Typically, the external circuitry employs sequential logic to control the ALU operation, which is paced by a clock signal of a sufficiently low frequency to ensure enough time for the ALU outputs to settle under worstcase conditions.
For example, a CPU begins an ALU addition operation by routing operands from their sources (which are usually registers) to the ALU's operand inputs, while the control unit simultaneously applies a value to the ALU's opcode input, configuring it to perform addition. At the same time, the CPU also routes the ALU result output to a destination register that will receive the sum. The ALU's input signals, which are held stable until the next clock, are allowed to propagate through the ALU and to the destination register while the CPU waits for the next clock. When the next clock arrives, the destination register stores the ALU result and, since the ALU operation has completed, the ALU inputs may be set up for the next ALU operation.
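The clocked sequence described above can be sketched as a toy software model. In the following Python sketch (the register names, opcode mnemonics, and function names are invented for illustration), one call represents one clock period: operands are routed from registers, the combinational ALU output settles, and the destination register latches the result at the clock edge:

```python
registers = {"R0": 5, "R1": 7, "R2": 0}    # a tiny register file

OPERATIONS = {                             # combinational ALU: opcode -> function
    "ADD": lambda a, b: (a + b) & 0xFF,
    "SUB": lambda a, b: (a - b) & 0xFF,
}

def clock_step(op: str, src_a: str, src_b: str, dest: str) -> None:
    """One clock period: inputs held stable, result latched at the clock edge."""
    a, b = registers[src_a], registers[src_b]  # operands routed from registers
    y = OPERATIONS[op](a, b)                   # ALU output settles (propagation delay)
    registers[dest] = y                        # clock edge: destination stores Y

clock_step("ADD", "R0", "R1", "R2")            # R2 receives R0 + R1
```

The model glosses over timing entirely; in hardware, correctness depends on the clock period exceeding the worst-case propagation delay, as the preceding paragraphs explain.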
A number of basic arithmetic and bitwise logic functions are commonly supported by ALUs. Basic, general-purpose ALUs typically include these operations in their repertoires:^{[1]}^{[2]}^{[4]}

Arithmetic operations: add (with or without carry), subtract (with or without borrow), two's complement (negate), increment, decrement, and pass through (the operand appears unmodified at Y).
Bitwise logical operations: AND, OR, exclusive OR, and ones' complement.
Bit shift operations, which shift an operand left or right by one bit position:
Arithmetic shift: the operand is treated as a two's complement integer, meaning that the most significant bit is a "sign" bit, which is preserved on right shifts.
Logical shift: a logic zero is shifted into the operand; this is used to shift unsigned integers.
Rotate: the operand is treated as a circular buffer of bits, so its least and most significant bits are effectively adjacent.
Rotate through carry: the carry bit and operand are collectively treated as a circular buffer of bits.
ALU shift operations cause operand A (or B) to shift left or right (depending on the opcode) and the shifted operand appears at Y. Simple ALUs typically can shift the operand by only one bit position, whereas more complex ALUs employ barrel shifters that allow them to shift the operand by an arbitrary number of bits in one operation. In all single-bit shift operations, the bit shifted out of the operand appears on carry-out; the value of the bit shifted into the operand depends on the type of shift.
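The single-bit shift variants can be sketched in a few lines of Python (an illustrative model of an 8-bit operand; the function names are invented). Each function returns the shifted operand and the carry-out; the bit shifted out always appears on carry-out, while the bit shifted in depends on the shift type:

```python
def shift_left_logical(a: int) -> tuple[int, int]:
    """Logic zero shifted into bit 0; MS bit appears on carry-out."""
    return (a << 1) & 0xFF, a >> 7

def shift_left_arithmetic(a: int) -> tuple[int, int]:
    """For left shifts, arithmetic and logical shifts are identical."""
    return (a << 1) & 0xFF, a >> 7

def rotate_left(a: int) -> tuple[int, int]:
    """MS bit wraps around into bit 0 and also appears on carry-out."""
    msb = a >> 7
    return ((a << 1) & 0xFF) | msb, msb

def rotate_left_through_carry(a: int, carry_in: int) -> tuple[int, int]:
    """The stored carry enters bit 0; the MS bit becomes the new carry."""
    return ((a << 1) & 0xFF) | carry_in, a >> 7

rotate_left(0b1000_0001)   # returns (0b0000_0011, 1), i.e. (3, 1)
```

Rotate-through-carry is the variant that makes multiple-precision shifts possible, since the carry bit ferries the shifted-out bit from one word to the next.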
In integer arithmetic computations, multiple-precision arithmetic is an algorithm that operates on integers which are larger than the ALU word size. To do this, the algorithm treats each operand as an ordered collection of ALU-size fragments, arranged from most-significant (MS) to least-significant (LS) or vice versa. For example, in the case of an 8-bit ALU, the 24-bit integer 0x123456 would be treated as a collection of three 8-bit fragments: 0x12 (MS), 0x34, and 0x56 (LS). Since the size of a fragment exactly matches the ALU word size, the ALU can directly operate on this "piece" of operand.
The algorithm uses the ALU to directly operate on particular operand fragments and thus generate a corresponding fragment (a "partial") of the multiple-precision result. Each partial, when generated, is written to an associated region of storage that has been designated for the multiple-precision result. This process is repeated for all operand fragments so as to generate a complete collection of partials, which is the result of the multiple-precision operation.
In arithmetic operations (e.g., addition, subtraction), the algorithm starts by invoking an ALU operation on the operands' LS fragments, thereby producing both a LS partial and a carry-out bit. The algorithm writes the partial to designated storage, whereas the processor's state machine typically stores the carry-out bit to an ALU status register. The algorithm then advances to the next fragment of each operand's collection and invokes an ALU operation on these fragments along with the stored carry bit from the previous ALU operation, thus producing another (more significant) partial and a carry-out bit. As before, the carry bit is stored to the status register and the partial is written to designated storage. This process repeats until all operand fragments have been processed, resulting in a complete collection of partials in storage, which comprise the multiple-precision arithmetic result.
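The loop just described can be sketched in Python (an illustrative model, not any particular machine's code), using an 8-bit "ALU" add-with-carry on 24-bit operands split into three fragments, processed LS first:

```python
def alu_add8(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """One 8-bit ALU add-with-carry: returns (partial, carry-out)."""
    total = a + b + carry_in
    return total & 0xFF, total >> 8

def multiprecision_add(a_frags: list[int], b_frags: list[int]) -> tuple[list[int], int]:
    """Fragments ordered LS to MS, e.g. 0x123456 -> [0x56, 0x34, 0x12]."""
    partials, carry = [], 0                  # carry models the status register
    for a, b in zip(a_frags, b_frags):
        partial, carry = alu_add8(a, b, carry)
        partials.append(partial)             # write partial to designated storage
    return partials, carry

# 0x123456 + 0x0001FF = 0x123655
partials, carry = multiprecision_add([0x56, 0x34, 0x12], [0xFF, 0x01, 0x00])
```

The first fragment addition (0x56 + 0xFF) overflows the 8-bit word, so its carry-out is stored and fed as carry-in to the next, more significant fragment pair, exactly as the status register does in hardware.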
In multiple-precision shift operations, the order of operand fragment processing depends on the shift direction. In left-shift operations, fragments are processed LS first because the LS bit of each partial (which is conveyed via the stored carry bit) must be obtained from the MS bit of the previously left-shifted, less-significant operand fragment. Conversely, operands are processed MS first in right-shift operations because the MS bit of each partial must be obtained from the LS bit of the previously right-shifted, more-significant operand fragment.
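A multiple-precision left shift follows the same pattern as multiple-precision addition, with a shift-through-carry taking the place of add-with-carry. In this Python sketch (illustrative names; fragments ordered LS to MS), the MS bit shifted out of each fragment is carried into the LS bit of the next, more significant fragment:

```python
def alu_shl8(a: int, carry_in: int) -> tuple[int, int]:
    """One 8-bit left shift through carry: returns (partial, carry-out)."""
    return ((a << 1) & 0xFF) | carry_in, a >> 7

def multiprecision_shl(fragments: list[int]) -> tuple[list[int], int]:
    """Single-bit left shift of a multi-fragment operand, LS fragment first."""
    partials, carry = [], 0
    for frag in fragments:
        partial, carry = alu_shl8(frag, carry)  # stored carry enters bit 0
        partials.append(partial)
    return partials, carry

# 0x0080C0 << 1 = 0x010180
partials, carry = multiprecision_shl([0xC0, 0x80, 0x00])
```

A multiple-precision right shift would iterate over the same fragments in the opposite order (MS first), with each fragment's LS bit carried into the MS bit of the next, less significant fragment.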
In bitwise logical operations (e.g., logical AND, logical OR), the operand fragments may be processed in any arbitrary order because each partial depends only on the corresponding operand fragments (the stored carry bit from the previous ALU operation is ignored).
Although an ALU can be designed to perform complex functions, the resulting higher circuit complexity, cost, power consumption and larger size make this impractical in many cases. Consequently, ALUs are often limited to simple functions that can be executed at very high speeds (i.e., very short propagation delays), and the external processor circuitry is responsible for performing complex functions by orchestrating a sequence of simpler ALU operations.
For example, computing the square root of a number might be implemented in various ways, depending on ALU complexity:

Calculation in a single clock: a very complex ALU that calculates a square root in one operation.
Calculation pipeline: a group of simple ALUs that calculates a square root in stages, with intermediate results passing through a pipeline of ALUs; the circuit can accept new operands before finishing the previous ones and produces results as fast as the very complex ALU, though the results are delayed by the latency of the pipeline.
Iterative calculation: a simple ALU that calculates the square root through several steps under the direction of a control unit.
The implementations above transition from fastest and most expensive to slowest and least costly. The square root is calculated in all cases, but processors with simple ALUs will take longer to perform the calculation because multiple ALU operations must be performed.
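The iterative approach can be made concrete with the classic digit-by-digit binary square root, which uses only comparison, subtraction and shifts; this is a sketch of the kind of loop a processor with a simple ALU might execute under software or control-unit direction (one Python statement here may correspond to one or more ALU operations):

```python
def isqrt(n: int) -> int:
    """Integer square root using only shift, add, subtract and compare."""
    bit = 1
    while bit <= n:            # find the largest power of four not exceeding n
        bit <<= 2
    bit >>= 2
    result = 0
    while bit:                 # resolve one result bit per iteration
        if n >= result + bit:
            n -= result + bit
            result = (result >> 1) + bit
        else:
            result >>= 1
        bit >>= 2
    return result

isqrt(100)   # returns 10
```

Each loop iteration produces one bit of the result, so a 32-bit square root takes roughly sixteen iterations of simple ALU operations; this is the time/complexity trade-off the paragraph above describes.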
An ALU is usually implemented either as a standalone integrated circuit (IC), such as the 74181, or as part of a more complex IC. In the latter case, an ALU is typically instantiated by synthesizing it from a description written in VHDL, Verilog or some other hardware description language. For example, the following VHDL code describes a very simple 8-bit ALU:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity alu is
port (  -- the ALU connections to external circuitry:
  A  : in  signed(7 downto 0);    -- operand A
  B  : in  signed(7 downto 0);    -- operand B
  OP : in  unsigned(2 downto 0);  -- opcode
  Y  : out signed(7 downto 0));   -- operation result
end alu;

architecture behavioral of alu is
begin
  process (A, B, OP) is
  begin
    case OP is  -- decode the opcode and perform the operation:
      when "000" =>  Y <= A + B;    -- add
      when "001" =>  Y <= A - B;    -- subtract
      when "010" =>  Y <= A - 1;    -- decrement
      when "011" =>  Y <= A + 1;    -- increment
      when "100" =>  Y <= not A;    -- 1's complement
      when "101" =>  Y <= A and B;  -- bitwise AND
      when "110" =>  Y <= A or B;   -- bitwise OR
      when "111" =>  Y <= A xor B;  -- bitwise XOR
      when others => Y <= (others => 'X');  -- unreachable for a 3-bit opcode
    end case;
  end process;
end behavioral;
Mathematician John von Neumann proposed the ALU concept in 1945 in a report on the foundations for a new computer called the EDVAC.^{[5]}
The cost, size, and power consumption of electronic circuitry were relatively high throughout the infancy of the Information Age. Consequently, all early computers had a serial ALU that operated on one data bit at a time, although they often presented a wider word size to programmers. The first computer to have multiple parallel discrete single-bit ALU circuits was the 1951 Whirlwind I, which employed sixteen such "math units" to enable it to operate on 16-bit words.
In 1967, Fairchild introduced the first ALU-like device implemented as an integrated circuit, the Fairchild 3800, consisting of an eight-bit arithmetic unit with accumulator. It supported only addition and subtraction, with no logic functions.^{[6]}
Full integrated-circuit ALUs soon emerged, including four-bit ALUs such as the Am2901 and 74181. These devices were typically "bit slice" capable, meaning they had "carry lookahead" signals that facilitated the use of multiple interconnected ALU chips to create an ALU with a wider word size. These devices quickly became popular and were widely used in bit-slice minicomputers.
Microprocessors began to appear in the early 1970s. Even though transistors had become smaller, there was sometimes insufficient die space for a full-word-width ALU and, as a result, some early microprocessors employed a narrow ALU that required multiple cycles per machine language instruction. An example of this is the popular Zilog Z80, which performed eight-bit additions with a four-bit ALU.^{[7]} Over time, transistor geometries shrank further, following Moore's law, and it became feasible to build wider ALUs on microprocessors.
Modern integrated circuit (IC) transistors are orders of magnitude smaller than those of the early microprocessors, making it possible to fit highly complex ALUs on ICs. Today, many ALUs have wide word widths, and architectural enhancements such as barrel shifters and binary multipliers that allow them to perform, in a single clock cycle, operations that would have required multiple operations on earlier ALUs.
ALUs can be realized as mechanical, electromechanical or electronic circuits^{[8]}^{[failed verification]} and, in recent years, research into biological ALUs has been carried out^{[9]}^{[10]} (e.g., actinbased).^{[11]}