Software entropy is the idea that software eventually rots as it is changed if sufficient care is not taken to maintain coherence with product design and established design principles. The common usage is only tangentially related to entropy as defined in classical thermodynamics and statistical physics.
A related phenomenon is the perceived decay in the quality of otherwise unchanged software caused by inevitable changes in its environment, such as when operating systems and other components are upgraded or retired.
A 1992 work on software engineering by Ivar Jacobson et al.[1] describes software entropy by analogy with the second law of thermodynamics: as a software system is modified, its disorder, or entropy, tends to increase.
In their 1999 book The Pragmatic Programmer, Andrew Hunt and David Thomas used fixing broken windows as a metaphor for avoiding software entropy in software development.[3]
The purpose of writing software is to encode domain and design knowledge in a source form that can be translated into an executable format. To the extent that the source is a coherent, noise-free encoding of the relevant knowledge sets, its entropy may be considered low. After initial development and acceptance, the code enters the maintenance phase of the software lifecycle, where it may be allowed to accumulate defects (noise), represented by divergence from those knowledge sets (the domain and software design principles), thereby increasing the entropy of the software.
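As an illustration (a hypothetical sketch in Python, not drawn from the cited sources; the function names and the shipping rule are invented for the example), the first function below encodes a single domain rule directly, while the second shows the kind of accumulated, undocumented special cases that diverge from any coherent design:

    def shipping_cost(weight_kg):
        """Original form: a direct encoding of one domain rule (flat rate per kilogram)."""
        return 4.0 * weight_kg

    def shipping_cost_after_years_of_patches(weight_kg, country=None, rush=False):
        """Accumulated fixes that no longer reflect any single coherent rule."""
        cost = 4.0 * weight_kg
        if country == "NL":          # patch for one customer complaint; rationale undocumented
            cost = cost * 0.9 + 1.0  # magic numbers with no link back to the domain model
        if rush:
            cost += 7.5              # surcharge re-implemented elsewhere in the codebase
        if weight_kg > 30:
            cost = 120.0             # silently overrides every rule above
        return cost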
While there are known correlations (see software complexity), there is no direct relationship between software complexity and software entropy. Two pieces of software with equivalent complexity can exist at different levels of entropy or rot, and even a very simple program can suffer high levels of entropy as it passes through the hands of multiple developers.
The process of code refactoring can result in stepwise reductions in software entropy.
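As an illustration (a hypothetical sketch in Python, not taken from the cited sources; the functions and the discount rule are invented for the example), a small behavior-preserving refactoring removes a duplicated rule so that it is encoded in one place:

    # Before: the same discount rule is duplicated, so the two copies can drift apart.
    def invoice_total(items):
        total = sum(price * qty for price, qty in items)
        if total > 100:
            total *= 0.95  # 5% bulk discount
        return total

    def quote_total(items):
        amount = sum(price * qty for price, qty in items)
        if amount > 100:
            amount *= 0.95  # same rule, re-implemented
        return amount

    # After: the rule is encoded once; behavior is unchanged, and future changes stay coherent.
    def apply_bulk_discount(total, threshold=100, rate=0.05):
        return total * (1 - rate) if total > threshold else total

    def invoice_total_refactored(items):
        return apply_bulk_discount(sum(price * qty for price, qty in items))

    def quote_total_refactored(items):
        return apply_bulk_discount(sum(price * qty for price, qty in items))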