| Stable release | v0.3.13 / 16 May 2022 |
| Written in | Python, C++ |
| Operating system | Linux, macOS, Windows |
Google JAX is a machine learning framework for transforming numerical functions. It is described as bringing together a modified version of Autograd (automatic differentiation of Python functions to obtain their gradient functions) and TensorFlow's XLA (Accelerated Linear Algebra) compiler. It is designed to follow the structure and workflow of NumPy as closely as possible and works with existing frameworks such as TensorFlow and PyTorch. The primary transformations provided by JAX are:

- grad: automatic differentiation
- jit: just-in-time compilation
- vmap: automatic vectorization
- pmap: parallelization across multiple devices (SPMD programming)
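As a minimal illustration of the NumPy-style interface (the array values here are arbitrary), jax.numpy can stand in for numpy in most array code:

```python
import jax.numpy as jnp

# jax.numpy mirrors NumPy's API: array creation, broadcasting, reductions
a = jnp.linspace(0.0, 1.0, 5)
print(jnp.sum(a ** 2))  # the same expression works with numpy arrays
```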
Main article: Automatic differentiation
The code below demonstrates automatic differentiation with the grad function.
```python
# imports
from jax import grad
import jax.numpy as jnp

# define the logistic function
def logistic(x):
    return jnp.exp(x) / (jnp.exp(x) + 1)

# obtain the gradient function of the logistic function
grad_logistic = grad(logistic)

# evaluate the gradient of the logistic function at x = 1
grad_log_out = grad_logistic(1.0)
print(grad_log_out)
```
The final line should output approximately 0.19661194, the derivative of the logistic function at x = 1.
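Because grad returns an ordinary function, it composes with itself; applying it twice yields a second derivative. The following sketch (an added illustration with an arbitrary example function, not part of the original) shows this:

```python
from jax import grad

# f(x) = x^3 + 2x, so f''(x) = 6x
def f(x):
    return x ** 3 + 2.0 * x

d2f = grad(grad(f))
print(d2f(2.0))  # prints 12.0
```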
Main article: Just-in-time compilation
The code below demonstrates optimization through fusion with the jit function.
```python
# imports
from jax import jit
import jax.numpy as jnp

# define the cube function
def cube(x):
    return x * x * x

# generate data
x = jnp.ones((10000, 10000))

# create the jit version of the cube function
jit_cube = jit(cube)

# apply the cube and jit_cube functions to the same data for speed comparison
cube(x)
jit_cube(x)
```
The computation time for jit_cube(x) should be noticeably shorter than that for cube(x), and increasing the dimensions passed to jnp.ones will widen the difference.
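One way to measure the difference is a harness like the following sketch (an added assumption, reusing cube, jit_cube, and x from the block above). JAX dispatches work asynchronously, so block_until_ready() is required for meaningful timings, and a warm-up call keeps compilation time out of the measurement:

```python
import timeit

# warm-up: the first call to jit_cube traces and compiles the function
jit_cube(x).block_until_ready()

# block_until_ready() forces asynchronous dispatch to finish, so the
# timings reflect the actual computation
t_plain = timeit.timeit(lambda: cube(x).block_until_ready(), number=10)
t_jit = timeit.timeit(lambda: jit_cube(x).block_until_ready(), number=10)
print(f"cube: {t_plain:.4f}s  jit_cube: {t_jit:.4f}s")
```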
Main article: Vectorization (mathematics)
The code below demonstrates automatic vectorization with the vmap function.
```python
# imports
from functools import partial
from jax import vmap
import jax.numpy as jnp

# define function (a method lifted from a model class; self._net_grads,
# self._net_params, and self._flatten_batch are assumed to be defined
# elsewhere in that class)
def grads(self, inputs):
    in_grad_partial = partial(self._net_grads, self._net_params)
    grad_vmap = vmap(in_grad_partial)
    rich_grads = grad_vmap(inputs)
    flat_grads = jnp.asarray(self._flatten_batch(rich_grads))
    assert flat_grads.ndim == 2 and flat_grads.shape == inputs.shape
    return flat_grads
```
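Because the snippet above depends on its surrounding model class, a self-contained sketch may be clearer. The example below (with illustrative names, not taken from the original) uses vmap to apply a single-pair dot product across whole batches, the same idea as vectorized addition:

```python
from jax import vmap
import jax.numpy as jnp

# dot product of two individual vectors
def dot(a, b):
    return jnp.dot(a, b)

# two batches of four length-3 vectors
xs = jnp.arange(12.0).reshape(4, 3)
ys = jnp.ones((4, 3))

# vmap maps dot over the leading axis of both arguments, producing one
# dot product per pair without an explicit Python loop
batched_dot = vmap(dot)
print(batched_dot(xs, ys))  # shape (4,)
```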
Main article: Automatic parallelization
The code below demonstrates parallelization of a matrix multiplication with the pmap function.
```python
# import pmap and random from JAX; import JAX NumPy
from jax import pmap, random
import jax.numpy as jnp

# generate 2 random matrices of dimensions 5000 x 6000, one per device
random_keys = random.split(random.PRNGKey(0), 2)
matrices = pmap(lambda key: random.normal(key, (5000, 6000)))(random_keys)

# without data transfer, in parallel, perform a local matrix multiplication on each CPU/GPU
outputs = pmap(lambda x: jnp.dot(x, x.T))(matrices)

# without data transfer, in parallel, obtain the mean for both matrices on each CPU/GPU separately
means = pmap(jnp.mean)(outputs)
print(means)
```
The final line should print the mean of each product matrix, one value per device.
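Note that pmap assigns one XLA device per mapped element, so the example above needs at least two visible devices. The check below is an added sketch of that runtime assumption, not part of the original example:

```python
import jax

# the example maps over two random keys, so at least two XLA devices must
# be visible (on CPU-only machines, extra host devices can be exposed with
# XLA_FLAGS="--xla_force_host_platform_device_count=2")
assert jax.device_count() >= 2, "the pmap example requires at least two devices"
```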
Several Python libraries use JAX as a backend, including:

- Flax, a neural network library developed by Google Research
- Equinox, a neural network library
- Optax, a library for gradient processing and optimization developed by DeepMind
- NumPyro, a probabilistic programming library