The Alliance's motivations for creating AV1 included the high cost and uncertainty involved in patent licensing for HEVC, the MPEG-designed codec expected to succeed AVC. Additionally, the Alliance's seven founding members – Amazon, Cisco, Google, Intel, Microsoft, Mozilla and Netflix – announced that the format's initial focus would be the delivery of high-quality web video. The official announcement of AV1 came with the press release on the formation of the Alliance for Open Media on 1 September 2015. Only 42 days before, on 21 July 2015, HEVC Advance had announced its initial licensing offer, a steep increase over the royalty fees of HEVC's predecessor, AVC. Beyond the increased cost, the licensing process itself became more complex with HEVC. Unlike previous MPEG standards, whose technology could be licensed from a single entity, MPEG LA, by the time the HEVC standard was finished two patent pools had formed, with a third on the horizon. In addition, various patent holders were refusing to license patents via either pool, increasing uncertainty about HEVC's licensing. According to Microsoft's Ian LeGrow, an open-source, royalty-free technology was seen as the easiest way to eliminate this uncertainty around licensing.
Many of the components of the AV1 project were sourced from previous research efforts by Alliance members. Individual contributors had started experimental technology platforms years before: Xiph's/Mozilla's Daala published code in 2010, Google's experimental VP9 evolution project VP10 was announced on 12 September 2014, and Cisco's Thor was published on 11 August 2015. Building on the code base of VP9, AV1 incorporates additional techniques, several of which were developed in these experimental formats.
Many companies are members of the Alliance for Open Media, including Samsung, Vimeo, Microsoft, Netflix, Mozilla, AMD, Nvidia, Intel, ARM, Google, Facebook, Cisco, Amazon, Hulu, VideoLAN, Adobe and Apple. Apple is an AOMedia governing member, although it joined after the Alliance's formation. AV1 streams have also been officially added to the video types manageable by Coremedia.
The first version 0.1.0 of the AV1 reference codec was published on 7 April 2016. Although a soft feature freeze came into effect at the end of October 2017, development continued on several significant features. One of these, the bitstream format, was projected to be frozen in January 2018 but was delayed due to unresolved critical bugs as well as further changes to transformations, syntax, the prediction of motion vectors, and the completion of legal analysis. The Alliance announced the release of the AV1 bitstream specification on 28 March 2018, along with a reference, software-based encoder and decoder. On 25 June 2018, a validated version 1.0.0 of the specification was released. On 8 January 2019 a validated version 1.0.0 with Errata 1 of the specification was released.
Martin Smole from AOM member Bitmovin said that the computational efficiency of the reference encoder was the greatest remaining challenge after the bitstream format freeze had been completed. While the format was still being worked on, the encoder was not targeted for production use and speed optimizations were not prioritized. Consequently, the early version of AV1 was orders of magnitude slower than existing HEVC encoders. Much of the development effort was consequently shifted towards maturing the reference encoder. In March 2019, it was reported that the speed of the reference encoder had improved greatly and was within the same order of magnitude as encoders for other common formats.
AV1 aims to be a video format for the web that is both state of the art and royalty free. According to Matt Frost, head of strategy and partnerships in Google's Chrome Media team, "The mission of the Alliance for Open Media remains the same as the WebM project."
A recurring concern in standards development, not least for royalty-free multimedia formats, is the danger of accidentally infringing on patents that the format's creators and users did not know about. This concern has been raised regarding AV1, and previously VP8, VP9, Theora and IVC. The problem is not unique to royalty-free formats, but it uniquely threatens their status as royalty-free.
Whether such patents exist is impossible to ascertain until the format is old enough that any applicable patents would have expired (patent terms are at least 20 years in WTO countries).
To fulfill the goal of being royalty free, the development process requires that no feature can be adopted before it has been confirmed independently by two separate parties to not infringe on patents of competing companies. In cases where an alternative to a patent-protected technique is not available, owners of relevant patents have been invited to join the Alliance (even if they were already members of another patent pool). For example, Alliance members Apple, Cisco, Google, and Microsoft are also licensors in MPEG-LA's patent pool for H.264. As an additional protection for the royalty-free status of AV1, the Alliance has a legal defense fund to aid smaller Alliance members or AV1 licensees in the event they are sued for alleged patent infringement.
Under patent rules adopted from the World Wide Web Consortium (W3C), technology contributors license their AV1-connected patents to anyone, anywhere, anytime based on reciprocity (i.e. as long as the user does not engage in patent litigation). As a defensive condition, anyone engaging in patent litigation loses the right to the patents of all patent holders.
This treatment of intellectual property rights (IPR), and its absolute priority during development, stands in contrast to extant MPEG formats like AVC and HEVC. These were developed under an IPR non-involvement policy by their standardization organisations, as stipulated in the ITU-T's definition of an open standard. However, MPEG's chairman has argued that this practice has to change, and it is changing: EVC is also set to have a royalty-free subset, and will have switchable features in its bitstream to defend against future IPR threats.
The creation of royalty-free web standards has been a long-stated pursuit for the industry. In 2007, the proposal for HTML5 video specified Theora as mandatory to implement. The reason was that public content should be encoded in freely implementable formats, if only as a "baseline format", and that changing such a baseline format later would be hard because of network effects.
The Alliance for Open Media is a continuation of Google's efforts with the WebM project, which renewed the royalty-free competition after Theora had been surpassed by AVC. For companies such as Mozilla that distribute free software, AVC can be difficult to support as a per-copy royalty is unsustainable given the lack of revenue stream to support these payments in free software (see FRAND § Excluding costless distribution). Similarly, HEVC has not successfully convinced all licensors to allow an exception for freely distributed software (see HEVC § Provision for costless software).
The performance goals include "a step up from VP9 and HEVC" in efficiency for a low increase in complexity. NETVC's efficiency goal is a 25% improvement over HEVC. The primary complexity concern is software decoding, since hardware support will take time to reach users. However, for WebRTC, live encoding performance is also relevant, which is Cisco's agenda: Cisco is a manufacturer of videoconferencing equipment, and its Thor contributions aim at "reasonable compression at only moderate complexity".
AV1 is a traditional block-based frequency transform format featuring new techniques. Based on Google's VP9, AV1 incorporates additional techniques that mainly give encoders more coding options to enable better adaptation to different types of input.
The development process was such that coding tools were added to the reference code base as experiments, controlled by flags that enable or disable them at build time, for review by other group members as well as specialized teams that helped with and ensured hardware friendliness and compliance with intellectual property rights (TAPAS). When the feature gained some support in the community, the experiment was enabled by default, and ultimately had its flag removed when all of the reviews were passed. Experiment names were lowercased in the configure script and uppercased in conditional compilation flags.
To better and more reliably support HDR and color spaces, corresponding metadata can now be integrated into the video bitstream instead of being signaled in the container.
Frame content is separated into adjacent same-sized blocks referred to as superblocks. Similar to the concept of a macroblock, superblocks are square-shaped and can either be of size 128×128 or 64×64 pixels. Superblocks can be divided in smaller blocks according to different partitioning patterns. The four-way split pattern is the only pattern whose partitions can be recursively subdivided. This allows superblocks to be divided into partitions as small as 4×4 pixels.
"T-shaped" partitioning patterns are introduced, a feature developed for VP10, as well as horizontal or vertical splits into four stripes of 4:1 and 1:4 aspect ratio. The available partitioning patterns vary according to the block size, both 128×128 and 8×8 blocks can't use 4:1 and 1:4 splits. Moreover, 8×8 blocks can't use "T" shaped splits.
Two separate predictions can now be used on spatially different parts of a block using a smooth, oblique transition line (wedge-partitioned prediction). This enables more accurate separation of objects without the traditional staircase lines along the boundaries of square blocks.
More encoder parallelism is possible thanks to configurable prediction dependency between tile rows (ext_tile).
AV1 performs internal processing in higher precision (10 or 12 bits per sample), which leads to quality improvement by reducing rounding errors.
Predictions can be combined in more advanced ways (than a uniform average) in a block (compound prediction), including smooth and sharp transition gradients in different directions (wedge-partitioned prediction) as well as implicit masks that are based on the difference between the two predictors. This allows the combination of either two inter predictions or an inter and an intra prediction to be used in the same block.
A frame can reference 6 instead of 3 of the 8 available frame buffers for temporal (inter) prediction while providing more flexibility on bi-prediction (ext_refs).
The Warped Motion (warped_motion) and Global Motion (global_motion) tools in AV1 aim to reduce redundant information in motion vectors by recognizing patterns arising from camera motion. They implement ideas that were attempted in preceding formats like e.g. MPEG-4 ASP, albeit with a novel approach that works in three dimensions. There can be a set of warping parameters for a whole frame offered in the bitstream, or blocks can use a set of implicit local parameters that get computed based on surrounding blocks.
Switch frames (S-frames) are a new inter-frame type that can be predicted from already-decoded reference frames of a higher-resolution version of the same video, allowing a switch to a lower resolution in the adaptive bitrate streaming use case without needing a full keyframe at the beginning of a video segment.
Intra prediction consists of predicting the pixels of given blocks using only information available in the current frame. Most often, intra predictions are built from the neighboring pixels above and to the left of the predicted block. The DC predictor builds a prediction by averaging the pixels above and to the left of the block.
Directional predictors extrapolate these neighboring pixels according to a specified angle. In AV1, 8 main directional modes can be chosen. These modes start at an angle of 45 degrees and increase by a step size of 22.5 degrees up until 203 degrees. Furthermore, for each directional mode, six offsets of 3 degrees can be signaled for bigger blocks, three above the main angle and three below it, resulting in a total of 56 angles (ext_intra).
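The size of the resulting angle set can be tallied with a short sketch (illustrative only; the spec rounds the nominal angles to integers, e.g. 203 rather than 202.5):

```python
# 8 main directional modes, 45 degrees upward in steps of 22.5 degrees.
main_angles = [45 + 22.5 * i for i in range(8)]

# For each mode, six signalable offsets in 3-degree steps (three above,
# three below the main angle), plus the main angle itself.
offsets = [d * 3 for d in (-3, -2, -1, 0, 1, 2, 3)]

all_angles = sorted({a + o for a in main_angles for o in offsets})
print(len(all_angles))  # 56
```

No two combinations collide, since the 22.5-degree spacing between main modes exceeds the maximum 18-degree difference between offsets, which is why 8 × 7 = 56 distinct angles result.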
The "TrueMotion" predictor was replaced with a Paeth predictor which looks at the difference from the known pixel in the above-left corner to the pixel directly above and directly left of the new one and then chooses the one that lies in direction of the smaller gradient as predictor. A palette predictor is available for blocks with up to 8 dominant colors, such as some computer screen content. Correlations between the luminosity and the color information can now be exploited with a predictor for chroma blocks that is based on samples from the luma plane (cfl). In order to reduce visible boundaries along borders of inter-predicted blocks, a technique called overlapped block motion compensation (OBMC) can be used. This involves extending a block's size so that it overlaps with neighboring blocks by 2 to 32 pixels, and blending the overlapping parts together.
To transform the error remaining after prediction to the frequency domain, AV1 encoders can use square, 2:1/1:2, and 4:1/1:4 rectangular DCTs (rect_tx), as well as an asymmetric DST for blocks where the top and/or left edge is expected to have lower error thanks to prediction from nearby pixels, or choose to do no transform (identity transform).
It can combine two one-dimensional transforms in order to use different transforms for the horizontal and the vertical dimension (ext_tx).
AV1 has new optimized quantization matrices (aom_qm). The eight sets of quantization parameters that can be selected and signaled for each frame now have individual parameters for the two chroma planes and can use spatial prediction. On every new superblock, the quantization parameters can be adjusted by signaling an offset.
In-loop filtering combines Thor's constrained low-pass filter and Daala's directional deringing filter into the Constrained Directional Enhancement Filter, cdef. This is an edge-directed conditional replacement filter that smooths blocks roughly along the direction of the dominant edge to eliminate ringing artifacts.
Film grain synthesis (film_grain) improves coding of noisy signals using a parametric video coding approach.
Due to the randomness inherent to film grain noise, this signal component is traditionally either very expensive to code or prone to get damaged or lost, possibly leaving serious coding artifacts as residue. This tool circumvents these problems using analysis and synthesis, replacing parts of the signal with a visually similar synthetic texture based solely on subjective visual impression instead of objective similarity. It removes the grain component from the signal, analyzes its non-random characteristics, and instead transmits only descriptive parameters to the decoder, which adds back a synthetic, pseudorandom noise signal that's shaped after the original component. It is the visual equivalent of the Perceptual Noise Substitution technique used in AC3, AAC, Vorbis, and Opus audio codecs.
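The analysis/synthesis idea can be sketched in miniature as follows (a toy sketch with invented names; AV1 actually shapes the grain with an autoregressive model and piecewise-linear scaling functions, not plain Gaussian noise):

```python
import random

def synthesize_grain(width, height, seed, strength):
    """Toy grain synthesis: regenerate a pseudorandom noise field from a
    seed and a strength parameter carried in the bitstream. The same seed
    yields the same grain on every decode, so nothing random is stored."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, strength) for _ in range(width)]
            for _ in range(height)]

def apply_grain(frame, grain):
    """Add the synthetic grain back onto the denoised, decoded frame."""
    return [[px + g for px, g in zip(row, grain_row)]
            for row, grain_row in zip(frame, grain)]
```

The point of the scheme is that only the descriptive parameters (here, a seed and a strength) cross the channel, while the visually similar texture is recreated at the decoder.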
Daala's entropy coder (daala_ec), a non-binary arithmetic coder, was selected to replace VP9's binary entropy coder. The use of non-binary arithmetic coding helps evade patents, but also adds bit-level parallelism to an otherwise serial process, reducing clock rate demands on hardware implementations. In other words, the effectiveness of modern binary arithmetic coding such as CABAC is approached using an alphabet larger than binary, and hence with greater speed, much as in Huffman coding (though not as simple and fast as Huffman coding).
AV1 also gained the ability to adapt the symbol probabilities in the arithmetic coder per coded symbol instead of per frame (ec_adapt).
AV1 has provisions for temporal and spatial scalability.
Quality and efficiency
A first comparison from the beginning of June 2016 found AV1 roughly on par with HEVC, as did one using code from late January 2017.
In April 2017, using the 8 enabled experimental features at the time (of 77 total), Bitmovin was able to demonstrate favorable objective metrics, as well as visual results, compared to HEVC on the Sintel and Tears of Steel short films. A follow-up comparison by Jan Ozer of Streaming Media Magazine confirmed this, and concluded that "AV1 is at least as good as HEVC now". Ozer noted that his and Bitmovin's results contradicted a comparison by the Fraunhofer Institute for Telecommunications from late 2016 that had found AV1 65.7% less efficient than HEVC, underperforming even H.264/AVC, which the institute concluded was 10.5% more efficient. Ozer attributed the discrepancy to his use of encoding parameters endorsed by each encoder vendor, as well as to the newer AV1 encoder having more features. Decoding performance was about half the speed of VP9, according to internal measurements from 2017.
Tests from Netflix in 2017, based on measurements with PSNR and VMAF at 720p, showed that AV1 was about 25% more efficient than VP9 (libvpx). Tests from Facebook conducted in 2018, based on PSNR, showed that the AV1 reference encoder was able to achieve 34%, 46.2% and 50.3% higher data compression than libvpx-vp9, x264 High profile, and x264 Main profile respectively.
Tests from Moscow State University in 2017 found that VP9 required 31% and HEVC 22% more bitrate than AV1 in order to achieve similar levels of quality. The AV1 encoder was operating at speeds "2500–3500 times lower than competitors" due to the lack of speed optimization at that time.
Tests from the University of Waterloo in 2020 found that, using a mean opinion score (MOS) for 2160p (4K) video, AV1 had a bitrate saving of 9.5% compared to HEVC and 16.4% compared to VP9. They also concluded that, at the time of the study, 2160p AV1 encodes took on average 590× longer than encoding with AVC, while HEVC took on average 4.2× longer and VP9 on average 5.2× longer than AVC.
The latest encoder comparison by Streaming Media Magazine as of September 2020, which used moderate encoding speeds, VMAF, and a diverse set of short clips, indicated that the open-source libaom and SVT-AV1 encoders took about twice as long to encode as x265 in its "veryslow" preset while using 15–20% less bitrate, or about 45% less bitrate than x264 veryslow. The best-in-test AV1 encoder, Visionular's Aurora1, in its "slower" preset, was as fast as x265 veryslow while saving 50% bitrate over x264 veryslow.
CapFrameX has tested the AV1 decoding performance of GPUs. On 5 October 2022, Cloudflare announced a beta AV1 player.
Profiles and levels
AV1 defines three profiles for decoders which are Main, High, and Professional. The Main profile allows for a bit depth of 8 or 10 bits per sample with 4:0:0 (greyscale) and 4:2:0 (quarter) chroma sampling. The High profile further adds support for 4:4:4 chroma sampling (no subsampling). The Professional profile extends capabilities to full support for 4:0:0, 4:2:0, 4:2:2 (half) and 4:4:4 chroma sub-sampling with 8, 10 and 12 bit color depths.
Feature comparison between AV1 profiles

Feature              Main           High                  Professional
Bit depth            8 or 10        8 or 10               8, 10 & 12
Chroma subsampling   4:0:0, 4:2:0   4:0:0, 4:2:0, 4:4:4   4:0:0, 4:2:0, 4:2:2, 4:4:4
AV1 defines levels for decoders, ranging from 2.0 to 6.3, each setting maximum values for variables such as picture size and display rate. The levels that can be implemented depend on the hardware's capability.
Example resolutions would be 426×240@30fps for level 2.0, 854×480@30fps for level 3.0, 1920×1080@30fps for level 4.0, 3840×2160@60fps for level 5.1, 3840×2160@120fps for level 5.2, and 7680×4320@120fps for level 6.2. Level 7 has not been defined yet.
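These example mappings can be captured in a small lookup sketch (illustrative only; the names are invented, and these are merely the example resolutions listed above, not the normative level limits):

```python
# Example resolution/frame-rate pairs for a few AV1 levels.
LEVEL_EXAMPLES = {
    "2.0": (426, 240, 30),
    "3.0": (854, 480, 30),
    "4.0": (1920, 1080, 30),
    "5.1": (3840, 2160, 60),
    "5.2": (3840, 2160, 120),
    "6.2": (7680, 4320, 120),
}

def example_for(level):
    """Format the example resolution for a given level string."""
    w, h, fps = LEVEL_EXAMPLES[level]
    return f"{w}x{h}@{fps}fps"

print(example_for("5.1"))  # 3840x2160@60fps
```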
Each level definition also constrains further parameters, including the maximum header rate (Hz), the minimum compression basis, and the maximum number of tile columns.
Supported container formats
ISO base media file format: the ISOBMFF containerization spec by AOMedia was the first to be finalized and the first to gain adoption. This is the format used by YouTube.
Matroska: version 1 of the Matroska containerization spec was published in late 2018.
Real-time Transport Protocol: a preliminary RTP packetization spec by AOMedia defines the transmission of AV1 OBUs (Open Bitstream Units) directly as the RTP payload. It defines an RTP header extension that carries information about video frames and their dependencies, which is of general usefulness for scalable video coding. The carriage of raw video data also differs from, for example, MPEG-TS over RTP, in that other streams, such as audio, must be carried externally.
WebM: as a matter of formality, AV1 had not been sanctioned into the subset of Matroska known as WebM as of late 2019. However, support has been present in libwebm since May 2018.
On2 IVF: this format was inherited from the first public release of VP8, where it served as a simple development container. rav1e also supports this format.
Pre-standard WebM: Libaom featured early support for WebM before Matroska containerization was specified; this has since been changed to conform to the Matroska spec.
In October 2016, Netflix stated they expected to be an early adopter of AV1. On 5 February 2020, Netflix began using AV1 to stream select titles on Android, providing 20% improved compression efficiency over their VP9 streams. On 9 November 2021, Netflix announced it had begun streaming AV1 content to a number of TVs with AV1 decoders as well as the PlayStation 4 Pro.
In 2018, YouTube began rolling out AV1, starting with its AV1 Beta Launch Playlist. According to the description, the videos are (to begin with) encoded at high bitrate to test decoding performance, and YouTube has "ambitious goals" for rolling out AV1. YouTube for Android TV supports playback of videos encoded in AV1 on capable platforms as of version 2.10.13, released in early 2020.
In 2020, YouTube started serving videos at 8K resolution in AV1. Between 1 and 27 August 2022, the company paused new AV1 video roll-out globally, resuming the pipeline the following day, 28 August.
In February 2019, Facebook, following its own positive test results, said it would gradually roll out AV1 as soon as browser support emerged, starting with its most popular videos. Meta (Facebook's parent company) is also said to be interested in SVT-AV1, as Google engineer Matt Frost noted in an Intel YouTube video. The intention was to carry out a first test in 2023, when hardware support was expected to be widespread, though the company made no statement to that effect in a later Streaming Media video. The MSVP (Meta Scalable Video Processor) was announced, with an accompanying article published on a popular scientific research website on 15 October 2022.
On 4 November 2022, Meta announced AV1 support for Instagram Reels in an article on its technology blog and in a video posted on Meta's social channels by Mark Zuckerberg, which compared the AV1 codec with H.264/MPEG-4 AVC, stating: "Our Instagram engineering team developed a way to dramatically improve video quality. We made basic video processing 94% faster". Although Android supports AV1 playback natively, Meta implemented a testing protocol to ensure smooth playback until AV1 hardware support becomes pervasive, which was not expected until 2024 or beyond.
In June 2019, Vimeo's videos in the "Staff picks" channel were available in AV1. Vimeo is using and contributing to Mozilla's Rav1e encoder and expects, with further encoder improvements, to eventually provide AV1 support for all videos uploaded to Vimeo as well as the company's "Live" offering.
On 30 April 2020, iQIYI announced AV1 support for users on PC web browsers and Android devices, describing itself as the first Chinese video streaming site to adopt the AV1 format.
Twitch plans to roll out AV1 for its most popular content in 2022 or 2023, with universal support projected to arrive in 2024 or 2025.
In April 2021, Roku removed the YouTube TV app from the Roku streaming platform after a contract expired. It was later reported that Roku streaming devices do not use processors that support the AV1 codec. In December 2021, YouTube and Roku agreed to a multiyear deal to keep both the YouTube TV app and the YouTube app on the Roku streaming platform. Roku had argued that using processors in their streaming devices that support the royalty-free AV1 codec would increase costs to consumers.
Libaom is the reference implementation, comprising an encoder (aomenc) and a decoder (aomdec). As the former research codec, it was built to demonstrate efficient use of every feature, generally at the cost of encoding speed. At feature freeze, the encoder had become problematically slow, but dramatic speed optimizations with negligible efficiency impact have since been made.
SVT-AV1 includes an open-source encoder and decoder, developed primarily by Intel in collaboration with Netflix with a special focus on threading performance, with contributions from Cidana Corporation (Cidana Developers) and the Software Implementation Working Group (SIWG). In August 2020, the Alliance for Open Media's Software Implementation Working Group adopted SVT-AV1 as its production encoder. SVT-AV1 1.0.0 was released on 22 April 2022, and 1.4.0 on 30 November 2022.
rav1e is an encoder written in Rust and assembly language. rav1e takes the opposite developmental approach to aomenc: start out as the simplest (therefore fastest) conforming encoder, and then improve efficiency over time while remaining fast.
dav1d is a decoder written in C99 and assembly focused on speed and portability. The first official version (0.1) was released in December 2018. Version 0.2 was released in March 2019, with users able to "safely use the decoder on all platforms, with excellent performance", according to the developers. Version 0.3 was announced in May 2019 with further optimizations demonstrating performance 2 to 5 times faster than aomdec. Version 0.5 was released in October 2019. Firefox 67 switched from Libaom to dav1d as a default decoder in May 2019. In 2019, dav1d v0.5 was rated the best decoder in comparison to libgav1 and libaom. dav1d 0.9.0 was released on 17 May 2021. dav1d 0.9.2 was released on 3 September 2021. dav1d 1.0.0 was released on 18 March 2022.
Cisco AV1 is a proprietary live encoder that Cisco developed for its Webex teleconference products. The encoder is optimized for latency and the constraint of having a "usable CPU footprint", as with a "commodity laptop". Cisco stressed that at their operating point – high speed, low latency – the large toolset of AV1 does not preclude a low encoding complexity. Rather, the availability of tools for screen content and scalability in all profiles enabled them to find good compression-to-speed tradeoffs, better even than with HEVC. Compared to their previously deployed H.264 encoder, a particular area of improvement was high resolution screen sharing.
libgav1 is a decoder written in C++11 released by Google.
Several other parties have announced that they are working on encoders, including EVE for AV1 (in beta testing), NGCodec, Socionext, Aurora and MilliCast.
Firefox (software decoder since version 67.0, released in May 2019: enabled by default on all desktop platforms - Windows, macOS and Linux for both 32-bit and 64-bit systems). Hardware decoder on compatible platforms since version 100.0, released on 3 May 2022.
DaVinci Resolve (since version 17.2, May 2021, decoding support; since version 17.4.6, March 2022, Intel Arc hardware encoding support, since version 18.1, November 2022, Nvidia hardware encoding support)
OBS Studio (libaom and SVT-AV1 support since 27.2 Beta 1; since 29.1 Beta 1, encoding with GPUs that support it (QSV, NVENC, VCN 4.0), as well as AV1 streaming to YouTube and other platforms via RTMP (Real Time Messaging Protocol)). YouTube has also joined the SRT Alliance.
MKVToolNix (adoption of final av1-in-mkv spec since version 28)
Several Alliance members demonstrated AV1 enabled products at IBC 2018, including Socionext's hardware accelerated encoder. According to Socionext, the encoding accelerator is FPGA based and can run on an Amazon EC2 F1 cloud instance, where it runs 10 times faster than existing software encoders.
According to Mukund Srinivasan, chief business officer of AOM member Ittiam, early hardware support will be dominated by software running on non-CPU hardware (such as GPGPU, DSP or shader programs, as is the case with some VP9 hardware implementations), as fixed-function hardware will take 12–18 months after bitstream freeze until chips are available, plus 6 months for products based on those chips to hit the market. The bitstream was finally frozen on 28 March 2018, meaning chips could be available sometime between March and August 2019. According to the above forecast, products based on chips could then be on the market at the end of 2019 or the beginning of 2020.
On 7 January 2019, NGCodec announced AV1 support for NGCodec accelerated with Xilinx FPGAs.
On 18 April 2019, Allegro DVT announced its AL-E210 multi-format video encoder hardware IP, the first publicly announced hardware AV1 encoder.
On 23 April 2019, Rockchip announced their RK3588 SoC which features AV1 hardware decoding up to 4K 60fps at 10-bit color depth.
On 9 May 2019, Amphion announced a video decoder with AV1 support up to 4K 60fps. On 28 May 2019, Realtek announced the RTD2893, its first integrated circuit with AV1 decoding, up to 8K.
On 17 June 2019, Realtek announced the RTD1311 SoC for set-top boxes with an integrated AV1 decoder.
On 20 October 2019, a roadmap from Amlogic showed three set-top box SoCs able to decode AV1 content: the S805X2, S905X4 and S908X. The S905X4 was used in the SDMC DV8919 by December.
On 21 October 2019, Chips&Media announced the WAVE510A VPU supporting decoding AV1 at up to 4Kp120.
On 26 November 2019, MediaTek announced world's first smartphone SoC with an integrated AV1 decoder. The Dimensity 1000 is able to decode AV1 content up to 4K 60fps.
On 3 January 2020, LG Electronics announced that its 2020 8K TVs, which are based on the α9 Gen 3 processor, support AV1.
At CES 2020, Samsung announced that its 2020 8K QLED TVs, featuring Samsung's "Quantum Processor 8K SoC," are capable of decoding AV1.
On 13 August 2020, Intel announced that their Intel Xe-LP GPU in Tiger Lake will be their first product to include AV1 fixed-function hardware decoding.
On 1 September 2020, Nvidia announced that their Nvidia GeForce RTX 30 Series GPUs will support AV1 fixed-function hardware decoding.
On 2 September 2020, Intel officially launched Tiger Lake 11th Gen CPUs with AV1 fixed-function hardware decoding.
On 15 September 2020, AMD merged patches into the amdgpu drivers for Linux which adds support for AV1 decoding support on RDNA2 GPUs.
On 28 September 2020, Roku refreshed the Roku Ultra including AV1 support.
On 30 September 2020, Intel released version 20.3.0 for the Intel Media Driver which added support for AV1 decoding on Linux.
On 10 October 2020, Microsoft confirmed support for AV1 hardware decoding on Xe-LP(Gen12), Ampere and RDNA2 with a blog post.
On 11 January 2021, Intel announced new Pentium and Celeron models with 11th Gen UHD iGPUs capable of AV1 decoding.
On 12 January 2021, Samsung announced the Exynos 2100 with claimed AV1 decode support; however, Samsung has not yet enabled AV1 support.
On 16 March 2021, Intel officially launched Rocket Lake 11th Gen CPUs with AV1 fixed-function hardware decoding.
On 19 October 2021, Google officially launched the Tensor featuring BigOcean supporting AV1 fixed-function hardware decoding.
On 27 October 2021, Intel officially launched Alder Lake 12th Gen CPUs with AV1 fixed-function hardware decoding.
On 4 January 2022, Intel officially launched Alder Lake 12th Gen mobile CPUs and non-K series desktop CPUs with AV1 fixed-function hardware decoding.
On 17 February 2022, Intel officially announced that Arctic Sound-M has the industry's first hardware-based AV1 encoder inside a GPU.
On 30 March 2022, Intel officially announced the Intel Arc Alchemist family with AV1 fixed-function hardware decoding and fixed-function hardware encoding.
On 20 September 2022, Nvidia officially announced the Nvidia GeForce RTX 40 series with AV1 fixed-function hardware decoding and fixed-function hardware encoding.
On 22 September 2022, Google released the Chromecast with Google TV (HD), the first Chromecast device with support for AV1 hardware decoding.
On 26 September 2022, AMD released Ryzen 7000 series CPUs with an embedded GPU capable of AV1 hardware decoding.
On 27 September 2022, Intel officially launched Raptor Lake 13th Gen CPUs with AV1 fixed-function hardware decoding.
Sisvel, a Luxembourg-based company, has formed a patent pool and is selling patent licenses for AV1.
The pool was announced in early 2019, but a list of claimed patents was first published on 10 March 2020. This list contains over 1050 patents.
The substance of the patent claims has yet to be challenged. Sisvel has stated that it will not seek content royalties, but its license makes no exemption for software.
As of March 2020, the Alliance for Open Media has not responded to the list of patent claims. Their statement after Sisvel's initial announcement reiterated the commitment to their royalty-free patent license and made mention of the "AOMedia patent defense program to help protect AV1 ecosystem participants in the event of patent claims", but did not mention the Sisvel claim by name.
According to The WebM Project, Google does not plan to alter its current or upcoming usage plans for AV1 even though it is aware of the patent pool, and third parties cannot be stopped from demanding licensing fees for any technology that is open-source, royalty-free, and/or free of charge.
On 7 July 2022, it was revealed that the European Union's antitrust regulators had opened an investigation into AOM and its licensing policy, which the regulators said may restrict innovators' ability to compete with the AV1 technical specification and eliminate their incentives to innovate.
The Commission has information that AOM and its members may be imposing licensing terms (mandatory royalty-free cross-licensing) on innovators that were not part of AOM at the time of the creation of the AV1 technical specification, but whose patents are deemed essential to its technical specifications.
On 23 May 2023, the European Commission closed the investigation without further action, while noting that the closure does not constitute a finding of compliance or non-compliance with EU antitrust law.
AV1 Image File Format (AVIF) is an image file format specification for storing still images or image sequences compressed with AV1 in the HEIF file format. It competes with HEIC, which uses the same ISOBMFF-based container format but HEVC for compression.
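Because AVIF and HEIC share the same ISOBMFF container structure, the two formats can be told apart by the major brand recorded in the file's leading ftyp box. The sketch below, a minimal illustration rather than a full parser, builds a hand-made ftyp box of the kind an AVIF file might begin with and reads its brands back; the exact compatible-brand list is an assumption for illustration.

```python
import struct

def parse_ftyp(data: bytes):
    """Parse a leading ISOBMFF 'ftyp' box; return (major_brand, compatible_brands)."""
    # Every ISOBMFF box starts with a 4-byte big-endian size and a 4-byte type.
    size, box_type = struct.unpack(">I4s", data[:8])
    if box_type != b"ftyp":
        raise ValueError("not an ISOBMFF file (missing ftyp box)")
    major_brand = data[8:12].decode("ascii")
    # Bytes 12..16 hold the minor version; the remainder lists compatible brands.
    compatible = [data[i:i + 4].decode("ascii") for i in range(16, size, 4)]
    return major_brand, compatible

# A hand-built 24-byte ftyp box: major brand 'avif', minor version 0,
# compatible brands 'avif' and 'mif1' (illustrative values).
ftyp = struct.pack(">I4s4sI4s4s", 24, b"ftyp", b"avif", 0, b"avif", b"mif1")
print(parse_ftyp(ftyp))  # ('avif', ['avif', 'mif1'])
```

An HEIC file carries a different major brand (such as "heic") in the same position, which is how software dispatches between the HEVC and AV1 decode paths.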