
The AV1 Codec – A Techie’s Perspective


Intro

AV1, also known as AOMedia Video 1, is a video codec that has been touted as a game changer for major companies and consumers alike. The question is, how big of a leap forward is this relatively new codec? Nathan Larsen explains some of its intricacies to try to answer that question.


Background

The biggest thing with AV1 is that it’s royalty-free, which is about as close to patent-free as a modern codec gets. Several companies have claimed that their assorted patents apply to AV1 and have tried to collect license fees, but Google has fought back in two ways: 1) Google and the other members of the Alliance for Open Media (the group that developed the codec) will pay for any patent litigation that occurs if someone comes after you for using it, and 2) all of the free licenses to patents owned by Alliance for Open Media members self-destruct the moment you attempt any kind of patent litigation, which probably leaves a company in a far worse place than when it started if it tries to enforce its patents.

Adoption Motivation

This matters in practice: Fedora (the Linux distribution) actually had to pull H.264 and H.265/HEVC hardware-acceleration support from its default installation about a month ago, because it turned out that shipping it was technically infringing on various patents.

For H.264 specifically, the patent pool behind it (MPEG LA) not only attempted to enforce patents on codec implementations, it actually tried to put a royalty on distributing content encoded in H.264. Having a video embedded on your website? That costs a royalty. They eventually said they wouldn’t enforce royalties on “free video distributed over the Internet”, but they apparently gave up enforcing the rest altogether after a while because everybody refused to pay them.

This is Google’s motivation for AV1: they run YouTube, which re-encodes a staggering number of videos every day and distributes even more than that. So they have an economic incentive both to get smaller file sizes and to avoid formats that make them pay royalties. That’s also what briefly cost Roku the rights to the YouTube TV app: one of Google’s conditions was that Roku include a hardware AV1 decoder on any future devices, because Google intends to go AV1-only as much as possible within the next few years. They’ve apparently built an entire data center (or at least a very large section of an existing one) just dedicated to transcoding the YouTube library to AV1.

Technical Review

AV1 accomplishes those smaller file sizes with a much more efficient frequency-transform algorithm, but also by including a bunch of options that let you adapt the encoder to different types of video. There’s an algorithm that recognizes common forms of camera shake and adjusts the encoder to handle them better. There’s the “synthetic film grain” system, which denoises the video to remove the existing film grain, compresses the new “clean” version without the inefficiency created by truly random grain, and then adds back algorithmic film grain that compresses much better. There are much better prediction and filtering algorithms. And AV1 switches to a non-binary (multi-symbol) arithmetic coder, which dodges patents on binary arithmetic coders while apparently making the algorithm much more efficient to implement in hardware.
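
If you want to try that denoise-and-resynthesize grain path yourself, here’s a minimal sketch using libaom’s reference encoder, aomenc, driven from Python. It assumes aomenc is on your PATH and uses a placeholder input file; the flag values are illustrative examples, not tuned recommendations.

# Minimal sketch: encode with libaom's aomenc, letting it strip the real
# film grain and signal synthetic grain for the decoder to re-add.
# "input.y4m" is a placeholder clip; adjust quality/speed values to taste.
import subprocess

subprocess.run(
    [
        "aomenc", "input.y4m",
        "-o", "output.ivf",
        "--end-usage=q",             # constant-quality rate control
        "--cq-level=30",             # quality target (lower = better quality)
        "--cpu-used=6",              # speed vs. efficiency trade-off
        "--denoise-noise-level=25",  # denoise, then signal synthetic grain
    ],
    check=True,
)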

The main issue with AV1 right now is that it’s new and it’s slow. It’s getting better every day, but right now the software implementations top out at roughly real-time (1x) speed in the best of conditions and are often even slower, and hardware encoders are essentially unavailable outside of dedicated accelerator cards.
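
To get a feel for how much the speed setting matters, here’s a rough sketch that times the same clip at two libaom speed presets. It assumes aomenc is on your PATH and a short placeholder test clip; the absolute numbers will vary wildly by machine and source.

# Rough timing comparison of two libaom speed presets on the same clip.
# Lower --cpu-used values are slower but compress more efficiently.
import subprocess
import time

def encode(cpu_used: int) -> float:
    start = time.perf_counter()
    subprocess.run(
        ["aomenc", "input.y4m", "-o", f"out_cpu{cpu_used}.ivf",
         "--end-usage=q", "--cq-level=30", f"--cpu-used={cpu_used}"],
        check=True,
    )
    return time.perf_counter() - start

for preset in (2, 6):
    print(f"cpu-used={preset}: {encode(preset):.1f} seconds")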

NVIDIA only just implemented hardware AV1 decoding in its 3000-series cards, with its first encoders arriving on the 4000-series cards (which have their own issues). AMD is at about the same point with decoders and doesn’t even have plans for encoders yet. Intel included an encoder (and decoder) on its new Arc dedicated GPUs, which is cool, but those haven’t gotten wide adoption yet. Apple finally, begrudgingly, stuck a VP9 decoder into the M1 after it became clear that its push for HEVC adoption was going nowhere, but it looks like it’ll be at least another generation before we see any AV1 support (though they do seem to be working on it, as they added AVIF support in the most recent iOS/macOS update).
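
If you’re curious what AV1 paths your own machine has, a quick check is to list what your local ffmpeg build exposes; depending on how it was built, you may see software implementations (libdav1d, libaom-av1, libsvtav1) alongside hardware entries such as av1_cuvid, av1_nvenc, or av1_qsv. This sketch assumes ffmpeg is installed and on your PATH.

# List the AV1 decoders and encoders the local ffmpeg build knows about.
import subprocess

for kind in ("-decoders", "-encoders"):
    out = subprocess.run(
        ["ffmpeg", "-hide_banner", kind],
        capture_output=True, text=True, check=True,
    ).stdout
    av1_lines = [line.strip() for line in out.splitlines() if "av1" in line.lower()]
    print(kind.lstrip("-") + ":")
    for line in av1_lines:
        print("  " + line)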

The bizarre thing about Apple is that they’re a member of the Alliance for Open Media and apparently invested in AV1’s development, but they haven’t adopted it yet. I’m guessing it’s similar to the Lightning vs. USB-C situation, where Apple put money into developing the open standard because they wanted a say in designing it, but will stick, out of spite, with the expensive and annoying proprietary standard they also helped develop until either the EU or Google forces their hand.

Conclusion

It’s roughly 50% more efficient than H.264 and 20% more efficient than HEVC, without the royalty headaches, but it’s really slow to encode with right now unless you have bleeding-edge hardware.
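
To put those percentages in concrete terms, here’s a back-of-the-envelope sketch, assuming “X% better” means roughly X% lower bitrate at the same perceptual quality; the starting bitrates are made-up examples, not measurements.

# Illustrative only: what "50% better than H.264, 20% better than HEVC"
# would mean for bitrate at comparable quality. Starting figures are
# hypothetical, not benchmark results.
h264_mbps = 10.0
hevc_mbps = 6.0
av1_vs_h264 = h264_mbps * (1 - 0.50)   # ~5.0 Mbps instead of 10
av1_vs_hevc = hevc_mbps * (1 - 0.20)   # ~4.8 Mbps instead of 6
print(f"AV1 vs H.264: {av1_vs_h264} Mbps, AV1 vs HEVC: {av1_vs_hevc} Mbps")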

Learn More about Nathan Larsen
