
Context-Aware Encoding: Building a Better Mousetrap

Tech Talk

In 2015, video heads everywhere marveled at the research Netflix had just completed and applied to their entire library. Their Per-Title Encode Optimization approach leveraged an analysis of each title to determine the best way to encode it based on the complexity of the action in-frame.

Accounting for Video Complexity

In simple terms, not all video is the same. Sports is complex because it has lots of scene-to-scene motion, not just from the players but from camera moves. Episodic dramas may have much less scene-to-scene motion, and talking heads in news and current affairs programming generally have the least motion of all.

This motion relates to encoding complexity: the more motion there is, the more complex the video is to encode. Netflix’s approach was to look at each piece of content and determine how to encode it based on its inherent complexity. For an organization that delivers more streams than anyone else, every bit of savings and every slight stream optimization can have a massive return on investment.
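To make the per-title idea concrete, here is a minimal, purely illustrative sketch in Python. It is not Netflix’s actual algorithm; the complexity score and bitrate range are hypothetical assumptions chosen only to show how a measured complexity could drive encode settings.

```python
# Illustrative sketch only: NOT Netflix's actual algorithm, just a toy
# example of the per-title idea. The complexity metric and bitrate
# numbers are hypothetical assumptions.

def pick_bitrate_kbps(motion_complexity: float) -> int:
    """Map a normalized motion-complexity score (0.0 = static,
    1.0 = very high motion) to a hypothetical 1080p target bitrate."""
    low, high = 2_000, 6_000  # assumed bitrate range in kbps
    return int(low + (high - low) * motion_complexity)

print(pick_bitrate_kbps(0.15))  # talking heads -> 2600 kbps
print(pick_bitrate_kbps(0.90))  # live sports   -> 5600 kbps
```

The point of the sketch is simply that the encoder spends bits where the content demands them, instead of applying one fixed setting to every title.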

As impressive as this innovation was, there are always technologists in this space who look at challenges from a different angle. This is how we’ve gone from postage-stamp-sized video to today’s HDR 4K, DVR-enabled live experiences, and beyond. It seems that no matter how impressive the next innovation is, there is another streaming technologist thinking about how to build the proverbial better mousetrap.

Building Context-Aware Encoding

In this case, Brightcove’s video research team, led by Dr. Yuriy Reznik, developed one of our newest video innovations: Context-Aware Encoding (CAE). CAE takes the concept of content-based encoding optimization and augments it with additional information about network conditions and the device distribution among the audience.

Think of Context-Aware Encoding as having a compression expert in a box. For every title processed, and every frame therein, CAE is looking at the source asset and also making calculations about the target device and the network through which the stream will be delivered.

Using this approach, CAE optimizes the encoding process by adjusting multiple attributes of the bitrate ladder, not just the bitrate, and by eliminating renditions that aren’t needed. On average, this approach has shown savings on the order of one-third across most content types, and as much as 50% where in-frame activity is fairly simple.
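As a rough illustration of what adjusting the ladder, not just the bitrate, can mean in practice, here is a minimal Python sketch. It is not Brightcove’s CAE algorithm; the fixed ladder, device ceiling, bandwidth threshold, and scaling rule are hypothetical assumptions used only to show how context can both adjust and prune renditions.

```python
# Minimal sketch of the idea described above, not Brightcove's actual CAE
# algorithm. The ladder, device shares, and thresholds are hypothetical
# assumptions used only to illustrate adjusting and pruning renditions.

FIXED_LADDER = [  # (width, height, bitrate_kbps) of a traditional 7-rung ladder
    (416, 234, 400), (640, 360, 800), (768, 432, 1200), (960, 540, 2200),
    (1280, 720, 3500), (1920, 1080, 5000), (1920, 1080, 7800),
]

def context_aware_ladder(complexity, audience_max_height, typical_bandwidth_kbps):
    """Scale bitrates for content complexity, then drop renditions the
    audience cannot display or will rarely be able to fetch."""
    ladder = []
    for width, height, bitrate in FIXED_LADDER:
        if height > audience_max_height:        # no device in the audience benefits
            continue
        adjusted = int(bitrate * complexity)    # simple content needs fewer bits
        if adjusted > typical_bandwidth_kbps:   # rung unreachable on typical networks
            continue
        ladder.append((width, height, adjusted))
    return ladder

# Low-motion content, mostly 720p devices on roughly 4 Mbps connections:
print(context_aware_ladder(complexity=0.6, audience_max_height=720,
                           typical_bandwidth_kbps=4000))
```

Under these assumed inputs the seven-rung ladder collapses to five lower-bitrate rungs, which is the kind of rendition and bandwidth saving the paragraph above describes.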

Testing Context-Aware Encoding

Of particular note, Jan Ozer, one of the industry’s best-known video experts, put Brightcove’s Context-Aware Encoding through its paces in a broad and subjective test.

In Jan’s words:

“Why was CAE so successful? Because with this low motion, synthetic video, it allowed Brightcove to deliver a higher resolution video to lower bitrate viewers than a traditional ladder. The result illustrates the key advantages of per-title encoding; happier viewers, lower bandwidth consumption, and lower storage and encoding costs from dropping from a seven rung ladder to four.”

This is great initial feedback for Context-Aware Encoding, and we’re only getting started.

As we work with more publishers, networks, and broadcasters and process more content, the algorithm will learn over time. Ultimately, this will mean that we are continuing to drive the costs out of bringing rich, compelling video experiences to audiences everywhere and transforming the media experience.

