If you’re thinking about streaming media, you probably fall into one of two camps: either you already know something about transcoding, or you’re wondering why you keep hearing about it. If you aren’t sure you need it, bear with me for a few paragraphs. I’ll explain what transcoding is (and isn’t), and why it may be critical to your streaming success, particularly if you want to deliver adaptive streams to any device.
So, What Is Transcoding?
First, the word transcoding is commonly used as an umbrella term that covers several digital media tasks:
Transcoding, at a high level, is taking already-compressed (or encoded) content, decompressing (decoding) it, and then altering and recompressing it in some way. For example, you might change the audio and/or video format (codec) from one to another, such as converting from an MPEG-2 source (commonly used in broadcast television) to H.264 video and AAC audio (the most popular codecs for streaming). Other basic tasks might include adding watermarks, logos, or other graphics to your video.
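As a concrete sketch, here is roughly what that MPEG-2-to-H.264/AAC conversion looks like with the ffmpeg command-line tool. The file names and bitrate are placeholders, and ffmpeg itself is one tool choice among many; the snippet only builds the argument list rather than running it:

```python
# Build (but don't run) an ffmpeg command that decodes an MPEG-2 source
# and re-encodes it as H.264 video + AAC audio. File names are placeholders.
def build_transcode_cmd(src: str, dst: str, video_bitrate: str = "6M") -> list[str]:
    return [
        "ffmpeg",
        "-i", src,              # input: e.g. an MPEG-2 transport stream
        "-c:v", "libx264",      # re-encode video with the H.264 encoder
        "-b:v", video_bitrate,  # target video bitrate
        "-c:a", "aac",          # re-encode audio as AAC
        dst,                    # output container, e.g. MP4
    ]

cmd = build_transcode_cmd("source.ts", "output.mp4")
print(" ".join(cmd))
```

To actually run the command you would pass the list to something like `subprocess.run(cmd)`, assuming ffmpeg is installed.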
Transrating refers specifically to changing bitrates, such as taking a 4K video input stream at 13 Mbps and converting it into one or more lower-bitrate streams (also known as renditions): HD at 6 Mbps, or other renditions at 3 Mbps, 1.8 Mbps, 1 Mbps, 600 Kbps, and so on.
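A set of renditions like that is often called a bitrate ladder, and it can be expressed as simple data. The rendition names and exact values below are illustrative, not a recommendation; the check just confirms each rung sits below the one above it:

```python
# A bitrate ladder like the one described above, in kilobits per second.
# Names and exact values are illustrative, not a recommendation.
LADDER_KBPS = {
    "4K source": 13000,
    "HD":        6000,
    "high":      3000,
    "medium":    1800,
    "low":       1000,
    "minimum":   600,
}

def is_descending(ladder: dict[str, int]) -> bool:
    """A sane ladder strictly decreases from source to lowest rendition."""
    rates = list(ladder.values())
    return all(a > b for a, b in zip(rates, rates[1:]))

print(is_descending(LADDER_KBPS))  # → True
```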
Transsizing refers specifically to resizing the video frame; say, from a resolution of 3840×2160 (4K UHD) down to 1920×1080 (1080p) or 1280×720 (720p).
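The arithmetic behind those target frame sizes is just aspect-ratio scaling. A small sketch (the even-width rounding reflects a common encoder requirement, not a rule from this article):

```python
# Scale a frame to a target height, keeping the aspect ratio and
# rounding the width up to an even number (a common encoder requirement).
def scale_to_height(width: int, height: int, target_height: int) -> tuple[int, int]:
    scaled_width = round(width * target_height / height)
    if scaled_width % 2:
        scaled_width += 1
    return scaled_width, target_height

print(scale_to_height(3840, 2160, 1080))  # → (1920, 1080)
print(scale_to_height(3840, 2160, 720))   # → (1280, 720)
```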
So, when you say “transcoding,” you might be referring to any combination of the above tasks, and often are. Video conversion is computationally intensive, so transcoding generally requires more powerful hardware resources, including faster CPUs or graphics acceleration capabilities.
What Transcoding Is Not
Transcoding shouldn’t be confused with transmuxing, which is also referred to as repackaging, packetizing, or rewrapping. Transmuxing is when you take compressed audio and video and, without altering the actual audio or video content, (re)package it into different delivery formats.
For example, you might have H.264/AAC content, and by changing the container it’s packaged in, you can deliver it as HTTP Live Streaming (HLS), Smooth Streaming, HTTP Dynamic Streaming (HDS), or Dynamic Adaptive Streaming over HTTP (DASH). The computational overhead of transmuxing is much lower than that of transcoding.
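With ffmpeg, the difference shows up as a single flag: stream copy (`-c copy`) rewraps the existing streams instead of re-encoding them. As before, ffmpeg is one tool choice among many and the file names are placeholders; this snippet only builds the argument list:

```python
# Transmuxing with ffmpeg: "-c copy" repackages the streams into a new
# container (here HLS) without decoding or re-encoding them.
def build_transmux_cmd(src: str, playlist: str = "out.m3u8") -> list[str]:
    return [
        "ffmpeg",
        "-i", src,      # already-encoded H.264/AAC input
        "-c", "copy",   # copy both streams untouched: no transcoding
        "-f", "hls",    # rewrap into HLS segments plus a playlist
        playlist,
    ]

print(" ".join(build_transmux_cmd("source.mp4")))
```

Because nothing is decoded or re-encoded, this runs at roughly I/O speed rather than being CPU-bound.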
When Is Transcoding Critical?
Simply put: transcoding is critical when you need your content to reach more end users.
For instance, let’s say you want to do a live broadcast using a camera and encoder. You might compress your content with an RTMP encoder, choosing the H.264 video codec at 1080p.
This needs to be delivered to online viewers. But if you try to stream it directly, you’ll run into a few problems. First, viewers without sufficient bandwidth won’t be able to watch the stream; their players will buffer constantly while waiting for packets of that 1080p video to arrive. Second, the RTMP protocol is not widely supported for playback; Apple’s HLS is far more widely used. Without transcoding and transmuxing the video, you’ll exclude almost everyone with slower data speeds, as well as tablets, mobile phones, and connected TV devices.
Using transcoding software or a transcoding service, you can simultaneously create a set of time-aligned video streams, each with a different bitrate and frame size, while converting the codecs and protocols to reach additional viewers. This set of internet-friendly streams can then be packaged into one or more adaptive streaming formats (e.g., HLS), allowing playback on almost any screen in the world.
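In HLS, what ties those time-aligned renditions together is a master playlist that lists each one with its bandwidth and resolution so the player can switch between them. A minimal sketch, with illustrative values; real packagers generate this for you:

```python
# Generate a minimal HLS master playlist for a set of renditions.
# (bandwidth in bits/sec, width, height, variant playlist URI) -- illustrative.
RENDITIONS = [
    (6_000_000, 1920, 1080, "1080p.m3u8"),
    (3_000_000, 1280, 720,  "720p.m3u8"),
    (1_000_000, 854,  480,  "480p.m3u8"),
]

def master_playlist(renditions) -> str:
    lines = ["#EXTM3U"]
    for bandwidth, w, h, uri in renditions:
        lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={w}x{h}")
        lines.append(uri)
    return "\n".join(lines)

print(master_playlist(RENDITIONS))
```

The player reads this file first, then picks whichever variant playlist best matches its measured bandwidth and screen size.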
Another common example is broadcasting live streams from an IP camera, as with surveillance cameras and traffic cams. Again, to reach the largest number of viewers with the best quality their bandwidth and devices allow, you’d want to support adaptive streaming. You’d deliver one HD H.264/AAC stream to your transcoder (typically running on a server instance in the cloud), which in turn would create multiple H.264/AAC renditions at different bitrates and resolutions. Then you’d have your media server (which may be the same server as your transcoder) package those renditions into one or more adaptive streaming formats before delivering them to end users.