Oct 25, 2021 - By Kai Hayashi, Angelo Paxinos, Yueshi Shen

Low Latency, High Reach: Creating an Unparalleled Live Video Streaming Network at Twitch

This is part of a series highlighting our Interactive Video Service (IVS) team, who pioneer low-latency live streaming solutions at Twitch and other companies. 

Learn more about how we’re changing the game by reading about our IVS Core team, who bring Twitch’s streaming tech to other companies, and our Global Service Operations Center, who are the first responders to infrastructure failures world-wide. 


Yueshi Shen and Kai Hayashi want to make live streaming accessible to more people around the world. The principal engineers are part of the Twitch Video team that developed a breakthrough live streaming solution, Amazon Interactive Video Service (IVS), that’s ideal for creating interactive video experiences regardless of technical skill or internet access. With a focus on a quick and easy setup at a moment’s notice, IVS provides a comprehensive, low-latency streaming solution that allows users to focus on building engaging, interactive experiences for their viewers. We connected with Kai and Yueshi about the challenges of building a worldwide live video network, their vision for the future of live streaming technology, and what it’s like to be pioneers in this new field. 

Angelo Paxinos: How long have you worked at Twitch, and what are you focusing on? 

Kai Hayashi: I’m a principal software engineer on the Video Platform team based in San Francisco. I’m focused on scalability, reach, and security with a goal of offering a reliable web service to IVS customers around the world. I’ve been working at Twitch for almost seven years, beginning back in 2013 when I started the team that built our in-house analytics system from scratch. That system receives over 60 gigabytes of data per minute at its peak and processes it for analysis. 

Yueshi Shen: I’ve been a principal engineer on the Transcoder team at Twitch for six years, and my domain is core video compression and streaming technology to create a better user experience.

I live in the Bay Area, though it also feels a bit like I live in Asia; my work hours often extend to 10 p.m. because I work closely with some of our larger APAC customers on every aspect (technology, product, commercial) related to launching IVS.

AP: Tell me a bit more about IVS. What is it, who uses it, and how does it relate to Twitch streams? 

Yueshi: IVS is a fully managed live streaming solution. When you stream to Amazon IVS, the service provides everything you need to create low-latency live videos available to viewers around the world. We handle the ingestion, transcoding, packaging, and delivery of your live content, using the same technology that powers the Twitch platform. Arizona State University and Amazon Live are great examples of our technology in action. 
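
For developers curious what “fully managed” looks like in practice, here is a minimal sketch of creating a low-latency channel with the AWS SDK for Python (boto3); the region and channel name are placeholders, and it assumes AWS credentials are already configured.

```python
# Minimal sketch: create a low-latency Amazon IVS channel with boto3.
# Region and channel name are placeholders; AWS credentials are assumed to be configured.
import boto3

ivs = boto3.client("ivs", region_name="us-west-2")

response = ivs.create_channel(
    name="my-first-ivs-channel",  # placeholder name
    latencyMode="LOW",            # low-latency streaming, as discussed in the interview
    type="STANDARD",              # STANDARD channels are transcoded into multiple renditions
)

channel = response["channel"]
stream_key = response["streamKey"]

# The ingest endpoint and stream key go into broadcast software (e.g. OBS);
# the playback URL goes into a player on the viewer side.
print("Ingest endpoint:", channel["ingestEndpoint"])
print("Playback URL:   ", channel["playbackUrl"])
print("Stream key:     ", stream_key["value"])
```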

AP: What is Twitch trying to achieve?

Kai: Our mission is to empower people of varying levels of technical skill and internet access to be able to participate in live streaming. To enable that mission, we’ve built a vertically integrated video solution where we own each step of the process: the ingestion, the transcoding, the distribution, and the player–all of which allows us to deliver on our mission. We have hundreds of thousands of live channels at any given point in time—versus, say, traditional television, which might have about 40, or something like that. And we want to have a wide range of quality so lots of people can watch. For example, say we had five different quality levels for every broadcaster and there were 300,000 broadcasters; that’s 1,500,000 different streams.
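
Written out as quick arithmetic, using the illustrative figures from the answer above rather than exact production counts:

```python
# Quick arithmetic with the illustrative numbers above (not exact production counts).
concurrent_channels = 300_000  # live broadcasters at a given moment
quality_levels = 5             # renditions offered per broadcaster

distinct_streams = concurrent_channels * quality_levels
print(f"{distinct_streams:,} distinct streams to produce and deliver")  # 1,500,000
```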

Yueshi: We’re not only building an international TV station, making live streams available to millions of users; we’re also providing interactivity. Creators and their viewers can communicate with each other whether the viewer has a robust internet connection or is limited to one megabit per second.

AP: What are each of you focused on day-to-day? 

Yueshi: I’m focused on the user experience. There are a variety of ways to measure the quality of that experience, and sometimes we have to balance competing factors. For example, we use player buffers to keep the live streams smooth, because with the internet, no matter where you are, you’ll have connection problems. Essentially, our player downloads a bit of the video ahead of time and stores it in the player, so if there are any connection problems – maybe your mom is using the microwave next door and it interferes with your WiFi – your stream can still play smoothly. That said, buffers can increase latency, so there are trade-offs; it’s a technical challenge.
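
As a toy illustration of that buffer-versus-latency trade-off (the numbers are invented, and this is not the actual Twitch or IVS player logic):

```python
# Toy illustration of the buffer-vs-latency trade-off described above.
# All numbers are made up; this is not the actual Twitch/IVS player implementation.

def playback_outcome(buffer_seconds: float, hiccup_seconds: float) -> str:
    """A player holding `buffer_seconds` of video can ride out any network
    hiccup shorter than the buffer, but every buffered second is also a
    second of extra delay between the broadcaster and the viewer."""
    if hiccup_seconds <= buffer_seconds:
        return f"smooth playback, ~{buffer_seconds:.1f}s of added latency"
    return f"stall for ~{hiccup_seconds - buffer_seconds:.1f}s (buffer ran dry)"

for buf in (1.0, 3.0, 10.0):        # candidate buffer sizes in seconds
    for hiccup in (0.5, 2.0, 5.0):  # e.g. the microwave-vs-WiFi moment
        print(f"buffer={buf:>4}s, hiccup={hiccup:>3}s -> {playback_outcome(buf, hiccup)}")
```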

Kai: I’m mainly working to increase the number of locations where we transcode. When you upload your video, the live stream gets transcoded, meaning that the original video is converted into multiple formats that play (stream) well on viewers’ devices under different network conditions. We distribute that transcode all around the globe. All the pieces that make this technology work are tied together, so the teams in charge of these various pieces need to talk to each other. We have to communicate internally really well.
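
To make “converted into multiple formats” a bit more concrete, here is a small sketch that drives the open-source ffmpeg tool from Python to produce a generic bitrate ladder. It is not Twitch’s or IVS’s actual transcoder (which, as Yueshi mentions later, runs on purpose-built hardware), and a real live pipeline would emit HLS/DASH segments rather than standalone files.

```python
# Sketch: turn one source video into a small ladder of renditions with ffmpeg.
# The ladder below is a generic example, not Twitch's or IVS's actual settings.
import subprocess

RENDITIONS = [
    # (name,   resolution,  video bitrate)
    ("720p", "1280x720", "3000k"),
    ("480p", "854x480",  "1200k"),
    ("360p", "640x360",  "700k"),
]

def transcode(source: str) -> None:
    for name, resolution, bitrate in RENDITIONS:
        subprocess.run(
            [
                "ffmpeg", "-i", source,
                "-c:v", "libx264", "-b:v", bitrate, "-s", resolution,
                "-c:a", "aac", "-b:a", "128k",
                f"{name}.mp4",
            ],
            check=True,
        )

# transcode("source.mp4")
# Viewers' players then pick whichever rendition fits their bandwidth.
```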

AP: What are some challenges you’re facing right now?

Kai: As we’ve been creating more and more transcoding locations, the teams have to change the configuration for each new location and make sure all the transcoders are able to push into the distribution system. There’s a whole chain of dependencies that need to know about these transcoders.

Right now I’m trying to understand how we distribute this configuration. How do the different services collect information when they’re built? Finding out involves working with a lot of teams across our whole org, because everyone has different requirements. So I’m starting small and working with one team to do it, and we’ll go from there.

In addition, there’s just the sheer scale of this: When a big channel goes live, you suddenly have a few million people watching a channel. That’s something that you have to design for up front.
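
As a purely hypothetical sketch of the kind of per-location record that chain of dependencies would need to agree on (none of these field names or values come from Twitch’s actual systems):

```python
# Purely hypothetical sketch of per-location transcoder configuration.
# None of these field names or values come from Twitch's actual systems.
from dataclasses import dataclass, field

@dataclass
class TranscodeLocation:
    region: str                      # where the transcoders physically run
    transcoder_endpoints: list[str]  # hosts the ingest tier can send source streams to
    distribution_targets: list[str]  # POPs this location is allowed to push renditions to
    renditions: list[str] = field(default_factory=lambda: ["720p", "480p", "360p"])

# Every service in the chain (ingest, transcode, distribution, player APIs)
# needs a consistent view of records like this one when a new location comes online.
sao_paulo = TranscodeLocation(
    region="sa-east",
    transcoder_endpoints=["transcode-1.sa-east.example.internal"],
    distribution_targets=["pop-gru", "pop-scl"],
)
print(sao_paulo)
```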

Yueshi: The video service is already available in many countries around the world, and we are expanding our edge locations to many others, especially in developing countries where creators and viewers are mobile-first and internet connectivity is not always reliable.

AP: What does it take to achieve something like this on a global scale?

Kai: We’ve been successful in providing high-quality, low-latency video at scale because we have a really diverse team solving the technical challenges. A lot of our success in providing a service that works just as well on a home computer in the San Francisco Bay Area as on a phone in Brazil comes from having experts in the full stack of hardware, fiber optics, networking, internet peering, distributed systems, and on and on. Bringing all of those people together to work toward a common goal is a full-time job—and one that both Yueshi and I have been privileged to partake in over the years. Hopefully we’ve built lasting visions with the teams and will continue to provide value to new and old team members well into the future.

AP: What’s it like being a pioneer in this field?

Yueshi: The scale we are dealing with, and the diversity of our users, require us to build new technology. For example, a key part of the interactive experience is reducing latency—the amount of time it takes from when a broadcaster waves at the camera to when their viewers see that wave show up on the screen, or the time it takes for a broadcaster’s words to reach their audience. Twitch had already reduced latency from 15 seconds down to about 10 seconds when I joined six years ago. But even then, imagine yourself as a broadcaster. When you say something and you don’t get a response for 10 seconds, it makes conversation difficult. So we’ve invented what we call low-latency HTTP live streaming, and we’ve further reduced that lag time down to 3 seconds. In Korea, which has some of the best network conditions in the world, we’re down to 1.5 seconds in certain cases, and the broadcasters really notice that.

And now that we’ve launched Amazon IVS, the same technology we bring to the Twitch community is available for any company or developer to use, too. So there’s huge potential there.
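
For a rough intuition about where that glass-to-glass delay comes from in segment-based HTTP live streaming, here is a deliberately simplified model; the numbers and the two-term formula are illustrative, not how the Twitch or IVS player actually computes latency.

```python
# Rough, illustrative model of glass-to-glass latency in HTTP live streaming.
# Real systems involve many more factors; numbers here are just for intuition.

def approx_latency(segment_seconds: float, buffered_segments: int,
                   encode_and_network_seconds: float = 1.0) -> float:
    """Classic HLS players wait for whole segments and keep a few of them
    buffered, so latency grows with segment length times buffer depth."""
    return encode_and_network_seconds + segment_seconds * buffered_segments

# Conventional HLS: long segments, several of them buffered.
print(approx_latency(segment_seconds=4.0, buffered_segments=3))  # ~13s

# Low-latency approach: deliver much smaller pieces and buffer far fewer seconds of video.
print(approx_latency(segment_seconds=0.5, buffered_segments=4))  # ~3s
```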

Kai: It’s not like there’s an off-the-shelf solution for anything we’re doing. And that’s exciting, because we can make our own requirements. One of our priorities is making sure the solution integrates well with our current system so that it’s natural for people to upgrade to the new system—or downgrade from it, if they can’t support it. There’s all sorts of scenarios we need to be prepared for.

Yueshi: Another focus for us is transcoding. If we receive a high-bitrate source from broadcasters with a reliable network, we want to make sure that when we distribute the video, people who have high bandwidth can watch in high quality, while people who have low bandwidth can still watch, but at the quality available to them. So when we receive one bitrate, we transcode that into multiple bitrates so that people can watch. Transcoding is computationally very expensive. At the time I joined Twitch, we were only able to offer transcoding to 2 or 3 percent of our channels.

Our video team has built a hardware-based transcoder solution, which is much more cost-effective and allows us to scale. We’re expanding our capacity by 10 times, at only two times the cost. That’s a big benefit for the community, especially for live video and gaming content that requires high-quality streaming to function. It’s a complex and expensive problem, but one that’s already making an impact.
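
Spelling out that capacity claim as simple arithmetic (the baseline figures below are placeholders; only the 10x and 2x ratios come from the answer above):

```python
# Simple arithmetic behind "10x the capacity at 2x the cost".
# Baseline figures are placeholders; only the ratios come from the quote above.
baseline_channels = 1_000  # hypothetical transcoded channels before
baseline_cost = 100.0      # hypothetical cost units before

hw_channels = baseline_channels * 10  # 10x capacity with hardware transcoders
hw_cost = baseline_cost * 2           # at only 2x the cost

print(baseline_cost / baseline_channels)  # 0.10 cost units per channel before
print(hw_cost / hw_channels)              # 0.02 cost units per channel after, i.e. 5x cheaper
```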

AP: Given those challenges, how do you go about building a worldwide live video network? How do you know where to focus?

Yueshi: We deploy a lot of physical infrastructure. We partner with the local Internet Service Providers (ISPs). We also have what we call a backbone connection, which is the connection from our data center to the point of presence (POP), an access point or location at which our viewers on various ISPs around the world download their live video streams.  

We are a global live streaming company, so we see that American broadcasts are often watched in the U.K. and other English-speaking countries, like India and Canada, or even in Southeast Asia. If we had to replicate every channel to everywhere, it would be a huge burden because there’s so much traffic that we need to shuffle around. We have to be smart to make sure the content goes to where it’s watched.

Kai: There’s a metric called reach, which estimates what percentage of people have a certain level of internet access and would actually get a decent quality of service from our platform. That metric gives us a good high-level idea, but it’s difficult to measure. So it’s even more valuable to actually go interact with our users, which is what we did two years ago. We sent a team to Brazil, and they hung out with partners—users who typically have large followings and monetize their channels. We looked at their experience watching live streams in a variety of locations. You start to see, “Oh, there’s 10 people in this house and they all use Android phones to watch on one WiFi device." The internet is complicated. Ideally, you want the server that’s serving video to be as close to the end user as possible. It all starts with physical locations, the ISPs, which are basically a gateway to everything on the internet. Servers physically connect to these ISPs and then pass that internet access along in a “chain” that ultimately connects to the modem and router in your house. Boom. Now you have internet access. 

In order to provide the best possible quality of service to customers, our team built servers and connected with local Internet Service Providers at selected locations or points of presence (POPs) around the world. The topology of our POPs is carefully designed to maximize our service’s reach and quality of service, and it helps keep the cost of our video infrastructure under control.
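
As a purely hypothetical illustration of how a reach-style metric could be computed, here is a sketch that checks sampled viewer bandwidth against the lowest rendition bitrate; the threshold, headroom factor, and samples are all invented, and as Kai notes, the real measurement is considerably harder.

```python
# Hypothetical illustration of a reach-style metric: what fraction of sampled
# viewers have enough bandwidth to sustain at least our lowest-bitrate rendition?
# Threshold, headroom, and samples are invented; real measurement is much harder.

LOWEST_RENDITION_KBPS = 700  # e.g. a 360p rendition
HEADROOM = 1.5               # want some margin over the nominal bitrate

sampled_bandwidth_kbps = [250, 800, 1_200, 450, 3_000, 950, 600, 5_000]

reachable = [bw for bw in sampled_bandwidth_kbps
             if bw >= LOWEST_RENDITION_KBPS * HEADROOM]

reach = len(reachable) / len(sampled_bandwidth_kbps)
print(f"Estimated reach: {reach:.0%}")  # share of samples that would get decent playback
```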

AP: What do you find most gratifying about working on the IVS team?

Yueshi: Twitch gives me the freedom to work on anything relevant to the business or to this platform. Working on the most relevant topics improves my skills and helps me become a better engineer.

Kai: The question that motivates me most is: How do we continue to deliver 24/7, 365 days a year, on a platform where more than 1.5 million people are streaming at any given moment? That’s something that I’ve spent a lot of time designing. And for a lot of engineers at Twitch, that’s what they work on—figuring out how we make our platform even more reliable. 


Want to join our quest to empower live communities on the internet? Find out more about what it is like to work at Twitch on our Career Site, LinkedIn, and Instagram, or check out our Job Openings and apply.
