Just look at the bitrate of what you’re streaming, multiply it by 3 (one for each simultaneous stream), then add a little extra for overhead.
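Something like this back-of-the-envelope sketch, for example (the 8 Mbit/s per-stream limit and the ~10% overhead margin are just placeholder numbers, not recommendations):

```python
# Rough upload-bandwidth estimate: per-stream bitrate x number of
# concurrent streams, plus a safety margin for protocol/container overhead.

STREAM_BITRATE_MBPS = 8    # example per-stream bitrate limit (Mbit/s)
CONCURRENT_STREAMS = 3     # example number of simultaneous viewers
OVERHEAD_FACTOR = 1.1      # assumed ~10% extra for overhead; pick your own margin

needed_upload = STREAM_BITRATE_MBPS * CONCURRENT_STREAMS * OVERHEAD_FACTOR
print(f"Plan for roughly {needed_upload:.1f} Mbit/s of upload")
# -> Plan for roughly 26.4 Mbit/s of upload
```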
I have 35 Mbps upload from the ISP and limit each stream to 8 Mbps. This covers direct streaming of all my 1080p content and a 4K transcode as needed.
My family is very satisfied with 6 Mbit/s per stream. Some HEVC, most H264. They see it as high quality. Three streams would be 18 to 20 Mbit/s.
Are you transcoding?
4 Mbit/s per client for 1080p is generally a workable minimum for the average casual watcher if you have H.265-compatible clients (and a decent encoder, like a modern Intel CPU, for example); 6-8 Mbit/s per client if it’s H.264 only.
Remember that the bitrate-to-quality curve for live transcoding isn’t as good as a slow, non-real-time encode done the brute-force way on a CPU. So if you have a few videos that look great at 4 Mbit/s, don’t assume your own transcodes will look quite that nice; you’re using a GPU to get it done as quickly as possible with acceptable quality, not as slowly and carefully as possible for the best compression.
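For a sense of scale, here’s a quick sketch of what those per-client bitrates mean in data volume per hour; it’s plain arithmetic, nothing Jellyfin-specific, using decimal GB:

```python
# Data transferred per hour of continuous watching at a given stream bitrate.
# 1 Mbit/s = 3600 Mbit/hour = 450 MB/hour = 0.45 GB/hour (decimal units).

def gb_per_hour(bitrate_mbps: float) -> float:
    """Gigabytes transferred per hour of continuous streaming."""
    return bitrate_mbps * 3600 / 8 / 1000

for mbps in (4, 6, 8):  # the ballpark per-client figures mentioned above
    print(f"{mbps} Mbit/s -> {gb_per_hour(mbps):.1f} GB per hour per viewer")
# 4 Mbit/s -> 1.8 GB per hour per viewer
# 6 Mbit/s -> 2.7 GB per hour per viewer
# 8 Mbit/s -> 3.6 GB per hour per viewer
```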
I don’t have a Jellyfin server, but 1MB/s (8mbps) for each person watching 1080p (3.6Gb per hour of content for each file) seems reasonable. ~3MB/s (24mbps) upload and as much download should work.
1mbps is awfully low for 1080. Or did you mean megabyte rather than megabit?
I had a hunch that writing the actual upload/download speed rather than mbps was probably wrong. My bad, my internet provider lingo is rusty.
Gotcha. Typically lowercase b=bit and uppercase B=Byte, but it’s hard to tell what people mean sometimes, especially in casual posts.
Come to think of it, I messed up the capitalization too. Should be a capital M for mega.
Why don’t people use Mb/s and MB/s, which makes it so much clearer what you’re talking about?
Back in the day, the rule was mbit (megabit) for data in transit (network speed) and MB (megabyte) for data at rest, like on HDDs.
So mbit/s instead of Mbit/s? But the M in mega is always capitalized; it’s only the k in kilo that’s lowercase.
but why?
Bigger number sounds better for the ISP.
The real answer?
Data is transmitted in packets. Each packet has a header and a payload; the total data transmitted is header + payload.
If you’re transmitting smaller packets, your headers make up a larger percentage of the total data sent.
Measuring in megabits is the ISP telling you “look, your connection is good for X amount of data. How you choose to use that data is up to you. If you want more of it going to your packet headers instead of your payload, fine.” A bit is a bit is a bit to your ISP.
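To put rough numbers on that, here’s a small sketch of header overhead for plain TCP over IPv4, assuming minimal 20-byte IP and 20-byte TCP headers and ignoring link-layer framing:

```python
# Fraction of each packet eaten by headers for TCP over IPv4,
# assuming minimum 20-byte IPv4 and 20-byte TCP headers (no options)
# and ignoring link-layer framing such as Ethernet.

HEADER_BYTES = 20 + 20  # IPv4 header + TCP header, minimum sizes

def header_overhead(payload_bytes: int) -> float:
    """Fraction of the transmitted packet taken up by headers."""
    return HEADER_BYTES / (HEADER_BYTES + payload_bytes)

for payload in (1460, 536, 100):  # full-size MSS vs. smaller payloads
    print(f"{payload:4d}-byte payload -> {header_overhead(payload):.1%} header overhead")
# 1460-byte payload -> 2.7% header overhead
#  536-byte payload -> 6.9% header overhead
#  100-byte payload -> 28.6% header overhead
```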
@Moneo @SigHunter Networking came about when there were lots of different implementations of a ‘byte’. The PDP-10, for example, was prevalent at the time the internet was being developed, and it supported variable byte lengths of up to 36 bits per byte.
Network protocols had to support every device regardless of its byte size, so protocol specifications settled on bits as the lowest common unit size, while referring to 8-bit fields as ‘octets’ before 8-bit became the de facto standard byte length.
The best format imo is MB/s and Mbit/s.
It avoids all confusion.
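The whole confusion boils down to a factor of 8, so here’s a trivial sketch of the conversion (decimal prefixes, as network gear uses):

```python
# Mbit/s (what ISPs advertise) vs. MB/s (what download dialogs show).
# 8 bits per byte, decimal (SI) prefixes on both sides.

def mbit_to_mbyte(mbit_per_s: float) -> float:
    """Convert a line speed in Mbit/s to MB/s."""
    return mbit_per_s / 8

def mbyte_to_mbit(mbyte_per_s: float) -> float:
    """Convert a transfer rate in MB/s to Mbit/s."""
    return mbyte_per_s * 8

print(mbit_to_mbyte(35))  # 35 Mbit/s upload -> 4.375 MB/s
print(mbyte_to_mbit(1))   # 1 MB/s stream   -> 8.0 Mbit/s
```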
How expensive is your internet? If it’s cheap, go overkill and don’t worry about it.