In a nutshell, you reduce a video's file size by storing as little data as possible. That process is video compression, and there are several standard methods called codecs [COmpressor/DECompressor]. There are different implementations of the same codec, some better than others, and some specialized for particular hardware like GPUs, but they all comply with the standard.

You can also reduce the frame size -- the *native or original* size of the picture you see -- which *might* become more common in the future as Microsoft adds gaming tech to Windows that enlarges what you see on the fly. The idea is that if you don't have the hardware to play a given game at 1080p or higher resolution with quality settings in the mid to high range, maybe you can do it at a lower resolution and let the tech enlarge the results to 1080p.

And of course you can use fewer frames, for example 24 fps [frames per second] rather than 30 or even 60 fps. That's something else that *may* become more common with recent tech that adds [generates] frames on the fly.

Finally there's an older method of reducing video file size: anamorphic video. It was first popularized with video DVDs, though a version was also developed when HD was young. It works by storing a smaller video frame with a distorted [squeezed] picture that the player automatically widens or stretches back to the proper shape.
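If you want to try the frame-size and frame-rate reductions yourself, ffmpeg can do both in one pass. This is just a sketch -- the filenames are hypothetical, and it assumes an ffmpeg build with libx264:

```shell
# Shrink 1080p/60fps down to 720p/24fps in one pass (hypothetical filenames):
#   -vf scale=1280:720   reduces the frame size
#   -r 24                reduces the frame rate to 24 fps
#   -c:v libx264         re-encodes the smaller, slower-paced video
#   -c:a copy            leaves the audio untouched
ffmpeg -i input_1080p60.mp4 -vf "scale=1280:720" -r 24 -c:v libx264 -c:a copy output_720p24.mp4
```

Note that 1280x720 is exactly two-thirds of 1920x1080 in each dimension, so the picture keeps its shape.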
Modern codecs work by storing the minimum information necessary for software to reconstruct each video frame. [That software can be in a video player app or in a chip's firmware in, say, a Blu-ray player.] How much of that information gets stored depends on the bitrate -- the data in a video file has to be transferred or fed at a certain minimum rate to view the video in real time without dropped frames, so the bitrate serves as a limit or cap on how much data a video file can hold. As you reduce the bitrate, less data is stored, and less data means lower quality in the picture you see. As you reduce the bitrate close to the bare minimum you'll start to see random noise, which is more apparent on a more or less solid background -- I used to like to check a clip with sand, which really seemed to set this off. You'll also start to see blocks, where instead of one color gradually blending into another you get an abrupt change or cutoff.
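To see why the bitrate acts as a cap on file size, a bit of back-of-the-envelope shell arithmetic helps (the 2-hour / 5 Mbit/s figures below are just example numbers, not a recommendation):

```shell
# size (MB) ~= bitrate (Mbit/s) * duration (s) / 8 bits-per-byte
duration=7200   # a 2-hour film, in seconds
bitrate=5       # video bitrate in Mbit/s (example figure)
size_mb=$(( duration * bitrate / 8 ))
echo "${size_mb} MB"   # 4500 MB, about 4.5 GB, before audio is counted
```

Halve the bitrate and you halve the file size -- along with, past a certain point, the picture quality.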
Now, if you've got a video of an auto race you're probably safe using one set bitrate for the entire video -- nothing much is changing in the *overall* scene -- but the typical film OTOH has both quiet scenes and scenes with much more action and/or movement. Ideally you'd use a lower bitrate for the quiet parts, reserving a much higher bitrate for the action shots. You also need a higher bitrate for certain content -- smoke, for example, just doesn't compress well. While a behemoth like Netflix can encode each scene separately, using settings optimized for each one, you'd have a hard time doing something like that yourself -- just combining the separate clips would be a big hassle, not to mention cutting the video into one clip per scene. An AV1 encoder can do that for you automatically, to a lesser extent than what Netflix achieves, but better than what you'd get using a constant bitrate with H.264/AVC. To get the best results from ffmpeg, however, you'll likely need a GUI front end or the command line itself for finer control over the settings.
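As a sketch of the difference, here's a constant-bitrate H.264 encode next to an AV1 constant-quality encode. The filenames are hypothetical, and this assumes ffmpeg builds with libx264 and SVT-AV1 [libsvtav1]; the ffmpeg wiki page has the full range of AV1 options:

```shell
# Constant bitrate with H.264: every scene gets the same 4 Mbit/s budget, busy or quiet.
ffmpeg -i input.mp4 -c:v libx264 -b:v 4M -c:a copy h264_cbr.mp4

# AV1 in constant-quality mode: bits shift toward the scenes that need them.
# -crf 30 is a mid-range quality target (lower = better quality, bigger file);
# -preset trades encoding speed against compression efficiency.
ffmpeg -i input.mp4 -c:v libsvtav1 -crf 30 -preset 8 -c:a copy av1_cq.mp4
```

In constant-quality mode you give up predicting the exact file size, but the quiet scenes stop wasting bits and the smoke gets what it needs.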
trac.ffmpeg[.]org/wiki/Encode/AV1
If anyone wants to go a bit more in depth experimenting with those settings, finding a compromise between the quality you'd like and the time you're willing to spend actually doing the encoding, Netflix provided the community with a torture video of sorts, including the types of scenes they've had problems encoding. Since they come as .mxf files you'll probably want to convert them to something more friendly -- Google for directions -- you can use DaVinci Resolve or ffmpeg, etc.
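A one-line ffmpeg conversion works for that; the input name below is just a placeholder for one of the actual .mxf files, and the AAC audio choice is my assumption -- adjust to taste:

```shell
# Convert an .mxf test clip to a friendlier MP4 (placeholder filename).
# -crf 18 keeps the H.264 video close to transparent, so the converted
# clip stays useful as a source for your own encoding tests.
ffmpeg -i clip.mxf -c:v libx264 -crf 18 -c:a aac out.mp4
```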
media.xiph[.]org/video/derf/meridian/MERIDIAN_SHR_C_EN-XX_US-NR_51_LTRT_UHD_20160909_OV/