DVD & Blu-Ray video discs can be complicated, as in hard -- most everything else about video is complicated in a different way, as in there's loads & loads of stuff to read & learn in every direction. Lots of people want quick answers -- unless your question is very specific, e.g. will VLC play this, you won't find an easy answer to most anything video related, & if you do, it's usually only partly correct at best. That means there's a lot of misinformation on-line, & because it's quicker & easier to just accept it, that misinformation can become quite popular. Here are some of the basics for working with video. It's oversimplified -- my goal was to provide enough info that if something really interests you, you'll be able to pursue that specific topic & learn more. OTOH even at a somewhat simplified level, because I try to cover a lot of ground, I think this borders on overkill -- I tried to break it up so it's easier to ignore the stuff that doesn't matter to you.
To start with, video is nothing more than a string of images, a digital flipbook if you will. It's always compressed because you couldn't deal effectively with the mass of data in reading 30 .tif image files a second. All video needs a decoder to uncompress it, an encoder to compress it, a splitter to separate the audio & video for processing, often one or more filters in between input & output, and a renderer to display the video as it's played. GraphEdit, GraphStudio, & GraphStudioNext are examples of apps that you can use to see just what's involved in playing a video in your installed copy of Windows -- they can also be decent troubleshooting tools, &/or let you manipulate audio & video in ways that your software may not be able to. All video is stored in some sort of container file -- you can often change the container file or rewrite it without altering the contents. Video & audio data are always separate, but they can be combined in the same container file -- putting them together is called muxing, splitting them apart is demuxing. Content can also be in a nested container arrangement, e.g. an audio file [in its own container] muxed into an .avi or .mpg container, &/or .m2v [mpg2] video & .ac3 audio files together in a DVD VOB file.
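Just to make the "mass of data" point concrete, here's a rough back-of-the-envelope sketch in Python -- the frame size, bit depth & frame rate are only example numbers:

    # Rough math: why uncompressed video is unwieldy.
    # Example numbers only -- a standard NTSC DVD-sized frame, 8 bits per channel RGB.
    width, height = 720, 480
    bytes_per_pixel = 3          # 8-bit R, G & B
    fps = 30

    bytes_per_frame = width * height * bytes_per_pixel
    bytes_per_second = bytes_per_frame * fps
    print(f"{bytes_per_second / 1_000_000:.0f} MB every second")         # ~31 MB/s
    print(f"{bytes_per_second * 3600 / 1_000_000_000:.0f} GB per hour")  # ~112 GB/hour

And that's only standard definition -- at 1920 x 1080 the numbers are roughly 6x bigger, which is why everything you actually store or play is compressed.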
In the old days everything about video in Windows was VFW [Video For Windows], a standard way for things like codecs [COmpressor/DECompressor] to work inside video apps. DirectShow is its replacement, though VFW is still around. With VFW it's difficult to have more than one codec for the same format, mjpeg for example -- with DS [DirectShow] you can have several, & they all compete. Everything in DS competes. A DS file with a specific function [called a DS filter] has registry entries that tell Windows what it can do, often incorrectly BTW [it's not uncommon for them to lie about what they can do]. Those registry entries also include a DS filter's merit, which is a ranking of sorts... When software using DS opens a video file it tries to build a graph, which is a chain of the DS filters necessary to open & usually play that file. When there's more than one filter for a particular job, the software will choose those with the highest merit 1st. If one combination of filters doesn't work the software will try another, & if that doesn't work another, and so on. DS filters can conflict with one another, & filters that were tried but not used can remain open, increasing the potential for conflict. You can alter DS filter merit, &/or disable filters, to get things working better, or just to get them working when things break.
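If it helps, the merit idea boils down to something like the sketch below -- this is purely illustrative Python, not the real DirectShow API, & the names, formats & merit numbers are made up for the example:

    # Illustrative only -- not real DirectShow calls, just the merit idea.
    # Each registered filter advertises what it claims to handle plus a merit value.
    filters = [
        {"name": "Vendor A MPEG-2 decoder", "handles": "mpeg2", "merit": 0x00800001},
        {"name": "Vendor B MPEG-2 decoder", "handles": "mpeg2", "merit": 0x00600000},
        {"name": "Some AVC decoder",        "handles": "avc",   "merit": 0x00600000},
    ]

    def try_filter(f):
        return True                        # stand-in; real filters can & do fail here

    def build_graph(stream_type):
        # Try candidates highest merit first, the way the paragraph above describes;
        # a real graph builder also chains splitters, decoders & renderers together.
        candidates = [f for f in filters if f["handles"] == stream_type]
        for f in sorted(candidates, key=lambda f: f["merit"], reverse=True):
            if try_filter(f):              # pretend connection attempt
                return f
        return None                        # no combination worked

    print(build_graph("mpeg2")["name"])

Tools like GraphStudioNext basically show you that same list & let you change the merit values or pull filters out of the running.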
Every time you (re)encode video you lose quality -- every time... it's called generational loss. And that's an overriding concern whenever you're working with video. Today quality can be more important than ever: we use higher resolution displays than ever before, & even new models of tablets & cell phones brag about resolutions & clarity that were undreamed of on the best monitors & TVs a few years ago. To get the results many expect nowadays it's common practice to start with a higher resolution than you need, to provide a cushion of sorts -- popular, standard-sized video on YouTube starts out as HD, while HD versions start out at the highest quality possible for whoever's doing the shooting. So start out with more resolution, a larger frame size than you need, then plan your workflow to minimize generational loss.
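If you're curious how fast that loss piles up, you can measure it on a single frame -- the sketch below uses Pillow & numpy, re-saves one image as JPEG a few times & prints the PSNR against the original; "frame.png" & the quality setting are just placeholder examples:

    # Sketch: watch generational loss by re-saving one frame as JPEG repeatedly.
    # Needs Pillow & numpy; "frame.png" is a hypothetical source image.
    import io
    import numpy as np
    from PIL import Image

    def psnr(a, b):
        mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
        return 10 * np.log10(255.0 ** 2 / mse)

    original = Image.open("frame.png").convert("RGB")
    ref = np.asarray(original)

    img = original
    for generation in range(1, 6):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=85)   # one re-encode = one generation
        buf.seek(0)
        img = Image.open(buf).convert("RGB")
        print(f"generation {generation}: PSNR {psnr(ref, np.asarray(img)):.2f} dB")

Real video encoders behave differently from JPEG, but the direction is the same -- the numbers only ever get worse, never better.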
If you don't have a lot of extra quality to spare, or if you just want to keep every bit you can, check out frame serving -- the results from one process or app are sent directly to another app without creating an encoded file on disk as an intermediate step. It can also save both disk space & time, but mileage varies. If your original source is marginal, see if you can record directly into the final format you want to use. You can do a limited amount of enhancing &/or restoration, but not nearly as much as you can with still images, so try not to plan on making up for generational loss through filtering. That said, PC/laptop graphics hardware has come a long way, & routinely enhances the video you play... in some circumstances you *might* get better quality doing a screen capture of playing video.
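The same no-intermediate-file idea can be sketched with two processes & a pipe -- this isn't a true frameserver like Avisynth, & ffmpeg isn't something I've talked about above [assume it's installed & on your PATH, & the file names are hypothetical], but it shows decoded frames going straight from one process to the next without an encoded file landing on disk:

    # Sketch of the frame-serving idea: two processes joined by a pipe.
    import subprocess

    # Process 1 decodes to raw frames & writes them to a pipe instead of a file...
    decode = subprocess.Popen(
        ["ffmpeg", "-i", "source.avi", "-f", "yuv4mpegpipe", "pipe:1"],
        stdout=subprocess.PIPE,
    )
    # ...process 2 reads those frames from the pipe & encodes the final file.
    encode = subprocess.Popen(
        ["ffmpeg", "-f", "yuv4mpegpipe", "-i", "pipe:0", "-c:v", "libx264", "out.mp4"],
        stdin=decode.stdout,
    )
    decode.stdout.close()   # let the decoder see a broken pipe if the encoder dies
    encode.wait()

The raw frames only ever exist in memory, so there's no extra encode/decode generation & nothing extra written to disk.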
While I'm talking about hardware, however briefly, try not to enlarge video -- the hardware you're using to play that video can do it on the fly more efficiently, & most likely with better quality. Go ahead & test it if you want. OK, in theory a slower than real time, high quality, bi-cubic resize with Avisynth might do a better job, if only slightly, but then you've got generational loss from re-encoding taking away that improvement. In many cases you can use hardware scaling capabilities to your advantage.
The bigger the video frame the more pixels it holds, which means handling more data. That data has to be moved, stored, read, & processed before you see the video playing. You may want or have to limit the amount of data for any or all of those 4 reasons, & then you have 2 choices, increase the amount of video compression or reduce the frame size. Both choices throw out some of your original data, reducing quality, but in practice when you reduce the frame size you can get some [not all] of that quality back via upscaling in hardware. Super Video CDs [SVCDs] gave you *almost* DVD quality in 700 MB instead of 4 GB, in large part by using a 480 x 480 frame rather than 720 x 480. Blu-Ray on DVD can do the same thing using frame sizes of 1280 x 720 or 1440 x 1080 instead of 1920 x 1080. That's not to say you should reduce the frame size when you don't have to.
The amount of data a video file or stream holds per second is referred to as its bandwidth or bit rate -- that bit rate times the duration of the video determines the file size. You usually increase the amount of video compression by reducing the bit rate in the encoder settings; sometimes you use a quality setting instead, and with some encoders you have your choice, being able to set either a target bit rate or a quality level.
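The arithmetic is simple enough to sketch -- the bit rates & duration below are only example numbers:

    # File size follows directly from bit rate x duration (bits / 8 = bytes).
    # Example numbers: a 2 hour movie at 4500 kbps video + 448 kbps AC3 audio.
    video_kbps = 4500
    audio_kbps = 448
    seconds = 2 * 60 * 60

    total_bytes = (video_kbps + audio_kbps) * 1000 / 8 * seconds
    print(f"{total_bytes / 1_000_000_000:.1f} GB")   # ~4.5 GB -- fits a single layer DVD

Run it backwards & you get the other everyday use: divide the space you have by the running time & you know what bit rate you can afford.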
The format(s) you choose to encode your video to matter, because while all encoding loses quality, some codecs throw away a lot more data than others. Whenever possible, for recording & for any intermediates, use codecs that store every frame of video whole -- final distribution formats like AVC only store occasional full frames, & then record what changes in all the rest. An intermediate video file is for when you can't do everything you want/need in one app, so you save a video in as lossless a format as you can, then open that file in your 2nd (or 3rd) app to continue working with it. I like the free UT Video, though many others use a free codec designed just for this sort of thing from Avid. Something else to look out for is color conversions...
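As one possible recipe [ffmpeg isn't required & isn't covered above -- it's just a common free tool, & the file names here are hypothetical], this is roughly what writing a UT Video intermediate looks like:

    # One way to write a UT Video intermediate (assumes ffmpeg is installed).
    # Every frame is stored whole & losslessly, so it cuts & edits cleanly.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "capture.mp4",
        "-c:v", "utvideo",          # the lossless intermediate codec mentioned above
        "-c:a", "pcm_s16le",        # keep the audio uncompressed too
        "intermediate.avi",
    ], check=True)

The file will be huge compared to the source -- that's the price of not losing another generation while you work on it.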
Different formats &/or codecs store color data in different ways -- color data may be stored separately from picture [brightness] data, can be in different color spaces, & can be stored based on different ways of measuring or analyzing color. Your monitor shows you images based on Windows using the familiar RGB [Red Green Blue] data. Video may store RGB values, or more commonly it may use YUV, with colors being translated &/or transformed when you watch it on your monitor. What you want to watch out for is software converting data to RGB & back to YUV, sometimes more than once during processing. If possible you also don't want to re-encode video from one format to another when the two use different methods of storing color. You usually won't lose much in the conversion process, but repeated conversions add up, & if you can avoid it entirely so much the better.
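To show why those round trips aren't free, here's the full-range BT.601 RGB-to-YCbCr math [what's loosely called YUV is usually YCbCr; DVDs etc. actually use a limited-range variant, touched on a couple paragraphs down] -- the rounding to 8-bit values is where the small, cumulative losses come from:

    # Full-range BT.601 RGB <-> YCbCr; rounding is what makes round trips lossy.
    def rgb_to_ycbcr(r, g, b):
        y  =  0.299 * r + 0.587 * g + 0.114 * b
        cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
        cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
        return round(y), round(cb), round(cr)

    def ycbcr_to_rgb(y, cb, cr):
        r = y + 1.402 * (cr - 128)
        g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
        b = y + 1.772 * (cb - 128)
        return tuple(max(0, min(255, round(v))) for v in (r, g, b))

    print(rgb_to_ycbcr(30, 200, 120))                   # (140, 117, 50)
    print(ycbcr_to_rgb(*rgb_to_ycbcr(30, 200, 120)))    # (31, 199, 121) -- not exact

One pass like that is invisible; a handful of them during capture, editing, encoding & playback is how colors & gradients slowly drift.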
Without going into a lot of tech details -- you can read more here if you want http://en.wikipedia.org/wiki/Chroma_subsampling -- watch out for numbers like 4:2:0 & 4:2:2 etc. in your encoder settings or dialogs... oversimplified, try not to go from one codec with one set of those numbers to a codec with another set.
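Those numbers describe how much of the color data is kept -- 4:2:0, for example, keeps every brightness sample but only one averaged color sample per 2x2 block, which is roughly what this numpy sketch does [example sizes only]:

    # What 4:2:0 chroma subsampling amounts to: full-resolution luma,
    # one averaged chroma sample per 2x2 block.
    import numpy as np

    h, w = 480, 720
    y  = np.random.randint(0, 256, (h, w))                       # luma, kept whole
    cb = np.random.randint(0, 256, (h, w)).astype(np.float64)    # one chroma plane

    # Average each 2x2 block of chroma down to a single sample.
    cb_420 = cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    print(y.shape, cb_420.shape)   # (480, 720) vs (240, 360) -- 3/4 of the chroma is gone

Going from a codec that stores 4:2:0 to one that stores 4:2:2 [or back] means resampling that chroma yet again, which is exactly the sort of small, repeated loss to avoid.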
Watch out for the color space -- if you want you can read: http://en.wikipedia.org/wiki/YCbCr -- http://en.wikipedia.org/wiki/Rec._601 . Briefly, a 1/2 century or so ago they found that they couldn't broadcast the full range of color over the airwaves for TV, so they came up with an abbreviated set, which was further reduced in the US. IMHO you *should* be able to almost completely ignore it today, but the video software & hardware in use won't always let you. I wouldn't advise you to worry about converting video you shoot or create into that reduced color set or space [unless you're sending it to a broadcast TV studio], but I will caution you to watch out for hardware & software doing that same conversion for you, sometimes repeatedly, usually unannounced.
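The conversion itself is just a squeeze of the 0-255 range into the "studio" 16-235 range [16-240 for chroma] -- a quick sketch so you can recognize it when something does it for you:

    # Full range (0-255) vs. "studio"/limited range (16-235 luma).
    # Done once it's survivable; done repeatedly, or by accident in both
    # directions, it washes out or crushes the picture.
    def full_to_limited_luma(y):
        return round(16 + y * 219 / 255)

    def limited_to_full_luma(y):
        return max(0, min(255, round((y - 16) * 255 / 219)))

    print(full_to_limited_luma(0), full_to_limited_luma(255))    # 16 235
    print(limited_to_full_luma(16), limited_to_full_luma(235))   # 0 255

If your blacks look grey & your whites look dingy, odds are something applied that squeeze one time too many [or a player expected one range & got the other].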
One problem is video standards that are still in use but were developed based on analog broadcast TV, & somewhat arbitrarily, on the hardware one company made to convert those analog signals to digital & back again. Newer standards are based on & include older standards, plus, to make sure they're widely accepted & adopted, newer video standards were made overly inclusive -- if you don't include everything, the companies that are left out can object, so you include everything. Lots of companies, engineers, & developers go out of their way to incorporate video standards that practically speaking don't matter to you, while many other companies & devs don't, so you're stuck in the middle. This issue of reduced color space is so pervasive that the newest version of AMD's Catalyst Control Center, where you adjust how your video card/chipset works, lets you limit the colors whenever you're watching video -- I can't think of a valid reason why.
There are a couple of other things that can be hard to avoid... One is the issue of so-called non-square pixels, which is a concept inherited from when a certain company got involved in creating the standards [as noted above]. The other is Interlacing. An optional 3rd is IVT [Inverse Telecine]. The whole thing about non-square pixels is really very, very involved, so I'll risk avoiding explanation as much as I can. The other 2 are easy.
Hardest 1st, the picture for a standard TV with a picture tube is usually said to be 720 x 480 [NTSC] or 720 x 576 [PAL] -- you'll also see specs where it's a little taller than that, but this is the standard used by most software, for DVDs etc. Picture tubes don't have pixels -- they're coated on the inside with a compound that glows when hit by a beam of electrons shooting across the face of the tube horizontally, line by line. That's how TV resolution was/is measured BTW, & you'll still see references to the number of scan lines with different CRT tubes, e.g. commonly with [cheaper?] security cameras & their monitors. Since there's nothing digital about the signal that CRTs accept & use, you can arbitrarily set whatever digital specs you want -- the important thing is the hardware that takes that digital data & converts it to the analog signal the CRT needs. The digital standard originally was only important so PC software & capture hardware agreed on what frame size they were going to use -- later on that standard came to be used by studio equipment, cameras etc. whenever you were dealing with anything digital.
I wanted to start by emphasizing the arbitrary aspects of the standard to hopefully make what follows easier to swallow. Standards-based 720 x 480 or 576 video displayed on your PC *unaltered* will look distorted. If it looks undistorted on your PC, it will look distorted on a TV. Burn it to a disc, and it'll look distorted on your HDTV when you play it in your DVD/Blu-Ray player. DVD player software alters the video so it appears on your PC in the correct proportions or aspect ratio. When video software detects 720 x 480 or 576 video it may or may not alter the display for you, & it probably won't tell you it's being altered. Add still images, e.g. for a slideshow or DVD menu, & things can really get interesting... software may or may not alter the aspect ratio of your still(s) to match the standard, & they may change the aspect ratio of the display or preview window so things look right, or they may not.
Skip this & the next paragraph if you want... An oversimplified explanation is that the 720 pixel width includes the overscan area at both sides of the video that's hidden from view by the TV bezel. Remove that overscan & you get roughly 704 pixels NTSC. 704 non-square pixels NTSC is roughly the equivalent of 640 square pixels. The reason it came about was that capture or digitizing hardware sampled a broadcast TV signal at a certain frequency & the result was stored at 640 x 480. Newly designed hardware used a higher sampling frequency, but that extra data was useless if stored in the same 640 x 480 frame. So the solution they came up with was to use a wider, 720 width frame to store the same picture but at a higher resolution -- because there were only so many lines in the analog signal the height couldn't be increased. To explain why it looked stretched they invented the concept that the pixels making up that picture were no longer square. There is math behind all of this, but it's been mostly ignored for a decade+ -- folks just use 720, 704, & 640, with frames around 655 width sometimes thrown in as the square pixel equivalent of 720 NTSC. In fact, the video world got tired enough of this whole mess that when they invented 16:9 video, e.g. for DVDs, no one bothered making up a standard square-pixel frame size! Use whatever your software uses.
In practice I've found it useful to take screenshots of standard DVD sized video, especially widescreen, using DVD player software like PowerDVD that alters the display to show the proper aspect ratio -- importing that still into an image editor, I can use the re-sizing windows or dialogs to resize the image however I want while maintaining the aspect ratio. That gives me the frame dimensions I need when re-sizing or cropping etc. Converting square to non-square, & the reverse, you can normally get close enough just resizing 640 x 480 to 720 x 480 & the reverse. Widescreen, 16:9 DVDs always use standard 720 width video, which is the wider picture squeezed to 720 width -- it's called anamorphic video, & it's also available in other formats like .wmv & .mp4 -- when played, the results depend on whether the player understands anamorphic video for that video format.
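For the record, the everyday arithmetic behind all that is short enough to write down -- people argue endlessly over the exactly "right" numbers, these are just the common, close-enough ones:

    # The everyday square-pixel arithmetic for standard 720-wide NTSC DVD video.
    storage_w, storage_h = 720, 480

    # 4:3 disc: display it at a 4:3 shape, keeping the height.
    display_w_43 = round(storage_h * 4 / 3)        # 640 -> show 720x480 as 640x480

    # 16:9 (anamorphic) disc: the same 720x480 on the disc, stretched wider on playback.
    display_w_169 = round(storage_h * 16 / 9)      # 853/854 -> show it as ~854x480

    print(display_w_43, display_w_169)

For PAL, swap in 576 for the height & the same two ratios give you the usual 768 & 1024 widths.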
Interlaced video is simpler in that it at least makes sense... Broadcast TV airwaves could only transmit so much data semi-reliably. Worse, in the US those same signals were carried for thousands of miles by wires strung on telephone & utility poles, so even more of the signal was lost. The answer was to do 2 things -- 1st they cut the signal in 1/2, then later, when color TV came out, they reduced the number of colors. To cut the signal in 1/2 they developed a system where only every other line was scanned on the TV picture tube at a time -- 60 times a second, because the AC current at our power outlets cycles 60 times a second, 1/2 the lines are scanned. All the even numbered lines are scanned, then the odd numbered lines, & since the picture you see is the result of the compound on the inside of the picture tube glowing, it stays glowing for that 1/60 sec. when the alternate lines are being scanned. Thus NTSC video is 60 fields a second, or 30 frames a second [though through the vagaries of everything video it's actually 29.97 fps, which is what you'll see in software].
The explanation was necessary I think to convey the concept that every 1/60 sec. 1/2 of the lines have absolutely no data. It's not a half frame where the height's reduced, but a full sized frame where 1/2 the data's completely missing. There's no way to show that in software. Instead what you're shown is either a blend, or 2 alternate fields superimposed on one another -- since there's a 60th of a second worth of movement from one field to the next, whatever's shown by those superimposed lines will not always line up. The easiest way to handle interlaced video is leave it alone -- PC monitors, HDTVs, TVs all display it just fine. If you do decide to edit interlaced video, be aware that there are 2 possible field orders, odd or even, & that getting this wrong will make your video stutter. The best way to tell is to go through the frames one by one at a slower than normal playback speed looking for stuttering, but beware your software or graphics hardware may automatically blend interlaced frames, making it near impossible to tell, so turn that off or use another app.
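If it helps to see what a field actually is, this numpy sketch [example frame size only] splits a frame into its two fields & weaves them back together:

    # An interlaced "frame" is really two fields, each holding every other line.
    import numpy as np

    frame = np.random.randint(0, 256, (480, 720))   # stand-in for a decoded frame

    top_field    = frame[0::2]    # lines 0, 2, 4, ... (240 lines, one 1/60 sec moment)
    bottom_field = frame[1::2]    # lines 1, 3, 5, ... (the next 1/60 sec)

    # Weave: interleave the two fields back into one full-height frame.
    woven = np.empty_like(frame)
    woven[0::2] = top_field
    woven[1::2] = bottom_field

    print(np.array_equal(frame, woven))   # True -- weaving alone loses nothing;
    # the "combing" you see comes from motion between the two fields, not the weave.

Getting the field order wrong is the software equivalent of swapping top_field & bottom_field above -- every frame then shows time running backwards for 1/60 sec., which is the stutter described.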
Reduce the frame size and software will take the complete frame, average it out, & reduce the size of the result -- running a separate operation to de-interlace video when you're also reducing the frame size is normally senseless. Enlarging an interlaced frame works the same way, blend the fields & then enlarge, but it doesn't work nearly as well as reducing the size, & since display hardware up-sizes so well, there's usually no reason to even attempt it. The blending step of de-interlacing adds blur, which you'll then upsize to make it more apparent, then add whatever generational loss from re-encoding. Nowadays you'll run into interlaced standard sized video [720 x 480 or 576], but you'll also see it with 1080i. Sometimes it's fake, reported by a 1080p file or stream for compatibility, & sometimes it's a way to capture extra data with a video camera, e.g. 60 fps of interlaced data averaged out to 30 fps in editing. With cable TV 1080i is an opportunity to reduce the bandwidth consumed, leaving more bandwidth available for money making PPV.
IVT is even easier... NTSC video is 29.97 fps. Movie cameras using film traditionally shoot at 24 fps. Digital movie cameras aren't limited to 24 fps, but are often used at that frame rate because that's what viewers seem to prefer. To make 24 [or 23.976] fps video work with the NTSC standard, in-between frames are artificially added, either by repeating existing frames [or fields] or by creating new frames based on averaged picture data. Retail movie DVDs use pulldown, which is just flags inserted into the video file saying which fields to repeat when. Removing that pulldown is trivial when you want to convert DVD video, but when you have actual frames inserted, whether they're duplicates or completely new, averaged frames, you need IVT or Inverse Telecine [often written IVTC] to remove them. Mileage can vary a Lot using IVT, & the need is based on your preference as much as anything else, though it does reduce file size. At any rate there's a decent chance you'll see it as an option in software, & if you didn't know what it was, now you do.
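The repetition being undone is the classic 2:3 pulldown pattern -- 4 film frames become 10 fields, i.e. 5 video frames. A quick sketch [real pulldown also deals with field order & the repeat flags, this only shows the pattern]:

    # The classic 2:3 pulldown pattern: 4 film frames -> 10 fields -> 5 video frames.
    film_frames = ["A", "B", "C", "D"]

    fields = []
    for i, f in enumerate(film_frames):
        copies = 2 if i % 2 == 0 else 3      # 2 fields, then 3, then 2, then 3...
        fields.extend([f] * copies)

    video_frames = [(fields[i], fields[i + 1]) for i in range(0, len(fields), 2)]
    print(video_frames)
    # [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]
    # -> frames 3 & 4 mix two different film frames; inverse telecine spots that
    #    repetition & gets you back to 4 progressive frames for every 5 encoded ones.

That's also why IVT reduces file size -- you're throwing away 1 frame in 5 that never held any new picture to begin with.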
A couple of things that may come up converting video to a smaller frame size... If the original & the resize process are both exceptionally sharp, watch for a shimmering effect, most noticeable in my experience on horizontal edges in bright outdoor city scenes. And something much more common if/when you're using a lot of video compression [i.e. low bit rate], look for blocky artifacts on almost uniform background surfaces or colors. For the 2nd you can of course increase the bit rate [reduce the amount of compression], but I've fixed both sorts of problems unintuitively by adding a bit of noise. Rather than use a filter, though you could easily enough, I've just encoded an intermediate mjpeg video file at slightly lower quality settings. I think that with the added noise the encoder was far less inclined to group blocks of similarly colored pixels together. Yes, the whole picture was degraded slightly, but I found the overall effect much less distracting.
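If you'd rather do the noise trick as a filter instead of a lower-quality mjpeg intermediate, it amounts to something like this numpy sketch [the frame here is a stand-in & the noise level is just an example]:

    # The "add a touch of noise" trick as a filter, sketched with numpy.
    import numpy as np

    frame = np.random.randint(0, 256, (480, 720, 3)).astype(np.float64)  # stand-in frame

    noise = np.random.normal(0, 2.0, frame.shape)         # sigma around 2 is barely visible
    dithered = np.clip(frame + noise, 0, 255).astype(np.uint8)

    # Feed "dithered" frames to the encoder; the slight grain discourages it from
    # smearing near-uniform areas into large, obviously blocky regions.

Either way the idea's the same -- a tiny bit of evenly spread degradation in exchange for losing the distracting artifacts.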
Formats...
You have to encode to something -- even originally, if you're shooting the video -- & even the "uncompressed" video codec that comes with some versions of Windows is still a compression format, & not a very good one at that.
At the moment AVC [also called H.264] rules, but plenty of folks & companies would like to change that, so don't be surprised if from time to time in the next year or so you see lots of hype about the newest & now greatest format. You'll also hear more & more about 4K video -- a higher resolution almost no one can actually display. 4K is the hype successor to 3D -- despite so many movies premiering in theaters in 3D, people just haven't been buying 3D HDTVs for their homes, so marketers need some new incentive to get you to buy stuff.
AVC is complicated... There are loads & loads of encoder settings that can be used, and with many of them the best setting or value depends on the scene being encoded. AVC also allows more pre & post processing than previous video formats, & that means hardware doing more work, more calculations, both during encoding & when the video's played back. Hardware that fails to play an AVC video well may work very well if/when you reduce the frame size or bit rate or both, giving it less work to do. And because of all that processing AVC is a poor source for editing &/or conversion -- avoid re-encoding AVC whenever possible.
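As one common example of those knobs [this uses ffmpeg with the x264 encoder -- assume it's installed, & the file names & numbers are only examples], the preset trades encoding time for how well all those per-scene decisions get made, while the CRF & frame size control quality & how hard playback hardware has to work:

    # Example AVC encode via ffmpeg/libx264 -- numbers are examples, not recommendations.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "master.avi",
        "-c:v", "libx264",
        "-preset", "slow",          # how hard the encoder works, not the quality target
        "-crf", "20",               # quality target; higher number = smaller file, lower quality
        "-vf", "scale=1280:720",    # optional: a smaller frame for devices that struggle with 1080p
        "-c:a", "aac", "-b:a", "160k",
        "out.mp4",
    ], check=True)

GUI converters expose the same ideas under different names, which is why two "H.264" files of the same size can look & play so differently.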
A fair amount of the time I think AVC video is transcoded or converted when an easier alternative would have been to change the video's container, &/or maybe change the audio format, &/or maybe change the player, whether software or hardware. If the video's in an on-line format like .flv, you may be able to just stick it in a more compatible container, e.g. .mp4 or .mkv. If an AVC file plays where you want it to play, maybe think twice before you re-encode it to take up less storage space -- sometimes the best answer may be to change how & where you store &/or play it rather than turning it into something else.
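Changing just the container looks something like this [again assuming ffmpeg, with hypothetical file names] -- the "-c copy" part means nothing gets re-encoded, so there's zero generational loss & it only takes seconds:

    # Change the container without re-encoding anything.
    # Works when the target container understands the audio & video inside the source.
    import subprocess

    subprocess.run(
        ["ffmpeg", "-i", "clip.flv", "-c", "copy", "clip.mp4"],
        check=True,
    )

Plenty of GUI tools [remuxers] do the same job; the point is that it's a copy operation, not a conversion.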
If you want &/or need to work with an AVC video source you may have to change the container it's in before an app will accept it, but your success can depend on the AVC video you want to swap as well as the software you want to use it with. Some software will only open AVC video using Avisynth [which requires an AVC decoder], & that's simple to use -- often you just cut & paste a couple of lines of script you found on-line into a new text file. If the software you want to use won't open the file you want it to work with, the best solutions I've found, regardless of format, are: 1) Google & test the methods & apps you find, & 2) use another app that will open the video, then save an intermediate video file. I often use Google, specifying the type of file & the software I want to use, because there are always new tools &/or tools that have been updated, & it can be really hard to keep up with all that, if it's possible at all. OTOH I've had situations where the stuff I found didn't work with a particular file, or took too long, or didn't have the quality I was looking for & so on. In those situations I use another app that will open the file [albeit sometimes after changing the container], & then save the video with the highest bit rate possible, in a format that's as near lossless as possible.
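Those couple of lines of Avisynth script usually look something like the string below -- written out from Python here just so it's concrete; DirectShowSource is one real Avisynth source filter, the path is hypothetical, & which source filter actually works best depends on the file & the decoders you have installed:

    # Write a minimal .avs that apps can open in place of the video file itself.
    script = 'DirectShowSource("D:\\video\\clip.mp4", audio=true)\n'

    with open("open_clip.avs", "w") as f:
        f.write(script)
    # Point the editing/encoding app at open_clip.avs instead of clip.mp4.

The .avs file is just text, so you can edit it in Notepad & swap in whatever source filter a guide recommends for your particular file.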
An alternative to AVC is mpg2, the same format used on DVDs. It's much easier to encode & play, meaning encoding takes much less time & playback requires much less processing horsepower. Mpeg2 playback is not guaranteed on cell phones or tablets -- you may have to install the VLC Mobile app or similar -- & since files are often larger you may have to split something like a movie into several, smaller files on a FAT32 microSD card. At the higher bit rates used on Blu-Ray discs mpg2 works surprisingly well, & can look just as good as AVC. Windows Media Video [& closely related VC1], Real Video, Xvid/mp4, WebM etc. are also alternatives but much less common -- because of that there's a lot less software, & that means not so much really good software.
Blu-Ray & AVCHD settings for AVC video files...
While the next frontier so-to-speak is 4K, the max video resolution commonly used is 1080p -- that's 1920 x 1080, progressive frames. 1080i [i for interlaced] delivers 1/2 the lines at a time, normally as 60 fields per second. Since most all screens capable of displaying 1080 are progressive anyway, I don't think the difference between interlaced & progressive is as pronounced as it is with standard 720 width video. In fact with broadcast I've come across video that appeared to be 1080p mis-labeled as 1080i -- x264 includes an option to falsely report 1080i as well.
The video on retail Blu-Ray discs is usually 1080p with a bit rate between 20 & 40 Mbps, usually encoded as AVC, rarely as VC1 [a type of .wmv], & while I've never seen it used, mpg2 is included in the spec. 1440 x 1080 is also included, along with 720p [1280 x 720] & the most common DVD frame sizes [720 x 480 or 576]. AVCHD discs are virtually identical to Blu-Ray, & playable in most Blu-Ray players -- Blu-Ray however has better compatibility & stricter specs. If you create & burn an AVCHD disc you usually need to [or should anyway] test it on whatever players you want to use the disc in -- if you stick to Blu-Ray spec [at least in theory] you shouldn't have to.
The exact Blu-Ray spec, like the DVD spec before it, is a secret. Most all of the info you'll find on-line about DVDs was either leaked or figured out through reverse engineering -- the same goes for Blu-Ray. That means that the video encoding settings or specs are secret too. Whatever is required, retail software that can encode Blu-Ray video uses specific Blu-Ray templates. The plants that produce retail Blu-Ray movie discs have special software that checks the video & everything else to make sure it's Blu-Ray compliant before they start making your discs. Most retail software for creating Blu-Ray movie discs includes a version of that compliance checking software built in, so if you try to import non-compliant video [video that doesn't pass that internal check] it will re-encode it using that software's Blu-Ray template.
And what that all means is that you should IMHO spend a little bit of time thinking about what your wants, needs, and goals are before spending hours & hours (re)encoding HD video. My feeling is that if you're going to encode HD video to one of the sizes in the Blu-Ray spec, *And* if you want to keep that video for a long time, maybe watching it years from now, *And* if it's not too much trouble, go ahead & encode it using a Blu-Ray template so it's in Blu-Ray spec. That way I feel you'll have better odds of being able to watch it in the future, if for no other reason than the sheer volume of Blu-Ray discs available today means there will be loads of them still hanging around tomorrow. I don't think you can say that about any other format besides mpg2 which, because it's used on millions of DVDs, will likely be playable decades from now. But then again that's just my opinion, I could be wrong, and it hinges on my being able to encode Blu-Ray spec video as easily as non-spec files, so why not?