So, Avisynth: probably the only reason I am interested in encoding at all. Honestly, I would have no reason to encode otherwise, since no other program can even get close to the level of amazing that Avisynth reaches. Then again, if you fail at Avisynth it can get pretty bad (see above picture for details).
Let me just start by saying this article will not really contain any direct advice about Avisynth usage; it will be more focused on what I do with Avisynth when faced with certain scenarios and issues.
Alright, so first: I think there are really 2 ways to learn. I used both, and they are built around different mindsets, so it's really impossible for me to talk about one as if it were the other. I find that most people are just plain afraid of Avisynth, and most people who know Avisynth tend to be dicks and look down on new people instead of trying to help them, but that is another rant.
BTW, one thing to keep in mind is a concept I like to call "encoder eyes". For whatever reason, people who encode (usually extensively) tend (but not always) to notice changes in video better. This is an extremely important factor, because it really will determine whether, and how fast, you are able to get better. If you have a standard monitor that is not a CRT then you should be OK, but better monitors really help and can make a world of difference. I am going to assume that everyone has an equally good monitor for this lecture. Also, please realize that if you do not notice a difference at first, you eventually will; it just takes time. But more on that later.
So I will start off with the easier one first, or at least the more user/confidence-friendly one. Get a source that preferably has some flaws that are moderate or at least noticeable. Now that you have your flawed source, I suggest this course of action: use a program such as SmartRipper and rip the VOB files into VOB episodes (basically, each episode is contained in its own VOB). You usually have to do this by chapters, so it can be useful to check whether you accidentally put part of the second episode into the first, or vice versa. Once you have done that, go to a program like meGUI and index your VOB episode. After it indexes, it should automatically open up a preview window and an options window. What you are going to want to do first is manually crop off the black space (if it exists); please note you can only crop in multiples of 2. Then go to the second tab, click the Analyse button, let that finish, and hit Save. Now, in order to proceed, you are going to need to download AvsPmod. It will let you preview your script and work on it much more easily than meGUI, plus you can take screenshots with it, which will be very important in a little while.
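The script meGUI spits out at this stage usually boils down to something like the sketch below. The .d2v filename and the crop values here are just placeholders for whatever your own indexing step produced; substitute your own.

```avisynth
# Load the index file created when meGUI indexed the VOB
# (this uses the DGDecode plugin; "episode01.d2v" is a hypothetical name)
MPEG2Source("episode01.d2v")

# Crop off the black borders -- values must be multiples of 2.
# Arguments are left, top, -right, -bottom; these numbers are just an example.
Crop(8, 0, -8, 0)
```

Open this .avs file in AvsPmod and you can scrub through it frame by frame to check that the crop didn't eat into the picture.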
Please, PLEASE only use actual DVDs or ISOs if you plan to encode!!! The ONLY time I think raws should be encoded from is if they are BD raws from Yousei-raws, which are basically 100% unfiltered, transparent raws from the BD; they tend to be huge as a consequence. But let me talk about DVDs for now.
Now really the only thing left to do is to start practicing with filters, but I think if you REALLY want to learn Avisynth, the most important thing to remember is
The only way to really learn Avisynth is to do it alone and then ask other people for help, or at least ask for feedback!!! It's almost impossible to learn how to do Avisynth well if you don't do it all alone!!!
So now the process should be as follows:
1: Research if this is your first time; repeat step 1 until you feel like you are capable of understanding the basics of video
5: Repeat steps 2-5 until step 1 is required
Now let's start with step 1, research. When I say research, I really mean it.
You need to gain an understanding of video: things like framerate, fields, telecining, dot crawl, rainbowing, aliasing, combing, interlacing, noise, grain, haloing, chroma, luma, and artifacts. These are some basic things that you need to understand, and I will go over them briefly to show you visually (where possible) what each one is.
Framerate. Framerate is really closely connected to telecining and interlacing, so I will talk about all 3. Let's call all frames that contain unique movement "real" frames. Framerate is how many frames make up a second of video. A frame is basically just a picture, and when you put frames in a stream and run them consecutively you get video. In a perfect world, video is progressive; most progressive sources are 23.976 fps, which means every frame shows movement (or at least every real frame shows INTENDED movement). More often you find that DVDs are telecined, which is basically the process of adding 2 extra combed, non-moving frames after every 3 progressive frames, which brings the video to 29.97 fps (except in a special circumstance which I will talk about in a bit). These frames tend to look like this
, this is pretty light telecine (interlacing) and can be fixed relatively easily. What you normally do on a telecined source is IVTC it, which stands for inverse telecine: basically, you fix the messed-up frames and then remove the leftovers. The removal of the telecined frames is called decimation, and it brings the source back to 23.976 fps.
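In Avisynth terms, the usual way of doing an IVTC is the TIVTC plugin's field matcher plus decimator. This is a minimal sketch; the source line is a placeholder for your own indexed file, and TIVTC must be loaded:

```avisynth
# hypothetical source; substitute your own .d2v
MPEG2Source("episode01.d2v")   # 29.97 fps telecined source

TFM()        # field matching: rebuilds the original progressive frames
TDecimate()  # decimation: drops 1 frame in 5, back to 23.976 fps
```

TFM and TDecimate have a pile of tuning options, but the defaults are a sane starting point for a clean 3:2 pulldown.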
Now, the difference between telecine and interlacing can start to get a bit shady when you start to deal with hybrid sources. It might just be a coincidence, but hybrid sources tend to be the worst kind you can find, filled with really odd, pervasive problems such as dot crawl, rainbowing, and other nasty artifacts; I will talk more about that later. The reason hybrid sources are weird is that they are not really 23.976 fps if you count all of their "real" frames. They are actually more than 23.976, but usually less than 29.97.
Now let me explain a couple of things, the first being interlacing. A true interlaced source (which is extremely rare) contains 59.94 fps, with a "real" frame rate of 29.97. The true interlaced source contains 1 field per frame; fields are basically partial images stored in frames, and a normal telecined video frame contains 2 fields, which are merged together to create 1 image. In this interlaced source you only need half of the frames, as those are the "real" ones, so you bring that 59.94 fps down to 29.97 fps. Now, the reason the process of removing the 2 telecined frames is called decimating (deci as in ten) and not something else (since you are removing 20%) is, if I recall correctly, that one of the older processes for removing telecined frames involved doubling the frame rate by splitting each frame into 2 frames, each containing one field. This process is called bobbing, or bob deinterlacing (remember, normally a frame contains 2 fields). When you then brought the frame rate back down to 29.97 fps and removed the duplicate frames, those duplicates were about 20% of 30, but only 10% of the 60 it was at before. (Not really important to know, but still, and I could be wrong.)
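If you ever do have to handle a truly interlaced source, bob deinterlacing looks something like this in Avisynth. This is a sketch assuming the Yadif plugin is loaded; the source line is hypothetical:

```avisynth
# hypothetical interlaced source; substitute your own
MPEG2Source("interlaced.d2v")   # 29.97 fps, 2 fields per frame

# mode=1 bobs: each field becomes its own full frame,
# doubling the frame rate to 59.94 fps
Yadif(mode=1)
```

From there you would drop the duplicate frames to get back to the "real" frame rate, which is exactly the decimation story above.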
Now let me talk briefly (as if) about combing, also known as partially interlaced frames. These can typically be seen after you deinterlace or IVTC (inverse telecine, just in case you forgot what it meant), and they look something like this
, it's easiest to see in the darker areas. It looks almost as if a comb had been run through the video, which is why the name "combing" is often used to describe it. There are multiple ways to fix this, all with different drawbacks and advantages, but I will cover actual solutions to things like this in my filtering guide later.
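Just as a quick taste before that guide: one common Avisynth tool for cleaning up residual combing left over after an IVTC is the vinverse plugin. A minimal sketch, with a hypothetical source line:

```avisynth
# hypothetical source; substitute your own
MPEG2Source("episode01.d2v")
TFM()
TDecimate()

# vinverse smooths out residual combing; it can soften the picture,
# so only reach for it when combed frames actually remain
vinverse()
```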
Now let me talk about aliasing. Most people have seen this before somewhere, but might not have known the name for it.
I figured it would be easier to show you what it looks like than to explain it (that is what a line with aliasing looks like when zoomed in). Aliasing, also (less often) called jaggies, is the stairstep pattern that appears where there should be smooth straight lines or curves. For example, when a straight, un-aliased line steps across one pixel, an overlap occurs halfway through the line, where it crosses the threshold from one pixel to the next.
BTW, I am probably going to start using the terms luma and chroma more often from this point on, so I should take this opportunity to explain quickly what they mean. Luma basically relates to brightness, in terms of black, white, and gray. Chroma relates to color and the like.
Now let's talk about the 2 most evil video artifacts around: dot crawl and rainbowing. Both are crosstalk artifacts, dot crawl in the luma and rainbows in the chroma. Dot crawl only shows up in the luma plane and tends to appear as a checkerboard pattern that typically flickers back and forth every frame; it also seems attracted to things that border the color red. It may not be noticeable when you are just playing a DVD, unless it is really bad; other times you won't see it until you stop on a still frame. Dot crawl looks like this
when it's bad, and like this
when it's not as bad. Regardless, it hurts filesize and it just looks ugly. Rainbowing, on the other hand, is a lot more noticeable
and probably will be easy to see even when watching the video.
Now let me explain haloing. Haloing is sort of what the name suggests: a white ring around lines. It might be hard to notice if you are used to seeing it, but once it is gone it's much easier to tell what it looked like. See here
for an example of dehaloing, or halo removal. Haloing is typically caused by 2 things. The first is sharpening the video; sharpening causes haloing because of something known as "overshooting" in most sharpeners. A sharpener works by amplifying the difference between nearby pixels (or something similar to that process), but sometimes the amplification brightens the nearby pixels so much that they look significantly different from the other pixels around them; that's overshooting. The other reason you might see haloing is that the source has it, for whatever reason. Usually it's either that the company mastering the DVDs sharpened the video, or that they overcompressed it using archaic methods that were never very well suited to anime.
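If you want to experiment with halo removal in Avisynth, a common starting point is the DeHalo_alpha script (which depends on MaskTools; you'll need both installed). A minimal sketch, with a hypothetical source line and illustrative parameter values:

```avisynth
# hypothetical source; substitute your own
MPEG2Source("episode01.d2v")

# rx/ry control how far out from the line the filter looks for the halo;
# larger values catch fatter halos but risk eating legitimate detail
DeHalo_alpha(rx=2.0, ry=2.0)
```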
Now let me explain the difference between grain and noise. Noise IS NOT SUPPOSED TO BE IN THE VIDEO; it is a remnant (aka an artifact) of improper DVD mastering and/or overcompression. Grain, on the other hand, may or may not be intended to be in the video; sometimes it is added for some strange, unworldly reason. Another key difference is that grain tends to help hold details together while noise actually makes them worse, which is why removing noise is usually easier than removing grain.
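When you do decide noise has to go, a gentle first attempt in Avisynth might be the FFT3DFilter plugin. This is a sketch with a hypothetical source line and an illustrative strength value, not a recommendation for every source:

```avisynth
# hypothetical source; substitute your own
MPEG2Source("episode01.d2v")

# sigma is the denoising strength; keep it low at first so you
# don't smear grain and fine detail along with the noise
FFT3DFilter(sigma=1.5)
```

Compare the filtered and unfiltered frames side by side in AvsPmod before committing to any strength.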
Here are 2 visual examples of the difference between noise and grain
Grain (click to really see)