Releases: pifroggi/vs_align

v3.2.0

14 Jul 22:08
9c5e4e6

Changes

Temporal Alignment

  • Fixed wrong matching values when precision=2 (Butteraugli method) was used in GPU mode. This was caused by the vship plugin changing its default values from version 3.0.0 onward, which no longer match the CPU version.
  • Fixed wrong fps props for the output clip. The output clip now always carries the fps props of the reference clip, or the ones set through ref_num and ref_den if used.
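
For illustration, a minimal sketch of the fps behavior described above. It assumes the function and parameter names from the readme (vs_align.temporal, ref_num, ref_den) and uses BestSource only as an example source filter.

```python
import vapoursynth as vs
import vs_align

core = vs.core

clip = core.bs.VideoSource("episode_lowquality.mkv")  # clip to be aligned
ref  = core.bs.VideoSource("episode_reference.mkv")   # reference clip

# the output clip now always carries the fps props of ref
aligned = vs_align.temporal(clip, ref)

# ...unless ref_num/ref_den are set, in which case those are used for the fps props
aligned = vs_align.temporal(clip, ref, ref_num=24000, ref_den=1001)
```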

v3.1.0

20 Mar 21:48

Just a small fix and a small feature extension. By the way, the Spatial Alignment can now also be used in the image processing program chaiNNer via the "Align Image to Reference" node, which requires chaiNNer v0.25.0 or newer.

Changes

Spatial Alignment

  • Made sure device="cpu" runs fully on CPU. It was mistakenly still using CUDA for some operations and would fail if no CUDA device was present.

Temporal Alignment

  • Added support for tr=0, which compares only the current frame of clip against the current frame of ref (no temporal radius). This can sometimes be useful, for example in combination with the tresh parameter.
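
A minimal usage sketch for the new tr=0 mode; the function name and parameter spellings (including tresh) are taken from this changelog and the readme, and the source filter is only an example.

```python
import vapoursynth as vs
import vs_align

core = vs.core

clip = core.bs.VideoSource("capture.mkv")
ref  = core.bs.VideoSource("master.mkv")

# tr=0 compares only the current frame of clip against the current frame of ref,
# without searching within a temporal radius
aligned = vs_align.temporal(clip, ref, tr=0)

# optionally combined with the tresh parameter (value here is just a placeholder)
aligned = vs_align.temporal(clip, ref, tr=0, tresh=40)
```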

v3.0.0

06 Mar 21:09
e04f4c7

I thought I'd try releases to have a place for posting changelogs.
Since I've never done a release before, here is first what changed in 3.0.0, followed below by everything that changed since the initial 1.0.0 version.

3.0.0 Changes

Spatial Alignment

  • Added new wide_search parameter and removed the now obsolete iterations parameter.
    This enables a larger search radius at the cost of some speed. When set to True, completely different crops like 4:3 and 16:9, shearing, and rotations up to 45° can be aligned. Recommended if the misalignment is larger than about 20 pixels (see the sketch after this list).
    It works by first doing a fast homography alignment with the help of XFeat to move the corners of the image roughly into the right position before proceeding with the main alignment step. This is both better and faster than the old iterations parameter. Adds opencv-python as a new requirement.
  • Improved border handling. Borders should no longer erratically move in some cases with low quality input clips.
  • General alignment quality improvements.
  • Some performance increases.
  • Fixed an issue with the mask sometimes causing the output frame to be green due to division by 0.
  • Fixes for very low resolution inputs.
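
A minimal sketch of the new wide_search parameter mentioned above, assuming the call style from the readme (vs_align.spatial) and an example source filter:

```python
import vapoursynth as vs
import vs_align

core = vs.core

clip = core.bs.VideoSource("cropped_4_3.mkv")      # e.g. a 4:3 crop
ref  = core.bs.VideoSource("open_matte_16_9.mkv")  # e.g. the matching 16:9 version

# wide_search=True first runs a rough homography alignment (XFeat + OpenCV) to move
# the corners roughly into place, then proceeds with the main alignment step;
# recommended for misalignments larger than about 20 pixels
aligned = vs_align.spatial(clip, ref, precision=3, wide_search=True)
```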

Temporal Alignment

  • Added support for clip's framerate being lower than ref's. Keep in mind that in this case a matching frame may not exist.
  • Made sure batch_size cannot be lower than 1.

2.2.0 Changes

Temporal Alignment

  • Added new batch_size parameter for precision=3 (TOPIQ method) to control VRAM usage. A value lower than tr reduces VRAM usage, but is slower. None means the maximum possible batch size.
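
A minimal sketch of how batch_size interacts with tr, assuming the call style from the readme:

```python
import vapoursynth as vs
import vs_align

core = vs.core

clip = core.bs.VideoSource("clip.mkv")
ref  = core.bs.VideoSource("ref.mkv")

# batch_size=None (the default) uses the maximum possible batch size
aligned = vs_align.temporal(clip, ref, precision=3, tr=20)

# a batch_size lower than tr reduces VRAM usage, but is slower
aligned = vs_align.temporal(clip, ref, precision=3, tr=20, batch_size=5)
```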

2.1.0 Changes

Temporal Alignment

  • Added GPU support for precision=2 (Butteraugli method) with the help of the new vship plugin by Line-fr.

2.0.0 Changes

Spatial Alignment

  • Overhauled how the alignment works, which makes it easier to get good results.
    Previously it could be hard to finetune the parameters: sometimes higher precision was better, sometimes more iterations were better, sometimes multiple passes with different precisions were better, sometimes a combination of them all. The new version should now consistently improve with a higher precision level and no longer requires much time to find the right combination.
  • precision levels now produce different results. Levels go from 1-4 instead of the previous 1-5. Alignment quality is better than before for a given precision level, but also slightly slower.
  • iterations parameter still exists, but is now only recommended for large misalignments.
  • Added new mask parameter.
    It takes a black & white clip that can be used to exclude areas marked in white from warping, like a watermark or text that is only on one clip. Masked areas are instead warped like their surroundings. Can be a static single frame or a moving mask, in any format and dimensions (see the sketch after this list).
  • Added new lq_input parameter and removed the now obsolete blur_strength parameter.
    This enables better handling of low-quality input clips. When set to True, general shapes are prioritized over high-frequency details like noise, grain, or compression artifacts by averaging the warping across a small area. It also fixes an issue sometimes noticeable in 2D animation, where lines can get slightly thicker/thinner due to warping if that is the case in the reference. This works better than the old blur_strength parameter and does not require finetuning.
  • Resizing and warping are now done in the same operation, which slightly reduces interpolation blur.
  • Added fp16 support to increase performance and reduce VRAM usage. Automatically used if supported by the hardware.
  • Changed RIFE model weights back to 4.14, which works better for this task.
  • Results are now temporally much more stable.
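
A minimal sketch combining the new mask and lq_input parameters, assuming the call style from the readme; imwri is only one example of how a static mask frame could be loaded:

```python
import vapoursynth as vs
import vs_align

core = vs.core

clip = core.bs.VideoSource("clip_with_watermark.mkv")
ref  = core.bs.VideoSource("reference.mkv")

# black & white mask: areas marked in white (e.g. a station logo) are excluded from
# warping and instead warped like their surroundings; a single static frame is enough
mask = core.imwri.Read("logo_mask.png")

# lq_input=True prioritizes general shapes over noise, grain, and compression artifacts
aligned = vs_align.spatial(clip, ref, mask=mask, lq_input=True, precision=4)
```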

Temporal Alignment

  • Increased precision=3 (TOPIQ method) performance by around 40x with caching and batching, at the trade-off of higher VRAM usage. This adds timm as a new requirement and removes pyiqa.
  • Inputs with different framerates now cause much less slowdown and no longer require the tivtc plugin.
  • Inputs with different framerates can now be any format and no longer require YUV.
  • fp16 is now automatically used if supported by the hardware and no longer has a toggle.
  • Renamed parameter clip2 to out to better indicate what it is for (see the sketch after this list).
  • General fixes for unexpected combinations of parameters.
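
A minimal before/after sketch for the clip2 → out rename. It assumes out fills the same role clip2 did, namely an additional clip the output frames are taken from; check the readme for the exact semantics.

```python
import vapoursynth as vs
import vs_align

core = vs.core

clip     = core.bs.VideoSource("proxy_for_matching.mkv")  # clip used for matching
ref      = core.bs.VideoSource("reference.mkv")
original = core.bs.VideoSource("full_quality.mkv")        # clip the output frames come from

# before 2.0.0: vs_align.temporal(clip, ref, clip2=original)
# from 2.0.0 on, the parameter is called out
aligned = vs_align.temporal(clip, ref, out=original)
```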

1.0.0

  • Initial version.