LTX-2 19B Video-to-Video LoRA: High-Fidelity Motion & Style Transfer | RunComfy

ltx/ltx-2-19b/video-to-video/lora

Transform existing videos into cinematic, high-fidelity sequences with precise motion control, synchronized audio, and adaptable styles powered by a 19B LoRA video-to-video model.

Parameters

  • video_url: The URL of the source video to transform.
  • First-frame image (optional): URL of an image to use as the first frame of the video.
  • LoRA weights: URL, HuggingFace repo ID (owner/repo), or local path to LoRA weights; each entry carries a scale. Up to 10 LoRAs may be applied.
  • match_video_length: When enabled, the frame count is derived from the video duration and FPS; when disabled, num_frames is used.
  • num_frames: The number of frames to generate.
  • video_size: The size of the generated video.
  • generate_audio: Whether to generate synchronized audio for the video.
  • use_multiscale: If true, the model generates a smaller-scale version first, then refines details at the target scale.
  • match_input_fps: When true, the output FPS matches the input video's FPS instead of the default target FPS.
  • fps: The frames per second of the generated video.
  • guidance_scale: The guidance scale to use.
  • num_inference_steps: The number of inference steps to use.
  • camera_lora: The camera LoRA used to control camera movement.
  • camera_lora_scale: The strength of the camera LoRA.
  • negative_prompt: Guides generation away from undesired qualities.
  • Prompt expansion: Whether to automatically expand the prompt.
  • Output type and quality: The format and quality of the generated video.
  • Preprocessor: The preprocessor (depth, canny, or pose) applied to the input video.
  • IC-LoRA type and scale: Which IC-LoRA to load and its strength.
  • video_strength: Video conditioning strength; lower values give the model more freedom to change the video content.
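As an illustrative sketch, the parameters above might be assembled into a request payload like this. Field names follow the parameter names used on this page; the exact schema, the LoRA entry keys, and the default values shown are assumptions, not the official API contract:

```python
# Hypothetical request payload for the LTX-2 19B video-to-video endpoint.
# Field names mirror the parameters documented above; values are illustrative.
payload = {
    "video_url": "https://example.com/input.mp4",  # source clip to transform
    "prompt": "cinematic night scene, rain-slicked streets, preserve layout",
    "negative_prompt": "flicker, blur, artifacts",
    "loras": [  # up to 10 entries; key names here are assumed
        {"path": "owner/repo", "scale": 0.8},
    ],
    "match_video_length": True,  # derive frame count from source duration
    "match_input_fps": True,     # keep the source cadence
    "use_multiscale": True,      # coarse pass first, then refine at target size
    "video_size": "1920x1080",
    "guidance_scale": 5.0,
    "num_inference_steps": 30,
    "video_strength": 0.7,       # lower = more freedom to change content
    "generate_audio": True,
}

# The page caps the LoRA list at 10 entries.
assert len(payload["loras"]) <= 10
```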
The rate is $0.05 per second.
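At that rate, cost scales linearly with output length. A quick estimate, assuming billing is per second of generated video:

```python
RATE_PER_SECOND = 0.05  # USD per second, per the pricing above

def estimate_cost(duration_seconds: float) -> float:
    """Estimated charge for a clip of the given duration, rounded to cents."""
    return round(duration_seconds * RATE_PER_SECOND, 2)

# A 12-second clip would cost about $0.60; a one-minute clip about $3.00.
```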

Introduction To LTX-2 19B Video-to-Video LoRA

Developed by Lightricks, LTX-2 19B Video-to-Video LoRA is a 19-billion-parameter foundation model built for precise video transformation, synchronized audio, and high-fidelity motion control. Designed for creators, studios, and developers seeking structural accuracy and cinematic style, it replaces multi-step, error-prone workflows with a single, efficient video-to-video pipeline powered by LoRA and IC-LoRA controls.

Ideal for: Video Style Transfer | Motion Retargeting | Cinematic Scene Reinterpretation

Examples Of LTX-2 19B Video-to-Video LoRA

[Six example videos are embedded on the model page.]

What makes LTX-2 19B Video-to-Video LoRA stand out

LTX-2 19B Video-to-Video LoRA is a structure-preserving transformer that restyles and enhances footage while keeping geometry, depth, and temporal continuity stable. Video-to-video generation applies a targeted transformation to an existing clip, enabling controlled changes without rebuilding the core scene layout. Leveraging LoRA conditioning and multi-scale refinement, LTX-2 19B Video-to-Video LoRA delivers cinematic fidelity, coherent motion, and configurable camera behavior across frames. Optional audio can be generated in sync.


Key capabilities:

  • Structure-preserving edits: LTX-2 19B Video-to-Video LoRA maintains pose, layout, and material response with minimal flicker.
  • Camera-aware motion: camera_lora and camera_lora_scale enable dolly and jib control with predictable results.
  • Multi-scale refinement: LTX-2 19B Video-to-Video LoRA performs coarse-to-fine generation for crisp detail at target size.
  • Conditioning fidelity: depth, canny, or pose preprocessors pair with IC-LoRA for accurate edge and silhouette adherence.
  • Temporal coherence: LTX-2 19B Video-to-Video LoRA aligns output via match_input_fps and match_video_length; video_strength tunes adherence to source.
  • Production-friendly delivery: configurable output type, quality, and codec settings that render reproducibly across runs.

Prompting guide for LTX-2 19B Video-to-Video LoRA

Start with LTX-2 19B Video-to-Video LoRA by providing video_url and a precise prompt that states what to change and what to preserve. Then:

  • Timing: set match_video_length to true to maintain the source timing, or specify num_frames and fps for custom pacing; match_input_fps aligns the output cadence with the input.
  • Detail: choose video_size and enable use_multiscale for finer detail at the target resolution.
  • Camera: when camera motion matters, set camera_lora and camera_lora_scale for predictable movement.
  • Structure: pair a depth, canny, or pose preprocessor with IC-LoRA so edits adhere to those signals.
  • Fidelity: control adherence to the source via video_strength, then tune guidance_scale and num_inference_steps to balance fidelity and flexibility.
  • Finishing: use negative_prompt to exclude defects and generate_audio for synchronized sound.
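The workflow above can be sketched as a single HTTP call. The endpoint path, auth header, and response handling below are assumptions for illustration; consult the RunComfy API docs for the actual contract:

```python
import json
import urllib.request

# Assumed endpoint path; take the real one from the RunComfy API docs.
API_URL = "https://api.runcomfy.net/v1/ltx/ltx-2-19b/video-to-video/lora"

def build_body(video_url: str, prompt: str, **params) -> dict:
    """Merge the required fields with any optional parameters from the list above."""
    return {"video_url": video_url, "prompt": prompt, **params}

def transform_video(api_key: str, video_url: str, prompt: str, **params) -> dict:
    """Submit a video-to-video job and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_body(video_url, prompt, **params)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=600) as resp:
        return json.loads(resp.read())
```

For example, `transform_video(key, clip_url, "neo-noir restyle, preserve layout", match_video_length=True, video_strength=0.6)` would follow the guidance above: source timing kept, moderate freedom to restyle.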


More Models to Try


  • LTX-2 19B Text-to-Video LoRA
  • LTX-2 19B Image-to-Video LoRA

Related Models

seedance-v1.5-pro/text-to-video

Create camera-controlled, audio-synced clips with smooth multilingual scene flow for design pros.

wan-2-2/lora/text-to-image

Generate cinematic visuals with MoE precision and creative control.

hailuo-2-3/pro/image-to-video

Turn static images into fluid, realistic 1080p motion with smart style control.

wan-2-2/vace-fun

Prompt-based animation with subject fidelity and smooth motion.

kling-2-1/standard/image-to-video

Animate a single image into a smooth video with Kling 2.1 Standard.

dreamina-3-0/text-to-video

Generate lifelike motion visuals fast with Dreamina 3.0 for designers.

Frequently Asked Questions

What is LTX-2 19B Video-to-Video LoRA and what does it do?

LTX-2 19B Video-to-Video LoRA is a 19-billion-parameter AI foundation model from Lightricks designed for generating synchronized audio and video. It supports video-to-video creation, allowing users to produce lifelike 4K clips with creative control and structural guidance.

What are the main features of LTX-2 19B Video-to-Video LoRA?

LTX-2 19B Video-to-Video LoRA offers native 4K video generation up to 50 fps, synchronized audio output, and robust temporal stability. Its LoRA and IC-LoRA modules enhance video-to-video performance by introducing camera motion, style, and structural control for pose, depth, and edge input signals.

Who should use LTX-2 19B Video-to-Video LoRA?

LTX-2 19B Video-to-Video LoRA is ideal for filmmakers, animators, content creators, game developers, and researchers who want to perform high-quality video-to-video style transfer, motion retargeting, and synchronized audiovisual generation in controlled creative spaces.

What inputs and outputs are supported by LTX-2 19B Video-to-Video LoRA?

LTX-2 19B Video-to-Video LoRA accepts text, images, video clips, and structural maps such as depth, pose, or edge data as input. It outputs synchronized 4K video and audio, making it ideal for end-to-end video-to-video pipelines or creative scene recreation tasks.

Are there any limitations to using LTX-2 19B Video-to-Video LoRA?

While LTX-2 19B Video-to-Video LoRA is powerful, users should note that its quantized or distilled versions might reduce detail or motion fidelity. Overuse of multiple LoRAs at high scales may also lead to inconsistent video-to-video results or visual artifacts.

Does LTX-2 19B Video-to-Video LoRA generate audio by default?

Yes. LTX-2 19B Video-to-Video LoRA generates synchronized audio alongside video, automatically aligning ambient sound, dialogue, and motion cues; this improves the realism of video-to-video or text-to-video outputs compared with workflows that add audio in a separate step.
