Lightricks advances AI video capabilities with new 13B-parameter LTXV model

AI-powered visual content tech pioneer Lightricks has responded to the latest advances of Big Tech rivals such as OpenAI, Meta, and Google by releasing an upgraded open-source model that sets new standards for rapid, high-fidelity video generation.
LTXV-13B is the successor to Lightricks’ original and extremely efficient video generator, LTX Video, a model compact enough to generate high-quality videos on consumer-grade hardware at record-breaking speed.
Announcing the new version this week, the Lightricks leadership team sees LTXV-13B as a significant upgrade. With 13 billion parameters, it delivers substantially more sophistication than the previous 2-billion-parameter model. It also boasts some cutting-edge features not present in LTXV 0.9.6, including a new multiscale rendering capability that reduces latency while enhancing its users’ ability to refine the details of their outputs.
“Our users can now create content with more consistency, better quality, and tighter control,” Lightricks CEO Zeev Farbman said in a statement. “This new version of LTX Video runs on consumer hardware, while staying true to what makes all our products different – speed, creativity, and usability.”
A commitment to open source
With the release of LTXV-13B, Lightricks is doubling down on its belief that an open-source approach is the best way to accelerate innovation within the AI community. When the company first launched the original LTXV model, the developers wanted to make it as accessible as possible, encouraging AI enthusiasts and academics to experiment with it freely.
The leadership at Lightricks sees this strategy as necessary for advancing an industry where progress often comes from smaller startups, individual developers, and even hobbyists who tinker with AI models to see what they’re capable of, contributing to their code bases and building innovative integrations. But the ecosystem can only benefit from this work if it has open access to the most advanced models.
While the likes of OpenAI’s Sora and Adobe’s Firefly are considered to be at the cutting edge of video generation, they’re locked behind paywalled APIs. This creates a significant barrier to entry for newcomers, while Big Tech’s proprietary licensing makes it impossible to build on the models themselves.
“Today, the best models on the market are closed,” Farbman told Calcalist in November. “This creates problems beyond cost. Gaming companies, for example, want to produce simple graphics and then use these models to experiment with visual styles, but closed models don’t allow for that.”
Like its predecessor, LTXV-13B has been open-sourced, and it’s available for anyone to experiment with via Hugging Face and GitHub, where it’s free to license for enterprises with less than $10 million in annual revenue. Users have full freedom to customize it however they like, fine-tune it, build on top of it, add new features, enhance its training data, and more. As always, Lightricks is eager to see what the community can do, knowing that it stands to benefit from whatever improvements they make.
“We embarked on the adventure of distributing an open model so that academia and industry could use it, add capabilities, and develop new features. This will make us more competitive,” Farbman said.
Community contributions and ethical outputs
According to the release announcement, LTXV-13B has benefited from major contributions by the open-source community, helping to enhance aspects such as creative adaptability, motion consistency, and scene coherence, improving the overall quality of its outputs. The company highlights the new upsampling controls for video editing that can help to refine frame granularity and reverse noise. Another key advance is VACE model inference, which simplifies tasks such as video-to-video, reference-to-video, and masked video-to-video editing.
In addition, the open-source community is credited with increasing LTXV-13B’s scalability, helping to optimize inference tasks by using Q8 kernels with diffusers, so it can still run efficiently on consumer hardware, despite being much bigger than the original LTX Video model.
Users can also be reassured of the “ethical” nature of LTXV-13B, which was trained on licensed data from the stock photo and video companies Getty Images and Shutterstock. This is in stark contrast to the questionable practices of tech giants like OpenAI, which have attracted significant controversy by using content scraped from the internet’s top publishers for training data, leading to concerns over copyright infringement.
Moreover, the high quality of Getty’s and Shutterstock’s visual assets has led to a dramatic improvement in the overall caliber of LTXV-13B’s outputs, Lightricks says.
The model’s performance is further enhanced by the new multiscale rendering feature. The approach gives creators more control over the finer details in the videos they generate, and it’s much faster too, with rendering times up to 30 times faster than those of similar-sized models.
The AI video model competition intensifies
The launch of LTXV-13B highlights the rapid pace of development in AI video generation, coming less than three weeks after Lightricks announced a significant upgrade to its original model with LTXV 0.9.6.
That release received a warm reception from users, who noted its huge speed increase, superior prompt adherence, and greater coherence, among other benefits. The default resolution of its outputs was notably increased to 1216×704 pixels at 30 frames per second, resulting in more fluid videos. Lightricks also offered a “distilled” version of that model, boosting its ability to generate quality outputs on low-powered hardware.
No doubt, the AI community will be eager to see how the latest improvements in LTXV-13B stack up against the rest of the industry. The AI video arms race has intensified dramatically in recent months, with dozens of competitors making significant strides of their own. In March, Runway claimed unprecedented gains in visual fidelity with the launch of its Gen-4 model, touting its ability to generate characters, locations, and objects across scenes and from different perspectives with much higher consistency.
Powerhouses like OpenAI, Google, and Adobe have received plenty of attention with their most recent proprietary video models, such as Sora, Veo 2, and Firefly, while Alibaba Cloud threw its hat into the open-source ring in February with the release of its Wan 2.1 series, available in 14-billion- and 1.3-billion-parameter versions.
Lightricks also faces competition on the ethical AI video generation front from a startup called Moonvalley, which launched its first video-generation model, Marey, back in March, trained exclusively on “clean” data that was either fully licensed or owned by the company.
One advantage LTXV-13B has is its integration with Lightricks’ LTX Studio platform. While users can download the model themselves, it’s much easier to access via the web app, which professional and hobbyist creators use to produce impressive videos without the need for expensive production shoots.
LTX Studio gives users additional control over the editing process, with features such as camera motion control, keyframe editing, and multi-shot sequencing, making it easier to refine the model’s outputs. The platform also provides access to third-party models such as Veo 2 and Flux, allowing for further experimentation.