Lightricks shakes up AI video creation with a powerful open-source model

Lightricks Ltd. is throwing down the gauntlet to artificial intelligence powerhouses OpenAI, Google LLC and others with the release of its latest open-source video generation model, LTX Video-13B.

The new release is billed as a significant upgrade to Lightricks' original LTXV model, increasing its parameter count and improving its capabilities to "dramatically" boost the quality of its video outputs while maintaining its impressive speed. LTXV-13B is available as part of Lightricks' flagship LTX Studio tool, and the company says it can generate videos with "breathtaking detail, coherence and control," even when running on consumer hardware.

The original LTXV model debuted in November and attracted a lot of attention as one of the most advanced models of its kind. Thanks to its lightweight architecture, the 2-billion-parameter model runs efficiently on laptops and PCs powered by a single consumer graphics processing unit, quickly generating five-second clips with smooth, consistent motion.

However, it was LTXV's extremely accessible nature that really made it stand out from the crowd. In a world where the most advanced models are usually "black boxes" locked behind pay-to-play application programming interfaces, LTXV was a breath of fresh air. The model was open-sourced, with its codebase and weights made freely available to the AI community, giving researchers and enthusiasts a rare opportunity to understand how it works and to make it even better.

Lightricks made LTXV open source because it wants to foster further innovation in the AI industry, and it believes the only way to do that is to make the latest advances available for everyone to build on. It was a calculated move by the startup, which hoped that getting the model into the hands of as many developers as possible would steer more of them toward its paid platforms.

With LTXV-13B, the company is taking the same approach, making the model available to download from Hugging Face and GitHub, where it can be licensed free of charge by any organization with less than $10 million in annual revenue. That means users are free to tinker with it however they like, fine-tuning it, adding new capabilities and integrating it into third-party applications.
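As a rough illustration of what that looks like in practice, the sketch below loads an LTX-Video checkpoint through the Hugging Face diffusers library and generates a short clip from a text prompt. It is a minimal example under stated assumptions, not Lightricks' official workflow: the LTXPipeline shown here was built for the original LTX-Video release, and the exact repository id for the 13B weights should be verified on Hugging Face.

import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

# Load published weights from Hugging Face. "Lightricks/LTX-Video" is the original
# model's repo; the 13B checkpoint ships in a separate repo whose exact id should
# be checked on the Hugging Face hub.
pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Generate roughly five seconds of video from a text prompt.
frames = pipe(
    prompt="A lighthouse on a rocky coast at dusk, waves crashing below",
    width=704,
    height=480,
    num_frames=121,          # ~5 seconds at 24 frames per second
    num_inference_steps=50,
).frames[0]

export_to_video(frames, "lighthouse.mp4", fps=24)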

Fine-grained controls

Users will also get their hands on some compelling new features designed to improve video quality without affecting the model's efficiency.

One of the biggest updates is a new multiscale rendering capability, which lets creators add detail and color in a step-by-step process. Think of an artist who starts with a rough pencil sketch before pulling out a brush to add more intricate details and colors. Creators can take the same "layered" approach, gradually refining individual elements in their videos, similar to the scene-design techniques used by professional filmmakers.

The advantage of doing this is twofold. For one thing, it leads to better videos with more refined visual details, Lightricks said. It's also much faster, enabling the model to render high-resolution videos up to 30 times faster than competing models with a similar number of parameters.
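For intuition, the sketch below illustrates the general coarse-to-fine idea with hypothetical placeholder functions. It is not Lightricks' actual pipeline or API, just a schematic of a cheap low-resolution first pass followed by a detail-adding refinement pass.

import numpy as np

def generate_video(prompt: str, width: int, height: int, num_frames: int) -> np.ndarray:
    # Hypothetical placeholder for a low-resolution generation pass.
    # A real model would return frames shaped (num_frames, height, width, 3).
    return np.zeros((num_frames, height, width, 3), dtype=np.uint8)

def upscale(frames: np.ndarray, scale: int) -> np.ndarray:
    # Nearest-neighbour upscaling as a stand-in for a learned upsampler.
    return frames.repeat(scale, axis=1).repeat(scale, axis=2)

def refine_video(prompt: str, coarse_frames: np.ndarray) -> np.ndarray:
    # Hypothetical placeholder for a second pass that adds fine detail while
    # staying anchored to the coarse layout; here it returns the input unchanged.
    return coarse_frames

# Pass 1: lay out composition and motion cheaply at low resolution.
coarse = generate_video("a lighthouse at dusk", width=256, height=160, num_frames=121)

# Pass 2: upscale the draft, then refine only the details at full resolution.
final = refine_video("a lighthouse at dusk", upscale(coarse, scale=4))
print(final.shape)  # (121, 640, 1024, 3)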

Lightricks also revealed improvements to existing features for camera motion control, keyframe editing, multi-shot sequencing and scene-level motion adjustment. In addition, the release integrates several contributions from the open-source community that improve the model's scene fidelity and motion consistency while preserving its efficiency.

For example, Lightricks said it worked with researchers to integrate more advanced reference-to-video generation and video-to-video editing tools with LTXV-13B. And there are new upsampling controls that help eliminate the effects of background noise.

The open-source community also helped the company optimize LTXV-13B to ensure it remains efficient on consumer GPUs, even though it's considerably bulkier than the original model. This is made possible by efficient Q8 kernels, which scale the model's performance on devices with minimal compute resources. That way, developers can run the model locally on just about any machine.
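As a hedged example of squeezing a large diffusion pipeline onto a consumer GPU, the snippet below uses generic diffusers memory-saving options (bfloat16 weights, CPU offload and tiled VAE decoding) rather than Lightricks' Q8 kernels, which are distributed separately as part of the open-source release.

import torch
from diffusers import LTXPipeline

pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)

# Keep only the submodule currently doing work on the GPU; the rest stays in system RAM.
pipe.enable_model_cpu_offload()

# Decode the generated video in tiles so the VAE never needs the whole clip in VRAM at once.
pipe.vae.enable_tiling()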

LTXV-13B is also an "ethical" model, since it was trained on a curated dataset of visual assets licensed from Getty Images Holdings Inc. and Shutterstock Inc. The high quality of the licensed training data ensures the model's outputs are both visually compelling and safe to use, without the risk of copyright infringement issues.

LTXV-13B is available now through LTX Studio, a premium platform that lets creators sketch out their ideas using text-based prompts and gradually refine them into professional videos. With LTX Studio, creators can access advanced editing tools to change camera angles, refine the appearance of individual characters, edit buildings and objects in the background, adjust the environment and much more.

Co-founder and Chief Executive Zev Farbman said the release is a "crucial moment" for anyone interested in AI video generation.

"Our users can now create content with more consistency, better quality and tighter control," he said. "This new version of LTX Video runs on consumer hardware while staying true to what makes all of our products different: speed, creativity and ease of use."

Image: SiliconANGLE/Dreamina
