[2024.08.13] 🎉 We are launching the Open-Sora Plan v1.2.0 I2V model, which is built on Open-Sora Plan v1.2.0. The current version supports image-to-video generation and transition generation (video generation conditioned on the starting and ending frames).
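One common way to implement this kind of frame conditioning is inpainting-style masking: known frames are kept, the rest are zeroed out, and a binary mask tells the denoiser which frames to preserve. A minimal PyTorch sketch under that assumption (the function name and tensor layout are ours for illustration, not the repo's actual interface):

```python
import torch

def build_i2v_condition(latents, first_frame, last_frame=None):
    # latents: (B, C, T, H, W) noised video latents
    # first_frame / last_frame: (B, C, H, W) encoded key frames
    cond = torch.zeros_like(latents)           # masked video condition
    mask = torch.zeros_like(latents[:, :1])    # (B, 1, T, H, W); 1 = frame given
    cond[:, :, 0] = first_frame                # pin the starting frame
    mask[:, :, 0] = 1.0
    if last_frame is not None:                 # transition generation:
        cond[:, :, -1] = last_frame            # also pin the ending frame
        mask[:, :, -1] = 1.0
    # the denoiser receives [noised latents | condition | mask] on channels
    return torch.cat([latents, cond, mask], dim=1)

z = torch.randn(2, 4, 17, 32, 32)
x = build_i2v_condition(z, torch.randn(2, 4, 32, 32), torch.randn(2, 4, 32, 32))
print(x.shape)   # torch.Size([2, 9, 17, 32, 32])
```

Omitting `last_frame` yields plain image-to-video; supplying both endpoints yields transition generation.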
24 Jul 2024 · Compared to previous video generation models, Open-Sora-Plan v1.2.0 offers the following improvements: Better compressed visual representations. We optimized the structure of CausalVideoVAE, which now delivers enhanced performance and higher inference efficiency.
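The defining trick of a causal video VAE is temporal causality in its 3D convolutions: padding is applied only at the front of the time axis, so the encoding of frame t never depends on later frames, and a single image (T = 1) is handled by the same network. A minimal PyTorch sketch of the idea (class and parameter names are our own, not the repo's):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv3d(nn.Module):
    """3D convolution that is causal along time: the output at frame t
    only sees frames <= t. Shapes and names are illustrative."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.time_pad = kernel_size - 1  # pad only at the temporal front
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size,
                              padding=(0, kernel_size // 2, kernel_size // 2))

    def forward(self, x):                 # x: (B, C, T, H, W)
        # F.pad pads the last dims first: (W_l, W_r, H_l, H_r, T_front, T_back)
        x = F.pad(x, (0, 0, 0, 0, self.time_pad, 0))
        return self.conv(x)

clip = torch.randn(1, 3, 9, 32, 32)       # a 9-frame RGB clip
out = CausalConv3d(3, 8)(clip)
print(out.shape)                           # torch.Size([1, 8, 9, 32, 32])
```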
We open-source the Open-Sora-Plan to facilitate future development of video generation in the community. Code, data, and models are made publicly available. Code: all training and sampling scripts.
In version 1.3.0, Open-Sora-Plan introduced five key features, among them: A more powerful and cost-efficient WFVAE. We decompose video into several sub-bands using wavelet transforms, naturally capturing information across different frequency domains and leading to more efficient and robust VAE learning. Prompt Refiner.
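A one-level Haar transform illustrates the sub-band idea: pairwise averaging and differencing along an axis splits a clip into a low-frequency band plus a high-frequency detail band, and repeating this along T, H, and W yields the full set of sub-bands. A sketch under our own naming, not WFVAE's actual code:

```python
import torch

def haar_split_1d(x, dim):
    """One Haar level along one axis: pairwise average (low band)
    and pairwise difference (high band). Illustrative only."""
    even = x.index_select(dim, torch.arange(0, x.size(dim), 2))
    odd  = x.index_select(dim, torch.arange(1, x.size(dim), 2))
    low  = (even + odd) / 2 ** 0.5     # smooth / low-frequency content
    high = (even - odd) / 2 ** 0.5     # detail / high-frequency content
    return low, high

video = torch.randn(1, 3, 16, 64, 64)            # (B, C, T, H, W)
low_t, high_t = haar_split_1d(video, dim=2)      # temporal sub-bands
low_th, high_th = haar_split_1d(low_t, dim=3)    # then spatial (H), etc.
# Applying the split along T, H, and W gives 8 sub-bands per level; the
# low band can be compressed aggressively while high bands keep detail.
```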
Today, we are thrilled to launch a project called Open-Sora Plan, aiming to reproduce OpenAI's video generation model. To this end, we introduce our framework, which comprises the following components: a Video VQ-VAE, which compresses video into latents along both the temporal and spatial dimensions; a Denoising Diffusion Transformer; and a Condition Encoder.
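Schematically, the three components are wired as encode-denoise-decode: the condition encoder embeds the prompt, the diffusion transformer iteratively denoises a latent video, and the VQ-VAE decoder maps the result back to pixels. A runnable toy sketch (all names and the trivial scheduler are placeholders, not the project's actual API):

```python
import torch

def generate(vqvae_decode, dit, text_encoder, prompt, steps=50):
    """Schematic wiring of the three components; illustrative only."""
    cond = text_encoder(prompt)                  # Condition Encoder
    z = torch.randn(1, 4, 8, 32, 32)             # latent noise (B, C, T, H, W)
    for t in reversed(range(steps)):             # Denoising Diffusion Transformer
        eps = dit(z, t, cond)                    # predict noise at step t
        z = z - eps / steps                      # toy scheduler update
    return vqvae_decode(z)                       # latents -> video frames

# Toy stand-ins so the sketch runs end to end:
video = generate(
    vqvae_decode=lambda z: z.repeat_interleave(4, -1).repeat_interleave(4, -2),
    dit=lambda z, t, c: torch.zeros_like(z),
    text_encoder=lambda p: torch.randn(1, 77, 512),
    prompt="a cat surfing",
)
print(video.shape)   # torch.Size([1, 4, 8, 128, 128])
```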
11 Apr 2024 · Open-Sora-Plan v1.0.0 is a framework designed to advance video generation technology while enabling precise text control. Developed collaboratively by researchers from Peking University and Rabbitpre AI, this open-source initiative aims to replicate the capabilities of OpenAI's Sora model.
5 Mar 2024 · Open-Sora Plan. [Project Page] [Chinese Homepage] Goal. This project aims to create a simple and scalable repo to reproduce Sora (OpenAI, but we prefer to call it "CloseAI") and build knowledge about Video-VQVAE (VideoGPT) + DiT at scale. However, we have limited resources, so we deeply hope the whole open-source community can contribute to this project.