Seedance 2.0 officially announced the suspension of features related to live-action footage.


According to TechFlow, citing a February 10 report from Jinshi Data, ByteDance's AI video generation model Seedance 2.0 has announced that it will temporarily not support using real-life footage as the primary reference input. The model, after heavy training on videos from the creator "Film Hurricane," had been able to generate videos in host Tim's voice without any prompting.

A reporter found that in the JiMeng creator community, an operations staff member posted: "Seedance 2.0 received far more attention than expected during its internal testing phase. Thank you for your feedback. To ensure a healthy and sustainable creative environment, we are urgently making optimizations based on that feedback. For now, real-life images and videos are not supported as the primary reference." The staff member added that the platform understands that respecting the boundaries of creativity is essential, and that a more complete version of the product will be officially released after adjustments. As of press time, ByteDance had not responded.
