When OpenAI revealed its new generative video model, Sora, last month, it invited a handful of filmmakers to try it out. This week the company published the results: seven surreal short films that leave no doubt that the future of generative video is coming fast.
The first batch of models that could turn text into video appeared in late 2022, from companies including Meta, Google, and video-tech startup Runway. It was a neat trick, but the results were grainy, glitchy, and just a few seconds long.
Fast-forward 18 months, and the best of Sora’s high-definition, photorealistic output is so stunning that some breathless observers are predicting the death of Hollywood. But fears of misuse are growing too. Read the full story.
—Will Douglas Heaven
This piece is part of MIT Technology Review’s What’s Next series, looking across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.
Interested in learning more about how filmmakers are using Sora? Check out how three of them are experimenting with it to create stunning videos, and find out what they told us they believe is coming next.