OpenAI Introduces Sora, a Text-to-Video Model for Realistic Minute-Long Generation
Global N Press
3/14/2024 · 1 min read


OpenAI has introduced Sora, a new text-to-video model capable of generating realistic, minute-long video scenes from textual descriptions. The model demonstrates notable capabilities in simulating real-world physics, maintaining character consistency across frames, and producing high-resolution, cinematic-style visuals.
The announcement has sparked widespread discussion among professionals about its potential long-term impact on industries such as filmmaking, advertising, and education, and has raised questions about the future of synthetic media.
Industry analysts view Sora as a significant step toward the commercialization of AI-generated video and suggest it strengthens OpenAI's position in the competitive landscape of multimodal foundation models.