Stability AI has released a new AI model, Stable Virtual Camera, which the company claims can transform 2D images into “immersive” videos with realistic depth and perspective.
Virtual cameras are tools often used in filmmaking and 3D animation to capture and navigate digital scenes in real time. With Stable Virtual Camera, Stability aims to add generative AI to the mix for greater control and adaptability, the company said in a blog post.
Stable Virtual Camera produces “novel views” of a scene from one or more images (up to 32 in total) at camera angles specified by the user. The model can generate videos that follow “dynamic” camera paths, or preset paths such as “Spiral,” “Dolly Zoom,” “Move,” and “Pan.”
The current version of Stable Virtual Camera, a research preview, can generate videos in square (1:1), portrait (9:16), and landscape (16:9) aspect ratios at up to 1,000 frames in length. Stability warns that the model can produce lower-quality results in some scenarios, however, particularly with images of humans, animals, or “dynamic textures” such as water.
“Highly ambiguous scenes, complex camera paths that intersect objects or surfaces, and irregularly shaped objects can cause artifacts,” Stability notes in the blog post, “especially when target viewpoints differ significantly from the input images.”
Stable Virtual Camera is available for research use under a non-commercial license. It can be downloaded from the AI development platform Hugging Face.
Stability, the startup behind the popular image generation model Stable Diffusion, raised new cash last year as investors such as Eric Schmidt and Napster co-founder Sean Parker sought to turn the business around. Emad Mostaque, Stability’s co-founder and former CEO, reportedly mismanaged the company into financial distress, prompting staff to resign, a partnership with Canva to fall through, and investors to worry about the company’s prospects.
In recent months, Stability has hired a new CEO, appointed “Titanic” director James Cameron to its board, and released several new image generation models. Earlier in March, the company partnered with chipmaker Arm to bring an AI model that can generate audio, including sound effects, to mobile devices running Arm chips.
