This new artificial intelligence can create videos from text, and it has other tricks up its sleeve

Another impressive new model has been unveiled.

Artificial intelligence is fundamentally rewriting the rules of content creation. Not only has it put a range of image-generation tools in the hands of the public, but thanks to ChatGPT and the services that followed it, we can now generate entire scripts by specifying just a few basic parameters.

The next step may be to conquer the video format: last year Meta and Google separately introduced their own in-development solutions, Make-A-Video and Imagen Video. Now, however, a much smaller player, Runway, has upstaged both giants.

The New York-based company was founded in 2018 with the goal of making machine-learning-based creative tools accessible to people without deep computer skills. The company, which employs only about 50 people, recently introduced its latest model, Gen-2, which can generate video clips from simple text prompts, much as DALL-E 2 and similar image generators produce still images.

However, this is far from the full extent of what the technology offers. Runway's algorithm can create videos from images, or from combinations of images and text; it can be used to stylize existing video footage, "dress" raw material in striking textures, or even compose a video scene from several objects. Impressive examples of all of this can be seen on the Gen-2 site, and the following video attempts to summarize the broad repertoire:

The generator, trained on 240 million photos and 6.4 million video clips, is not yet available to the general public, but on Runway's Discord channel the company revealed that it will offer the service as a paid open beta, for which there is already considerable interest.


