Runway AI can produce short videos based on a few words

Are we ready for new video-generation systems?

When it comes to generative AI, there’s a lot more than ChatGPT-like language models. Text-to-image is already mainstream, and brewing in the background as the next evolutionary step in generative AI is the ability to convert text into video.

Similar to how AI language models help us string sentences together, new video-generation systems hold the potential to speed up the work of moviemakers and digital artists. They also hold the potential to be the next big engine of misinformation. As with any major innovation in AI, widespread use brings new ethical challenges.

New Engine For Misinformation  

Google and Meta unveiled their first video-generation systems last year with ‘Imagen Video’ and ‘Make-A-Video Meta AI’. However, they were not shared with the public due to concerns that the systems could eventually be used to spread disinformation with newfound speed and efficiency. Despite the addition of safeguards to prevent the creation of “fake, hateful, explicit or harmful content”, researchers warned that the technology had too much potential to promote stereotypes, given that it was trained on a limited data set of videos and images.

As newer startups take hold of the reins, however, opinions on advancing the technology continue to vary. Runway AI’s chief executive, Cristóbal Valenzuela, said he believed the technology was too important to keep in a research lab, despite its risks. “This is one of the single most impressive technologies we have built in the last hundred years,” he said. “You need to have people actually using it.”

Runway is one of several companies building artificial intelligence technology that will soon let people generate videos simply by typing a short prompt. Companies like Runway rely on neural networks that learn by analyzing thousands of videos, which lets them string images together more coherently. Like early versions of tools such as DALL-E and Midjourney, the technology sometimes combines concepts and images in curious ways. But experts believe the flaws, chiefly a weak grasp of the relationship and consistency between frames, can be ironed out as the systems are trained on more and more data.
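For readers curious what “typing a prompt and getting a clip” looks like in practice, here is a minimal sketch using the open-source Hugging Face diffusers library. The model name, prompt, and parameters below are illustrative assumptions; this is not Runway’s system or anything described in this article.

```python
# Illustrative sketch only: a publicly available text-to-video diffusion pipeline.
# Assumes the Hugging Face `diffusers` library and a CUDA-capable GPU.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# Load an open-source text-to-video model (hypothetical choice for illustration).
pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# The pipeline denoises a batch of latent frames jointly, which is how the model
# tries to keep consecutive frames consistent with each other and with the prompt.
result = pipe("a drone shot of waves crashing on a rocky coastline", num_frames=24)
frames = result.frames[0]  # frames for the first prompt; exact output shape varies by library version

# Write the generated frames out as a short video clip.
export_to_video(frames, "clip.mp4", fps=8)
```

The same prompt run twice will usually produce different clips, and short sequences like this are where current systems still struggle most with frame-to-frame consistency, which is the flaw the companies are betting more training data will fix.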

Runway Research is said to be at the forefront of these developments and has vowed to “ensure that the future of content creation is both accessible, controllable and empowering for users.” In the same statement, the company described machine learning techniques applied to audiovisual content as something that will “forever change art, creativity, and design tools.”

Building Applications That Cannot Be Abused

With AI advancements in voice matching and the ability to alter and create realistic videos of just about anything now at everyone’s fingertips, immense harm can be done by falsifying the words and actions of public figures and ordinary people alike. Many feel the technology has entered new and dangerous territory, past the point of no return.

Those advocating for the technology’s advancement agree that developers should be held responsible for what their AI generates. The ultimate goal is to give people, governments, and institutions the ability to tell truth from fiction. Within that lies the responsibility of building applications that cannot be abused.
