“We have decided not to release the Imagen Video model or its source code until these concerns are mitigated.”

Challenge Mode

Google’s impressive-looking new video-generating neural network is up and running — but issues with “problematic” content mean that for now, the company is keeping it from public release.

In a paper about its Imagen Video model, Google spends pages waxing poetic about the artificial intelligence's impressive text-to-video capabilities before briefly admitting that, due to "several important safety and ethical challenges," the company isn't releasing it.

As for what that problematic content looks like, specifically, the company characterizes it as "fake, hateful, explicit or harmful." Translation? It sounds like this naughty AI is capable of spitting out videos that are sexual, violent, racist, or otherwise unbecoming of an image-conscious tech giant.

“While our internal testing suggests much of explicit and violent content can be filtered out, there still exists social biases and stereotypes which are challenging to detect and filter,” the company’s researchers wrote. “We have decided not to release the Imagen Video model or its source code until these concerns are mitigated.”

Ghost in the Machine

Under the “biases and limitations” subheading, the researchers explained that although they tried to train Imagen against “problematic data” in order to teach it how to filter that stuff out, it’s not quite there yet.

The admission underscores an intriguing reality in machine learning: it's not uncommon for researchers to build a model that can generate extraordinary results — Imagen really does look very impressive — while struggling to control its potential outputs.

In sum, it sounds a lot like the issues we've seen with other neural networks, from the "Dungeon Master" AI that people began using to role-play child abuse to the less severe tendency of the Craiyon image generator, formerly known as DALL-E Mini, to create realistic photos of drugs.

Where Imagen is different, of course, is that it generates video from text, which until very recently wasn’t possible.

It’s one thing to read or see a still image of gore or porn; it’s another entirely to see moving video of it, which makes Google’s decision seem pretty astute.
