It’s difficult to shoot smooth, flowing slow-motion video on a phone. Existing technology can produce satisfactory footage within a limited range of frame rates, but it falls short, and eventually produces choppy results, when pushed beyond those limits. That is where the new technology comes in.
For example, while current technology can turn a 30-fps video into a half-speed 60-fps video, it fails to generate a 240-fps video from that same 30-fps source. The reason is that instead of one extra frame being generated between each pair of recorded frames, seven new frames are now needed. To overcome this, the researchers built a convolutional neural network. To make it as capable as possible, the network was trained on 11,000 videos shot at 240 fps. This training allows the system to estimate the optical flow between two recorded frames and then synthesize the intermediate images needed to bridge them smoothly.
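The arithmetic behind that example is simple to sketch. The helper below is purely illustrative (not Nvidia’s code): it computes how many frames must be synthesized between each pair of recorded frames, and at which fractional timestamps an interpolation network would place them.

```python
def intermediate_frames_needed(source_fps: int, target_fps: int) -> int:
    """Number of frames to synthesize between each pair of recorded frames."""
    if target_fps % source_fps != 0:
        raise ValueError("target fps must be a whole multiple of source fps")
    return target_fps // source_fps - 1

def intermediate_timestamps(source_fps: int, target_fps: int) -> list:
    """Fractional positions (between 0 and 1) of the synthesized frames."""
    n = intermediate_frames_needed(source_fps, target_fps)
    return [i / (n + 1) for i in range(1, n + 1)]

print(intermediate_frames_needed(30, 60))    # 1 extra frame per pair
print(intermediate_frames_needed(30, 240))   # 7 extra frames per pair
print(intermediate_timestamps(30, 240))      # [0.125, 0.25, ..., 0.875]
```

This is why the jump from 60 fps to 240 fps is so much harder: the network must invent seven plausible in-between images per pair rather than one.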
The researchers explained, “Our method can generate multiple intermediate frames that are spatially and temporally coherent. Our multi-frame approach consistently outperforms state-of-the-art single frame methods.”
They also explained that even though recently launched smartphones, like the Samsung Galaxy S9, can shoot slow-mo videos at high frame rates, this isn’t a practical approach for small mobile devices. “While it is possible to take 240-frame-per-second videos with a cell phone, recording everything at high frame rates is impractical, as it requires large memories and is power-intensive for mobile devices.”
If it succeeds in its testing phase, the new system will open up new opportunities for professional videographers and even Instagrammers. However, the researchers say it may take time for their invention to reach smartphones, because it is currently too “processing intensive” and demands more computing power than today’s phones can deliver. “The processing power required for doing this is more than what a phone would have in this point in time,” said Jan Kautz, one of the Nvidia researchers, in an interview with ZDNet, “but you could imagine uploading to a server – there are ways of making it work and giving it to users.”
This prototype was recently presented at the Computer Vision and Pattern Recognition (CVPR) conference in Salt Lake City, Utah. Until it makes it through every stage, it seems we’re stuck recording choppy slow-motion videos.
h/t: New Atlas