Some time ago we devised a new, unpublished algorithm for edge-preserving smoothing. We discovered that it significantly reduces visible skin discolorations, blemishes, and wrinkles in facial photos and videos while leaving minimal signs of blurring or other image manipulation. In other words, it produces results resembling hand-applied photo retouching, but can be applied automatically to every frame of a video. We showed our results to Digital Anarchy, who licensed it as the core skin-smoothing algorithm for their Beauty Box plugin for After Effects, Photoshop, and Final Cut Pro. Beauty Box went on to win multiple awards in 2010, including:
- Mario award for New Technology at NAB 2010;
- 5 star rating in Photoshop User;
- Best Video Software Plugin award in Videomaker Magazine’s Best Video Products of the Year 2010.
We originally created this image-processing technology as CPU-only, but we’ve since ported it to CUDA-based Nvidia GPUs without loss of quality. It now runs in the same speed range (tending toward the faster end) as our GPU-accelerated cartoon and painterly rendering effects (see http://toonamation.com/2010/11/28/artistic-video-chat-effects-in-real-time), meaning it can enhance facial appearance in HD video in real time.
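Our algorithm itself is unpublished, so we can’t show it here, but for readers curious about what edge-preserving smoothing means in general, here is a minimal sketch of a classic bilateral filter in Python/NumPy. This is a well-known textbook smoother, not our algorithm, and all function names and parameter values below are illustrative:

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_space=2.0, sigma_range=0.1):
    """Classic bilateral filter on a grayscale image in [0, 1]:
    smooths flat regions while preserving edges by weighting each
    neighbor on both spatial distance and intensity difference."""
    h, w = img.shape
    padded = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img)
    # Precompute the spatial Gaussian weights over the window.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_space**2))
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weight: neighbors with similar intensity count more,
            # so strong edges are not averaged across.
            rng = np.exp(-(window - img[y, x])**2 / (2 * sigma_range**2))
            weights = spatial * rng
            out[y, x] = (weights * window).sum() / weights.sum()
    return out
```

With settings like these, pixels on the far side of a strong intensity edge receive near-zero weight, so the edge stays sharp while noise and small blemishes in flat regions average out. A per-pixel filter of this kind is also naturally data-parallel, which is what makes GPU acceleration of this class of effect attractive.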
Here’s some example output. Click the icons below to open the 1280×720p original videos (chosen from iStock and edited into brief facial close-ups) or the corresponding post-process cleanup videos.
For Skinsmooth Level 1 we selected settings to produce a pleasing amount of skin cleanup while otherwise looking as naturalistic as possible (notice the retained hair detail and skin pores), so the videos wouldn’t look noticeably manipulated. Skinsmooth Level 2 applies somewhat more cleanup, resembling classic photo retouching or airbrushing. Many intermediate settings, as well as stronger or weaker adjustments to the degree and quality of smoothing, are possible. For most applications, we’ll probably offer several preset combinations to support a range of user preferences. The effect gives best results on close-ups and medium shots, and settings can be further optimized by taking both shot type and image resolution into account.
We see a wide range of potential applications for this technology, including:
- Enhancing the appearance of participants, with or without makeup, in live or rapid-turnaround broadcast TV, especially where there are HD facial close-ups: newscasts; interviews on talk, sports, and commentary shows; and on-scene interviews and reports for live streaming.
- Improving the appearance of participants in business video conferences.
- Improving the appearance of self-conscious teens (or anyone with a skin condition/concern) in consumer-oriented video chat.
- Enhancing facial appearance in videos made for, and video chat associated with, online dating sites. Note there is already an industry for hand-retouching photos submitted to such sites. We could potentially do the same in high volume, plus offer similar enhancements in real time for submitted videos.
- Allowing online facial photo/video cleanup for any non-government site where people are submitting photos/videos, or as a quick processing intermediary when people are sending photos/videos to each other through mobile devices.
We’re not primarily consumer-facing, so we’re looking to partner with companies in the above application areas who are. To run in real time, our effects require a CUDA-enabled Nvidia card somewhere along the transmission path, e.g., embedded in a camera, mobile device, intermediate server, display device, or a computer at the broadcast or receiving end. We’re simultaneously exploring architectures in which our GPU-accelerated image processing runs offsite in the cloud, e.g., using Amazon EC2’s new Cluster GPU instances. We also expect to port our software in the near future to OpenCL-capable ARM processors for mobile devices as they become available.