After Adobe and Jigsaw, Microsoft is also joining the game of detecting and labeling fake photos and videos with the help of AI. The company has introduced Microsoft Video Authenticator, which analyzes videos in real time and tells you whether they have been manipulated. According to Microsoft, the main goal of the new tech is to combat misinformation.
Microsoft Video Authenticator is particularly focused on spotting deepfakes. It can analyze both still photos and videos, and in both cases it will show you a “confidence score.” In other words, it’s the percentage chance that the media has been artificially manipulated. Interestingly, with videos, Video Authenticator provides this percentage in real time as it analyzes the footage frame by frame. Microsoft explains that it detects “the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.”
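Microsoft hasn’t released Video Authenticator or its API to the public, but the basic idea of a frame-by-frame confidence score is simple to picture. The sketch below is purely illustrative and is not Microsoft’s tool: it assumes a hypothetical `score_frame` classifier (stubbed out here) and an example input file `clip.mp4`, and just loops over video frames with OpenCV, printing a manipulation probability for each one.

```python
import cv2  # pip install opencv-python


def score_frame(frame) -> float:
    """Hypothetical per-frame deepfake detector.

    A real system would run a trained classification model here;
    this stub simply returns 0.0 so the script stays runnable.
    """
    return 0.0


def analyze_video(path: str):
    """Yield (frame_index, confidence) for every frame of the video,
    mirroring the idea of a real-time, frame-by-frame confidence score."""
    capture = cv2.VideoCapture(path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        yield index, score_frame(frame)
        index += 1
    capture.release()


if __name__ == "__main__":
    # "clip.mp4" is just a placeholder path for this example.
    for i, confidence in analyze_video("clip.mp4"):
        print(f"frame {i}: {confidence:.1%} chance of manipulation")
```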
Video Authenticator comes just in time for the upcoming US presidential election, when misinformation is expected to grow. When you add the pandemic to the equation, there can be a lot of fake content to recognize and label as such. Microsoft notes that methods for generating deepfakes will continue to grow in sophistication. “As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods,” the company writes on its blog. “Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media.” [via Engadget]