Deepfakes (photos or videos manipulated by artificial intelligence) can be entertaining when the technology is used to create memes or even movies. However, their rise is also cause for concern: in the age of fake news, they make excellent tools for producing misleading content.
Microsoft, however, has an idea for filtering this content, according to a Mashable article. On Tuesday, the company announced two new technologies that can determine whether the content of an image or video is authentic or has been edited. One of them is Microsoft Video Authenticator, which analyzes photos and videos and returns a confidence score indicating the likelihood that they have been manipulated. The tool works by detecting elements that are unlikely to be noticed by the human eye, such as subtle fading, greyscale artifacts, and blending boundaries where edited regions meet the original image.
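To illustrate the general idea of scoring an image for manipulation, here is a toy sketch. This is not Microsoft's algorithm (which has not been published); it merely mimics the concept of flagging unusually sharp local transitions, since blended edits can leave boundary artifacts. The function name, threshold, and the row-of-pixels image format are all assumptions made for the example.

```python
def manipulation_score(image, threshold=100):
    """Return the fraction of horizontally adjacent pixel pairs whose
    brightness jump exceeds `threshold` (0.0 = smooth, 1.0 = all abrupt).
    A crude, illustrative stand-in for real artifact detection."""
    jumps = total = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            total += 1
            if abs(a - b) > threshold:
                jumps += 1
    return jumps / total if total else 0.0

smooth = [[10, 12, 14, 16]] * 3    # gentle gradient, no seams
spliced = [[10, 12, 200, 202]] * 3 # abrupt seam in the middle of each row
print(manipulation_score(smooth))   # 0.0
print(manipulation_score(spliced))  # 1/3 of pairs are abrupt
```

A production detector would of course use a trained neural network over many such low-level cues rather than a single hand-written heuristic.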
The second program will launch as part of Microsoft’s cloud service, Azure. It will give creators of digital visual content a feature that embeds digital signatures and certificates in the image or video file. When a photo or video spreads across the Internet, the digital signature travels with the file in its metadata. A companion browser add-on can then read these signatures, telling consumers who originally created the content and whether it still matches the original.
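The provenance mechanism described above can be sketched in a few lines. This is a simplified illustration, not Microsoft's implementation: real systems use asymmetric signatures and certificate chains, while this sketch substitutes an HMAC with a shared key to keep the example self-contained. The key, function names, and metadata fields are all hypothetical.

```python
import hashlib
import hmac

# Hypothetical publisher key; a real system would use a private key
# plus a certificate chain, not a shared secret.
PUBLISHER_KEY = b"demo-secret"

def sign_content(content: bytes) -> dict:
    """Produce metadata binding a signature to the content's hash."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(PUBLISHER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": signature, "creator": "Example Studio"}

def verify_content(content: bytes, metadata: dict) -> bool:
    """Recompute the hash and check it still matches the signed metadata."""
    digest = hashlib.sha256(content).hexdigest()
    expected = hmac.new(PUBLISHER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == metadata["sha256"] and hmac.compare_digest(expected, metadata["signature"])

video = b"original video bytes"
meta = sign_content(video)
print(verify_content(video, meta))            # True: file untouched
print(verify_content(b"edited video", meta))  # False: content changed
```

The key point is that any edit to the file changes its hash, so the signature in the metadata no longer matches and the viewer can be warned.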
As the AI that creates deepfakes grows more sophisticated, Microsoft will also continually improve its own AI to detect deepfake content.
Incidentally, Microsoft isn’t the only tech company fighting deepfakes. Facebook banned such recordings back in January, Twitter labels content that has been tampered with, and Reddit prohibits the use of manipulated content to spread fake news on its site.