Video manipulation


Video manipulation is a new variant of media manipulation that targets digital video, combining traditional video processing and video editing techniques with auxiliary methods from artificial intelligence such as face recognition. In typical video manipulation, the facial structure, body movements, and voice of a subject are replicated in order to create a fabricated recording of that subject. Applications of these methods range from educational videos to videos aimed at manipulation and propaganda, a straightforward extension of the long-standing possibilities of photo manipulation. This form of computer-generated misinformation has contributed to fake news, and the technology has been used during political campaigns. Other uses are less sinister: entertainment and harmless pranks give users access to movie-quality artistic possibilities.
The proof-of-principle software Face2Face was developed at the University of Erlangen-Nuremberg, the Max Planck Institute for Informatics, and Stanford University. Such advanced video manipulation represents a step beyond earlier examples of deepfakes.

History

The concept of manipulating video can be traced back to the 1950s, when the 2-inch quadruplex tape used in videotape recorders was manually cut and spliced. The two ends of tape to be joined were painted with a ferrofluid, a mixture of iron filings and carbon tetrachloride (a toxic and carcinogenic compound), to make the recorded tracks visible under a microscope so that they could be aligned in a splicer designed for this task.
As videocassette recording developed from the 1960s through the 1990s, it became possible to record over an existing magnetic tape. This led to re-recording over specific parts of a tape to give the illusion of one continuously recorded video, the first identifiable instance of video manipulation.
In 1985, Quantel released The Harry, the first all-digital video editing and effects compositing system. It recorded and applied effects to a maximum of 80 seconds of 8-bit uncompressed digital video. A few years later, in 1991, Adobe released its first version of Premiere for the Mac, a program that has since become an industry standard for editing and is now commonly used for video manipulation. In 1999, Apple released Final Cut Pro, which competed with Adobe Premiere and was used in the production of major films such as The Rules of Attraction and No Country for Old Men.
Face detection became a major research subject in the early 2000s and remains an active area of study. In 2017, an amateur coder using the pseudonym “DeepFakes” began altering pornographic videos by digitally substituting the faces of celebrities for those of the original performers. The word deepfake has since become a generic term for the use of algorithms and facial-mapping technology to manipulate videos.
On the consumer side, popular video manipulation programs FaceApp and Faceswap, developed from similar technology, have become increasingly sophisticated.

Types of video manipulation

Computer applications are becoming increasingly capable of generating fake audio and video content that looks real. A video published by researchers demonstrates how video and audio manipulation works using facial recognition. Although video manipulation could be thought of as simply piecing together different video clips, the techniques extend further than that. For example, an actor can sit in front of a camera moving their face. The computer then generates the same facial movement in real time on an existing video of Barack Obama: when the actor shakes their head, Obama shakes his head, and the same happens when the actor speaks. This not only creates fake content but makes that content appear more authentic than other types of fake news, as video and audio were long considered the most reliable types of media by many people.
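The expression-transfer idea described above can be sketched in toy form: for each frame, the source actor's expression parameters are copied onto the target's identity, so the target appears to perform the actor's movements. The FaceParams structure and the blendshape-style parameter names below are illustrative assumptions, not the actual pipeline of Face2Face or any real reenactment system.

```python
from dataclasses import dataclass

@dataclass
class FaceParams:
    """Simplified per-frame face model: who the face belongs to,
    plus a few expression parameters (hypothetical names)."""
    identity: str
    expression: dict

def reenact(source_frames, target_identity):
    """Core reenactment idea: keep the target's identity but adopt the
    source actor's expression parameters in every frame."""
    return [FaceParams(identity=target_identity,
                       expression=frame.expression)
            for frame in source_frames]

# Two frames of a (fictional) actor driving the animation.
actor = [FaceParams("actor", {"mouth_open": 0.0, "head_yaw": 0.0}),
         FaceParams("actor", {"mouth_open": 0.9, "head_yaw": 0.2})]

# Each output frame carries the subject's identity with the actor's expression.
fake = reenact(actor, "subject")
```

Real systems additionally track a dense 3D face model, re-render the manipulated face, and blend it back into the target video; this sketch only captures the parameter-transfer step.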
One of the most dangerous applications of video manipulation is in politics, where manipulated campaign videos can pose a threat to relations between nations. Dartmouth College computer science professor Hany Farid has commented on video manipulation and its dangers, noting that actors could generate videos of Trump claiming to launch nuclear weapons. Such fabricated videos could spread on social media before the record can be corrected, possibly resulting in war. Despite the spread of manipulated video and audio, research teams are working to combat the problem. Christian Theobalt, a member of a team working on the technology at the Max Planck Institute for Informatics in Germany, states that researchers have developed forensic methods to detect fakes.

Video manipulation and fake news

With fake news becoming increasingly prominent in popular culture and with rapid advances in audio and video manipulation technology, the public increasingly encounters fake news supported by deceptive videos. The possible categories of fake news continue to expand, but five main types are commonly identified: satire or parody, selective reporting, sloppy journalism, clickbait, and conspiracies. Although all five types are prominent globally, one of the most destructive forms of fake news, video and audio manipulation, can appear within any of them. Video and audio manipulation is defined as a new variant of media manipulation that targets digital video, combining traditional video processing and video editing techniques with auxiliary methods from artificial intelligence such as face recognition. The results range from artistic videos produced for aesthetic effect to videos aimed at manipulation and propaganda, a straightforward extension of the long-standing possibilities of photo manipulation.

Digital fakes

A digital fake is a digital video, photo, or audio file that has been altered or manipulated with digital application software. Deepfake videos fall within the category of digital fake media, but a video may be digitally altered without being considered a deepfake. Alterations may be made for entertainment, or for more nefarious purposes such as spreading disinformation; the resulting media can be used for malicious attacks, political gain, financial crime, or fraud.