ChaoticMusings

Post Info TOPIC: Deepfake used to attack activist couple shows new disinformation frontier


Admin

Posts: 16885
Date:
Deepfake used to attack activist couple shows new disinformation frontier
 


A combination photograph showing an image purporting to be of British student and freelance writer Oliver Taylor (L) and a heat map of the same photograph produced by Tel Aviv-based deepfake detection company Cyabra is seen in this undated handout photo obtained by Reuters. The heat map, which was produced using one of Cyabra's algorithms, highlights areas of suspected computer manipulation. The digital inconsistencies were one of several indicators used by experts to determine that Taylor was an online mirage. Cyabra/Handout via REUTERS


 

Online profiles describe him as a coffee lover and politics junkie who was raised in a traditional Jewish home. His half dozen freelance editorials and blog posts reveal an active interest in anti-Semitism and Jewish affairs, with bylines in the Jerusalem Post and the Times of Israel.

The catch? Oliver Taylor seems to be an elaborate fiction.

His university says it has no record of him. He has no obvious online footprint beyond an account on the question-and-answer site Quora, where he was active for two days in March. Two newspapers that published his work say they have tried and failed to confirm his identity. And experts in deceptive imagery used state-of-the-art forensic analysis programs to determine that Taylor’s profile photo is a hyper-realistic forgery - a “deepfake.”

Who is behind Taylor isn’t known to Reuters. Calls to the U.K. phone number he supplied to editors drew an automated error message and he didn’t respond to messages left at the Gmail address he used for correspondence.

Reuters was alerted to Taylor by London academic Mazen Masri, who drew international attention in late 2018 when he helped launch an Israeli lawsuit against the surveillance company NSO on behalf of alleged Mexican victims of the company’s phone hacking technology.

In an article in U.S. Jewish newspaper The Algemeiner, Taylor had accused Masri and his wife, Palestinian rights campaigner Ryvka Barnard, of being “known terrorist sympathizers.”

Masri and Barnard were taken aback by the allegation, which they deny. But they were also baffled as to why a university student would single them out. Masri said he pulled up Taylor’s profile photo. He couldn’t put his finger on it, he said, but something about the young man’s face “seemed off.”   

Six experts interviewed by Reuters say the image has the characteristics of a deepfake.

“The distortion and inconsistencies in the background are a tell-tale sign of a synthesized image, as are a few glitches around his neck and collar,” said digital image forensics pioneer Hany Farid, who teaches at the University of California, Berkeley.  

Artist Mario Klingemann, who regularly uses deepfakes in his work, said the photo “has all the hallmarks.”

“I’m 100 percent sure,” he said.

 

 


__________________


 

Anonymous

Date:
Permalink   
 

How can you make a 'heat map' from a photo? 



__________________


Admin

Posts: 16885
Date:
Permalink   
 

Anonymous wrote:

How can you make a 'heat map' from a photo? 


It's not a 'heat map' in the literal sense. The software picks up alterations in the pixels of the image.
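
To give a rough idea of how such a tool can flag "alterations in the pixels," here is a toy sketch: it scores each pixel by how sharply it deviates from its local neighborhood, so a pasted or synthesized patch lights up against a clean background. This is only an illustration of the general idea, not Cyabra's actual algorithm.

```python
import numpy as np

def heat_map(img, k=3):
    """Toy anomaly 'heat map': per-pixel residual between the image and
    a k-by-k box blur. Pixels that deviate sharply from their
    neighborhood score hotter. Illustrative only."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    blurred = np.zeros((h, w), dtype=float)
    for dy in range(k):          # box blur via shifted sums
        for dx in range(k):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= k * k
    residual = np.abs(img - blurred)
    return residual / (residual.max() or 1.0)  # normalize to [0, 1]

# Smooth gradient image with one "pasted" noisy patch standing in for
# a manipulated region.
rng = np.random.default_rng(0)
img = np.tile(np.linspace(0, 255, 64), (64, 1))
img[20:30, 20:30] += rng.normal(0, 40, (10, 10))  # simulated tampering
hm = heat_map(img)
print(hm[20:30, 20:30].mean() > hm[40:50, 40:50].mean())  # prints True
```

Real detectors look for subtler statistics (compression artifacts, noise patterns, GAN fingerprints), but the output is the same kind of per-pixel suspicion map rendered as a heat map.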

 

Deepfakes rely on a type of neural network called an autoencoder. These consist of an encoder, which reduces an image to a lower-dimensional latent space, and a decoder, which reconstructs the image from the latent representation. Deepfakes use this architecture with a universal encoder that encodes a person into the latent space. The latent representation captures key information about their facial features and body posture. It can then be decoded with a model trained specifically for the target. The result is that the target's detailed appearance is superimposed on the underlying facial and body features of the original video, as represented in the latent space.
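
The dataflow can be sketched in a few lines. Note the matrices below are random placeholders standing in for trained networks; the point is only the structure: one shared encoder, one decoder per identity, and the swap performed by mixing them.

```python
import numpy as np

rng = np.random.default_rng(1)
D, Z = 256, 32  # flattened face size, latent dimension

# One shared encoder, one decoder per identity. In a real deepfake
# pipeline these are deep convolutional networks trained jointly;
# random matrices here only illustrate the dataflow, not a trained model.
encoder = rng.normal(0, 0.1, (Z, D))
decoder_source = rng.normal(0, 0.1, (D, Z))
decoder_target = rng.normal(0, 0.1, (D, Z))

def encode(face):
    # Compress the face to a low-dimensional latent code; pose,
    # expression, and lighting end up captured here.
    return np.tanh(encoder @ face)

def decode(latent, decoder):
    # Reconstruct a face from the latent code with an
    # identity-specific decoder.
    return decoder @ latent

source_face = rng.normal(0, 1, D)

# Normal reconstruction: encode and decode with the same identity.
reconstruction = decode(encode(source_face), decoder_source)

# The swap: the source's latent code (expression/pose) is decoded with
# the *target's* decoder, superimposing the target's identity.
swapped = decode(encode(source_face), decoder_target)

print(reconstruction.shape, swapped.shape)  # both (256,) face vectors
```

Because the encoder is shared, the latent code for a given expression is the same regardless of which decoder renders it, which is exactly what makes the face swap possible.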

A popular upgrade to this architecture attaches a generative adversarial network (GAN) to the decoder. A GAN trains a generator, in this case the decoder, and a discriminator in an adversarial relationship. The generator creates new images from the latent representation of the source material, while the discriminator attempts to determine whether or not an image is generated. This drives the generator to create images that mimic reality extremely well, since any defects would be caught by the discriminator. Both algorithms improve constantly in a zero-sum game. This makes deepfakes difficult to combat, because they are constantly evolving: any time a defect is detected, it can be corrected.
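
The zero-sum loop itself fits in a toy example. Here the "real data" is just the constant 4.0, the generator is a single parameter theta, and the discriminator is a one-feature logistic classifier; a deliberately minimal illustration of alternating adversarial updates, not a real deepfake model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy GAN on 1-D "images": real sample = 4.0, generator output = theta,
# discriminator D(x) = sigmoid(a*x + b).
real, theta, a, b = 4.0, 0.0, 0.0, 0.0
lr, history = 0.02, []

for _ in range(5000):
    fake = theta
    s_r, s_f = sigmoid(a * real + b), sigmoid(a * fake + b)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    a += lr * ((1 - s_r) * real - s_f * fake)
    b += lr * ((1 - s_r) - s_f)

    # Generator step: move theta so the discriminator scores it as real
    # (non-saturating loss: ascend log D(fake)).
    s_f = sigmoid(a * theta + b)
    theta += lr * (1 - s_f) * a
    history.append(theta)

# Over training the generator's output gravitates toward the real value
# while the discriminator's edge erodes; the two oscillate around an
# equilibrium where D can no longer tell them apart.
print(round(float(np.mean(history[-1000:])), 1))
```

Every gradient the discriminator uses to catch a fake is, in the next step, exactly the signal the generator uses to fix it, which is the "any time a defect is detected, it can be corrected" dynamic described above.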

 

 



-- Edited by Digger on Sunday 19th of July 2020 07:42:07 PM




__________________


 



Admin

Posts: 16885
Date:
Permalink   
 

So, basically, this can be used very maliciously.

__________________


 
