New face-swapping app flags up real "deepfaking" concerns


Just a matter of weeks after I wrote about the security risks surrounding FaceApp, the app everyone was using to post selfies of what they will look like in years to come, the same issues have hit a new app in China.


A face-swapping app, which lets users star in blockbuster movie clips simply by uploading their photos, jumped straight to the top of the domestic download rankings, but things are not what they seem.


The Russian app used artificial intelligence to alter your photos while gaining access to your data, including your Facebook friends list.


The Chinese app, in comparison, uses AI to manipulate videos, producing realistic-looking but fabricated clips that can be downloaded and redistributed.


Just like FaceApp's, the Zao app's early terms and conditions said it had “free, irrevocable, permanent, transferable and relicense-able” rights to all user-generated content.


As soon as users became aware, the complaints followed, along with poor ratings on the App Store, and the developers updated their user agreement: the app will not use headshots or mini-videos created by users for anything other than improving the app, or purposes the users have agreed to in advance.


So it seems community pressure and democracy still work, but it remains to be seen what “improving the app” actually covers.


Even better is the news that if users delete the content they upload, Zao will erase it from its servers as well.


That is great news, but who will check and hold the company accountable?


The issues have also been flagged up by the China E-Commerce Research Centre, with further changes set to be made.


These data privacy issues are not the only problems to come to light with the launch of the Zao app; the fabrication of videos, also known as “deepfaking”, raises concerns of its own.



Although there are many ways the technology can benefit business and entertainment, such as bringing dead actors and musicians back to the screen so they can continue to perform, or creating multilingual ad campaigns, there are also many dangers.


Deepfakes have raised real concerns about their use for more serious offences and malicious activity.


According to NiemanLab, concerns are rising that the technique could be used in seven different ways to impact the 2020 US election.


We all know the impact fake news had on previous elections, but this could take things to a whole new level. Just imagine if videos could be manipulated to show candidates doing or saying things that never actually happened. These examples, known as “cheapfakes”, call everything into question, making it even harder for people to know what to believe.



A clip of House Speaker Nancy Pelosi, which has already been widely circulated, makes it appear that she is drunkenly slurring her words.


Despite knowing the video was fake, Facebook refused to remove it, although it did flag the fact that it had been manipulated; by then, though, the damage may already have been done!



So now, as well as making sure we read all the terms and conditions of new apps that ask for our images, it is going to be harder than ever to believe anything we see on the internet.


Happy surfing!!


© 2020 by NSC42 LTD
