September 9, 2019

What ZAO’s Rise and Fall Means for User Privacy

Zao, the latest app to get in on the deepfake craze, has seen a tremendous rise in popularity in a remarkably short time. However, Zao’s privacy agreement has caused a significant uproar in China, and users are starting to question the app’s approach to user privacy. This raises the question: are deepfake apps just harmless fun, or are they a cause for concern?

Valentin Scorus

Content Marketing Manager

Chinese app Zao launched just a few days ago, but it has taken the online world by storm, reaching the top of the charts in a matter of days, according to data from App Annie. However, what looked like an interesting and fun app quickly turned into a heated debate about the risks such apps pose to user privacy and data security.

The Rise: What is Zao and how does it work?

Zao is an app developed by the creators of Momo, a Chinese social search and instant messaging app. Using AI-driven technology, the app allows users to put their faces in clips of famous actors and TV personalities. The process is straightforward: the user uploads a selfie, the app generates the clip, and that’s it – they’re now in their favorite movie!


Zao tops the iOS App Store chart in China

The app launched in late August this year, and it didn’t take long for it to become a huge hit with Chinese netizens. In a matter of days, it rose to the top of the Free Apps charts across the major Chinese app stores. Zao clips quickly went viral as users shared them across popular, well-established social channels like WeChat and Weibo – the rest, as they say, is history.

The Fall: User privacy and data security concerns

While Zao sounds like a fun app that simply took the face-swap concept to a new level, user privacy quickly became a concern. Some users noticed that the app’s terms and conditions granted developers the free, global, and permanent right to use any image created on the app, according to the South China Morning Post. Not only that, but developers could also transfer this right to third parties without the user’s permission.

The clause caused a huge uproar, prompting Zao’s developer, Momo, to remove it and post an official apology, saying the app would not store biometric information and would not excessively collect user information. This proved to be too little, too late: WeChat, one of the main channels through which the app went viral, started blocking users from uploading Zao videos to its platform, citing “security risks” as the reason.

Next, data security became an issue, as users grew concerned that deepfake videos created with Zao could bypass security systems such as Alipay’s Smile to Pay, a payment method popular with many Chinese mobile users, or any other system where facial recognition is used for authentication. This ultimately drew attention from China’s Ministry of Industry and Information Technology, which regulates content on the Chinese internet.

As this concern spread, Alipay reacted by releasing a statement on its official Weibo account saying, “There is a lot of online face-changing software – but no matter how realistic, it is impossible to break through the facial payment system.” Momo also released a statement, saying, “The facial payment security threshold is extremely high, and ‘face-changing’ technology realized by only one photo can’t break through the security system.”

What can app developers learn from this?

Users around the world are becoming more invested in their privacy and how their data is used. Regulatory initiatives like GDPR in Europe are helping users take control of how their data is used, which has forced developers to adapt. 

This privacy scandal is not the first of its kind, and it probably won’t be the last. The Zao controversy comes just a few months after FaceApp, the AI photo-editing app that helped bring face-altering filters into the mainstream, also came under fire over its privacy practices. FaceApp released a statement in July clarifying how its users’ data is handled – the fact that its team is based in Russia also didn’t help.

It’s very likely that AI-powered image and video editing apps will remain popular, but their future is unclear. As privacy concerns become more important to mobile app users, the sheer scale of this phenomenon will force developers to become more transparent or risk regulation. In fact, following the Zao controversy, several Chinese news outlets, including Global Times and Beijing News, questioned whether this type of app could be used maliciously in the future, to the point of becoming a threat to national security.


Deepfake images and videos are fun, there is no denying that. The fact that apps like Zao and FaceApp went viral so quickly is proof that demand is huge, and developers want to be where the users are. Whether these apps have a lasting future or are just a fad that will eventually fade remains to be seen.

The potential risks to user privacy and personal data that these apps have exposed cannot be ignored. In a world where user privacy is becoming increasingly important, only the app developers who adapt to this new demand will be left standing.