Zama's Developer Survey Confirms That AI Amplifies Cybersecurity and Data Privacy Concerns
At Brightside AI, we share the concerns highlighted in the recent Zama Developers Survey, which found that over half of developers see AI as nearly as big a threat to data privacy as cybercrime. Notably, 98% of them think we need to take active steps to address future privacy issues.
This resonates with our mission to safeguard teams against cyber threats, especially as cybercriminals are using AI to launch more sophisticated attacks. By mitigating personal data risks and conducting AI-driven phishing exercises, we align with the goal of creating a safe digital environment where privacy is key.
Together with Zama, we support the goal of ensuring that innovation is accompanied by strong privacy safeguards.
Zama's Developers Survey
New privacy research pegs AI as a rival threat to cybercrime
More than half of developers believe AI will almost equal cybercrime in terms of risk to data privacy
Developers are concerned about current regulatory frameworks, with 98% advocating for proactive measures to address future data privacy concerns
PARIS, 21 MAY 2024: New research released today reveals the extent of concern regarding the future threat posed by AI and machine learning to our privacy.
Cybercrime is still seen as the main threat, cited by 55% of respondents, but AI comes in a close second at 53%. Despite AI being a relatively new menace, the research shows that developers believe the technology is rapidly catching up with cybercrime as a threat as it becomes more mainstream. The cost of cybercrime is projected to reach $13.82 trillion by 2028; with increasingly sophisticated AI potentially in the hands of a new generation of cybercriminals, this cost could grow exponentially.
The study, commissioned by Zama, a Paris-based deep tech cryptography firm specializing in Fully Homomorphic Encryption (FHE), surveyed developers across both the UK and US.
During the research, more than 1,000 UK and US developers were asked their opinions on privacy, to uncover insight from the people who build privacy protection into everyday applications. The research revealed developers' own perceptions of and relationship with privacy, delving into subjects such as what privacy considerations should be at the center of evolving innovation frameworks, who holds ultimate ownership of privacy, and what they think of the approach to regulation.
Beyond the significant concerns about AI's threat, the research also reveals that 98% of developers believe steps need to be taken now to address future privacy and regulatory framework concerns. 72% said that regulations made to protect privacy are not built for the future, and 56% believe that dynamic regulatory structures, which are meant to adapt to tech advancements, could pose an actual threat.
“Despite cybercrime being expected to surge to a cost of trillions in the next few years, 55% of the developers we surveyed stated that they feel cybercrime is only ‘marginally more of an issue’ than the threat AI will pose to privacy. We have seen from our work that many developers are the real champions of privacy in organizations, and the fact that they have legitimate concerns about the privacy of our data, in relation to the surge in AI adoption, is a real worry,” says Pascal Paillier, CTO and Co-founder of Zama.
“Zama shares the concerns expressed by developers about the privacy risks posed by AI and its potential irresponsible use. Regulators and policymakers should take this insight into consideration as they try to navigate this new world. It’s important not to underestimate the very real threat highlighted by the experts who think about protecting privacy every day, and to make sure upcoming regulations address the increased risks to users’ privacy,” he added.
The survey went on to reveal that 30% of developers believe that those making the regulations are not as knowledgeable as they could be about all the technologies that should be taken into consideration, which itself presents a real danger, while 17% believe this poses a possible threat to future tech advancements.
“It’s undoubtedly an exciting time for innovation, especially with AI advancements developing as fast as they have. But with every new development, privacy must be at the center; it’s the only way to ensure the data that powers new innovative use cases is protected. Developers know this, embracing the vision championed by Zama in which they have the ability and responsibility to safeguard the privacy of their users. It’s clear, in analyzing their insights, that they would like to see regulators taking more responsibility for understanding how Privacy Enhancing Technologies can be used to ensure the privacy of even the newest innovations, including generative AI. Advanced encryption technology such as FHE can play a positive role in ensuring innovation can still flourish while protecting privacy at the same time,” he adds.
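To make the idea of computing on protected data concrete, here is a toy sketch of an additively homomorphic scheme in the style of the Paillier cryptosystem. This is only an illustration of the principle, not FHE and not Zama's libraries: the key is insecurely small, and a real FHE scheme supports arbitrary computation on ciphertexts rather than just addition. Still, it shows the core promise: a server can combine encrypted values without ever seeing the plaintexts.

```python
# Toy Paillier-style additively homomorphic encryption.
# Illustration only: tiny key, no hardening, NOT secure, NOT full FHE.
import math
import random

def keygen(p=17, q=19):
    # In practice p and q are large random primes; tiny ones keep the demo readable.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because we fix the generator g = n + 1
    return (n,), (lam, mu, n)     # public key, private key

def encrypt(pk, m):
    (n,) = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:    # r must be invertible mod n
        r = random.randrange(1, n)
    # c = (n+1)^m * r^n mod n^2
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    lam, mu, n = sk
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

pk, sk = keygen()
a, b = encrypt(pk, 20), encrypt(pk, 22)
# Multiplying ciphertexts adds the underlying plaintexts --
# the party doing this multiplication never sees 20 or 22.
total = (a * b) % (pk[0] ** 2)
assert decrypt(sk, total) == 42
```

The design choice illustrated here is the one the quote points at: the computation (here, a sum) happens on ciphertexts, so the data owner alone holds the decryption key while a third party does the work.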
Paillier also commented on the findings: “Developers are the first line of defense when it comes to privacy, and their concern around AI should not be overlooked. AI not only amplifies cyber threats, but also poses a direct risk to confidentiality through its inherent capabilities. A particularly alarming aspect is the security risk associated with AI systems that require vast datasets for training, which often include sensitive personal information. These datasets make AI systems attractive targets for cyberattacks, leading to potential breaches that can expose private data on a massive scale. This highlights the urgent need for robust security measures and transparent practices to safeguard personal information against unintended disclosures, ensuring that public trust in emerging technologies is not eroded.”
Subscribe to our newsletter to receive a quick overview of the latest news on human risk and the ever-changing landscape of phishing threats.