
Shadow AI vs Managed AI: Kaspersky Reviews Neural Network Usage for Work in Egypt

Wednesday 22 October 2025 12:44

New Kaspersky research titled “Cybersecurity in the Workplace: Employee Knowledge and Behavior” reveals that 87.5% of professionals in Egypt use Artificial Intelligence (AI) tools in their daily work. However, only 46.5% have received training on the cybersecurity aspects of AI and neural network usage — a critical knowledge gap given the growing risks of data leaks, prompt injections, and unauthorized AI use.

The study found that 90% of employees in Egypt are familiar with the term “generative artificial intelligence.” For many, AI has become a regular part of their workflow:

71% use AI to write or edit texts,

53% use it for work emails,

54% use it to generate images or videos, and

54% rely on it for data analytics.

Shadow AI on the Rise

Despite this growing reliance on AI, the research highlights a significant security concern: the rise of "Shadow AI," meaning AI tools that employees use without official approval or oversight. While 77% of respondents said AI tools are permitted in their organizations, 20% reported that such tools are banned, and 2% were unsure.

The Training and Policy Gap

Seventeen percent of professionals admitted to receiving no AI-related training at all. Among those who did, 62% said their training focused on how to use AI tools effectively, while fewer than half (46%) learned about the cybersecurity implications of AI use.

Kaspersky warns that organizations need clear, company-wide policies governing AI usage to prevent risks associated with uncontrolled adoption. Such policies should specify which AI tools employees can use, which data can be processed through them, and how information security is maintained.

> “When implementing AI across a company, both complete bans and unrestricted use are typically ineffective,” said Rashed Al Momani, General Manager for the Middle East at Kaspersky. “A more effective strategy is a balanced policy that grants varying levels of AI access depending on the sensitivity of data handled by each department. Supported by proper training, this approach promotes flexibility and efficiency while maintaining strong security standards.”
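The tiered policy described above can be made concrete in code. The sketch below is a minimal, hypothetical illustration (the department names, tool names, and data classes are invented for the example, not taken from the study): each department tier is granted a set of approved AI tools and a set of data classes permitted in prompts, and a request is allowed only when both match.

```python
# Hypothetical sketch of a tiered AI-access policy: departments handling
# more sensitive data receive narrower AI permissions. All tool and
# department names here are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class AIAccessPolicy:
    department: str
    allowed_tools: set       # approved AI services for this tier
    allowed_data: set        # data classes that may appear in prompts

    def may_use(self, tool: str, data_class: str) -> bool:
        """Permit a request only if both the tool and the data class are approved."""
        return tool in self.allowed_tools and data_class in self.allowed_data

POLICIES = {
    "marketing": AIAccessPolicy("marketing",
                                {"public-llm", "image-gen"},
                                {"public", "internal"}),
    "finance":   AIAccessPolicy("finance",
                                {"approved-internal-llm"},
                                {"public"}),
}

# Marketing may draft copy with a public LLM using internal data;
# finance may not send anything to a public LLM at all.
print(POLICIES["marketing"].may_use("public-llm", "internal"))  # True
print(POLICIES["finance"].may_use("public-llm", "public"))      # False
```

Encoding the policy as data rather than prose makes it auditable and lets the same rules be enforced automatically, for instance by a gateway in front of external AI services.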

Kaspersky’s Recommendations for Securing Corporate AI Use

To mitigate risks and enable safe AI adoption, Kaspersky advises organizations to:

Train employees on responsible AI usage, using resources like the Kaspersky Automated Security Awareness Platform.

Equip IT professionals with knowledge on AI exploitation and defense through the Large Language Models Security training course.

Install cybersecurity solutions such as Kaspersky Next on all employee devices to block phishing attempts and fake AI tools.

Monitor AI usage through regular surveys to assess risks and refine policies.

Use AI proxies to automatically clean sensitive data from prompts and enforce access control.

Develop a comprehensive AI policy based on Kaspersky’s secure implementation guidelines.
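The "AI proxy" recommendation above can be sketched in a few lines. This is a minimal illustration of the idea, not Kaspersky's implementation: sensitive patterns are scrubbed from a prompt before it leaves the organization. The two patterns shown (email addresses and card-like digit sequences) are example placeholders; a real deployment would rely on a vetted data-loss-prevention ruleset.

```python
# Minimal sketch of a prompt-sanitizing AI proxy: replace sensitive
# substrings with placeholders before forwarding a prompt to an external
# AI service. Patterns are illustrative only.

import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),      # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),        # card-like numbers
]

def sanitize_prompt(prompt: str) -> str:
    """Apply each redaction pattern in turn and return the cleaned prompt."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(sanitize_prompt("Summarize the complaint from jane.doe@example.com"))
# The address is replaced with [EMAIL] before the prompt is forwarded.
```

In practice such a filter would sit in a gateway between employees and external AI services, combined with the access-control rules a company-wide AI policy defines.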

About the Study

The research was conducted by Toluna on behalf of Kaspersky in 2025, with a sample of 2,800 online respondents — employees and business owners who use computers for work — across Egypt, Saudi Arabia, the UAE, Türkiye, South Africa, Kenya, and Pakistan.