Companies using unregulated AI at risk of ‘Shadow AI’ threat, warns Kaspersky Türkiye chief
Referring to AI tools used by employees or teams without the knowledge or approval of an organization’s IT department, ‘Shadow AI’ raises alarms about potential risks, including data security breaches, compliance issues, and erroneous outputs

ISTANBUL
Companies relying on unregulated artificial intelligence (AI) tools face growing risks from so-called “Shadow AI,” Kaspersky Türkiye General Manager Ilkem Ozar warned, highlighting potential threats to data security and regulatory compliance.
Shadow AI refers to AI tools used by employees or teams without the knowledge or approval of an organization’s IT department, much like the earlier concept of “Shadow IT.” This unregulated use raises alarms about potential risks, including data security breaches, compliance issues, and erroneous outputs. Unauthorized uploads of sensitive data to AI platforms, in particular, can heighten the risk of data breaches for companies.
“Assuming your data stays only with you when using an AI application can be misleading,” Ozar told Anadolu.
"These systems operate on cloud-based platforms, processing uploaded data, which means the data could reside in the system’s memory and be indirectly accessed by other users later," she noted.
Ozar emphasized the need for corporate users to select platforms adhering to ethical standards and reliability, cautioning employees about sharing sensitive information through AI tools.
On the question of whether AI is impartial, she noted that AI models may provide biased responses depending on the data they were trained on.
“A China-based AI model and a Western-based one may offer divergent perspectives on the same topic because they reference the data they are fed,” she warned.
AI trained on inaccurate data may lead to faulty decisions
Ozar stressed the importance of the quality of data used in AI training, warning that reliance on publicly available datasets may expose AI to inaccurate or biased information.
"AI works based on the datasets it is trained on. If trained with unverified or misleading information from open sources, AI may produce faulty or biased outcomes," she said.
Addressing measures against Shadow AI risks, Ozar argued that traditional security solutions alone are “insufficient.”
"Threats have become more sophisticated, making traditional antivirus solutions inadequate. While antivirus systems can detect known threats, they may not identify new ones. Advanced security solutions powered by AI should be implemented," she explained.
Ozar urged companies to continually update their cybersecurity policies and adopt systems capable of swiftly and effectively countering AI-driven threats.
She advised businesses to conduct "risk assessments" when incorporating AI tools into daily operations, examining which processes can be automated without introducing additional risks and ensuring data compliance with local regulations.
Recommendations for managing Shadow AI
To mitigate risks, Ozar recommended a centralized approach using corporate accounts via a cloud provider instead of ad hoc use of large language model (LLM) services.
She suggested implementing security mechanisms, such as monitoring for personally identifiable information (PII) in messages and maintaining audit logs.
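As a rough illustration of those two mechanisms, the Python sketch below shows how an internal gateway might screen outbound prompts for PII and record every request in an audit log before anything reaches an LLM provider. The patterns, function names, and gateway design here are hypothetical assumptions for illustration only, not a Kaspersky product or API.

```python
import json
import logging
import re
from datetime import datetime, timezone

# Illustrative sketch, not a real product: screen outbound prompts for
# personally identifiable information (PII) and keep an audit log of
# every request, as described above.

# Naive example patterns; a real deployment would rely on a dedicated
# DLP/PII-detection tool with locale-specific rules.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "tr_national_id": re.compile(r"\b\d{11}\b"),  # Turkish national IDs are 11 digits
}

logging.basicConfig(filename="llm_audit.log", level=logging.INFO)

def screen_prompt(user: str, prompt: str) -> bool:
    """Return True if the prompt may be forwarded to the LLM provider."""
    hits = [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]
    # Every request is logged, keeping usage traceable per employee.
    logging.info(json.dumps({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "pii_detected": hits,
        "forwarded": not hits,
    }))
    return not hits

if __name__ == "__main__":
    print(screen_prompt("alice", "Summarize our Q3 roadmap."))         # True
    print(screen_prompt("bob", "Email jane.doe@example.com a draft"))  # False
```

In this sketch a request containing suspected PII is simply blocked; in practice a company might instead redact the match or route the request for review.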
"Companies should start educating employees about acceptable AI tool usage and access methods defined by the organization. By understanding the data being processed and the provider’s policies, businesses can ensure control and traceability," Ozar said.
These proactive steps, she added, are crucial in maintaining oversight and preventing unauthorized AI usage in the workplace.