ISCG expert talk: Jakub Modrzewski, Digital Transformation Consultant, and Julia Kmita, AI Governance Expert.
Artificial intelligence now permeates every aspect of business operations, and the Shadow AI phenomenon has become one of the biggest challenges in information security management. The term describes the unauthorized use of AI tools by employees outside the organization's official oversight, activity that remains invisible to IT departments. Our experts outline the specifics of this threat and how legal frameworks, including Poland's National Cybersecurity System (KSC) and the EU AI Act, define a responsible approach to this challenge.
JK: Jakub, let's start with the basics: what exactly is the Shadow AI phenomenon, and why does it pose such a great danger to companies?
JM: Shadow AI is the practice of employees using generative artificial intelligence tools such as ChatGPT or Gemini without the knowledge or consent of IT departments. Statistics show that one-third of companies do not have full awareness of or control over the use of AI tools. This poses a serious risk to data security and regulatory compliance, especially in the context of Poland's National Cybersecurity System (KSC).
JK: And what role does the 2025 amendment to the KSC play with respect to Shadow AI?
JM: The amendment expands companies' responsibilities, especially in auditing, reporting and controlling digital technologies.
Its key elements are:
- An expanded catalog of regulated entities,
- Mandatory introduction of AI risk detection and monitoring systems (a minimal sketch of such monitoring follows this list),
- An obligation to report incidents to CSIRT Poland, including those related to unauthorized use of AI,
- Audits and documentation of AI processes,
- Strengthened cooperation between different departments of the company,
- Sanctions for non-compliance.
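To make the monitoring obligation concrete, here is a minimal sketch of what AI-usage detection could look like in practice: scanning web-proxy logs for connections to generative-AI services that IT has not sanctioned. The domain list, log format, and function names are illustrative assumptions, not anything prescribed by the KSC.

```python
# Minimal sketch: flagging potential Shadow AI traffic in web-proxy logs.
# The domain list and log format are illustrative assumptions; a real
# deployment would use the organization's register of sanctioned tools.

import csv
from datetime import datetime

# Hypothetical list of generative-AI endpoints not sanctioned by IT.
UNSANCTIONED_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

def flag_shadow_ai(log_path: str) -> list[dict]:
    """Return proxy-log rows that hit unsanctioned AI domains."""
    incidents = []
    with open(log_path, newline="") as f:
        # Assumed columns: timestamp, user, domain
        for row in csv.DictReader(f):
            if row["domain"].lower() in UNSANCTIONED_AI_DOMAINS:
                incidents.append({
                    "when": datetime.fromisoformat(row["timestamp"]),
                    "user": row["user"],
                    "domain": row["domain"],
                })
    return incidents

if __name__ == "__main__":
    for incident in flag_shadow_ai("proxy_log.csv"):
        # Such events would feed internal incident handling and,
        # where required, reporting to the competent CSIRT.
        print(f"{incident['when']} {incident['user']} -> {incident['domain']}")
```

In production this signal would typically come from a CASB or DLP platform rather than an ad-hoc script, but the principle is the same: maintain a register of sanctioned tools and alert on everything else.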
JK: How do these changes relate to the European AI Act?
JM: The AI Act, which came into force in 2024, establishes EU standards for the transparency, risk assessment, and auditability of AI systems, with particular emphasis on high-risk ones, and is scheduled for full application in 2026. The amended Polish KSC mirrors these standards at the national level, creating a coherent regulatory system that strengthens security and accountability.
JK: And how far along are companies in implementing sound AI governance policies in light of these regulations?
JM: According to ISACA's 2025 data, 62% of companies have no formal audit procedures for such tools. This underscores the urgent need to develop and implement sound AI governance policies, which is also a requirement and recommendation of both the KSC and the AI Act.
JK: So most companies are only now facing the task of adapting their security and compliance standards. What can be done to reduce, or even eliminate, Shadow AI?
JM: One reason is that organizations are poorly prepared to use AI tools such as LLM-based Copilot: for security reasons, IT departments severely limit their functionality, which hampers legitimate use and pushes employees toward unsanctioned alternatives. Second, change in an organization has to happen at the foundations, from developing and implementing a proper information security policy, through procedures for the circulation, storage, access hierarchy, and classification of information. Another important factor is that users are not properly trained in the secure use of AI: they do not know how to tag documents, where to store them, how to share them, and what not to share.
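To illustrate the classification point, here is a hedged sketch of a label-aware gate placed in front of an AI assistant: the document's sensitivity label decides whether its contents may be submitted at all. The label names and the policy mapping are hypothetical examples, not a specific product's configuration.

```python
# Sketch of a label-aware gate in front of an AI assistant.
# Label names and the policy mapping are hypothetical examples.

from enum import Enum

class Label(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"

# Hypothetical policy: which labels may be shared with an AI tool.
AI_SHAREABLE = {Label.PUBLIC, Label.INTERNAL}

def may_send_to_ai(label: Label) -> bool:
    """Apply the information-classification policy to one document."""
    return label in AI_SHAREABLE

def submit_prompt(document_text: str, label: Label) -> str:
    if not may_send_to_ai(label):
        # Blocked at the source: the user gets guidance, and the
        # attempt can be logged for security monitoring.
        return "Blocked: this document's classification forbids AI processing."
    return f"(sent {len(document_text)} characters to the sanctioned AI tool)"

print(submit_prompt("Q3 draft press release", Label.PUBLIC))
print(submit_prompt("Customer contract terms", Label.CONFIDENTIAL))
```

A gate like this only works if documents are actually labeled, which is why the classification policy and user training JM describes have to come first.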
JK: What can an organization do to improve both security and its ability to use AI?
JM: First of all, implement AI thoughtfully, starting with identifying business needs and proof-of-concept (POC) scenarios, and continuing through user education and training. Each user must first understand how AI technology works and which use cases it can handle, learn about the types of Copilots and their applications, get inspired, and prepare to use the functionality of Copilot or of autonomous agents in Copilot Studio. In addition, as I mentioned before, you need to design information security properly: define classifications and labels, configure Copilot so that it cannot reach certain files or libraries, and implement appropriate monitoring and alerts for unauthorized actions.
From our experience with Copilot implementations, we see that most organizations are unable to carry out such activities on their own. This stems from the multi-layered nature of the solution, from its configuration through the underlying security settings of the Microsoft 365 ecosystem to understanding the needs of the business, as well as from constantly changing regulations that imply many restrictions. What works very well for the companies we start working with is the Copilot/AI Readiness Assessment, a check of an organization's readiness to implement Copilot or other AI-based services.
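As an illustration of the monitoring-and-alerts layer mentioned above, here is a minimal sketch of an alert rule over audit events, raising an alert whenever an AI assistant reads content carrying a restricted label. The event schema is an assumption made for this example and is not the actual Microsoft 365 audit-log format.

```python
# Sketch of an alert rule over audit events: raise an alert whenever
# an AI assistant accesses a file carrying a restricted label.
# The event schema below is an illustrative assumption, not the
# actual Microsoft 365 audit-log format.

RESTRICTED_LABELS = {"confidential", "secret"}

def alerts_for(events: list[dict]) -> list[str]:
    alerts = []
    for event in events:
        if (
            event.get("actor") == "copilot"
            and event.get("label", "").lower() in RESTRICTED_LABELS
        ):
            alerts.append(
                f"ALERT: Copilot read '{event['file']}' "
                f"(label: {event['label']}) on behalf of {event['user']}"
            )
    return alerts

sample_events = [
    {"actor": "copilot", "user": "a.nowak",
     "file": "board_minutes.docx", "label": "Confidential"},
    {"actor": "copilot", "user": "j.kowalski",
     "file": "lunch_menu.docx", "label": "Public"},
]

for line in alerts_for(sample_events):
    print(line)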
JK: Thank you for the interview, and I invite you to follow the next installments of ISCG's expert tech-talk series.
See you there.
