In the next episode of ISCG Tech-Talks, Izabella Zernicka speaks with Piotr Olszewski, CTO of ISCG, about how to prepare an organization for responsible and compliant use of artificial intelligence. AI is increasingly permeating business processes, whether a company consciously implements AI solutions or its employees simply start using them on their own.

Readiness of information security management systems (ISMS) in the AI era
IS: Hi Piotr, it's great to have you on another episode of ISCG Tech-Talks.
With me today is Piotr Olszewski, our CTO, and we will talk about the readiness of information security management systems (ISMS) for symbiosis with AI.
PO: Hi, thank you for the invitation; I'm happy to share my experience. The topic is very timely, because in practice AI is entering every organization, and whether one wants it or not, it is already being used on a daily basis.
IS: Properly preparing a company for the use of AI really is the key to getting through the process safely, and it is a multi-level effort, spanning the regulatory area, compliance with security policies, and the entire spectrum of employee training.
PO: Yes, it is exactly as you say; it is quite a complex process. I recently led a project for the Ministry of Finance, where we helped prepare the organization for the secure implementation and use of AI. In practical terms, we assessed how ready the current security processes and policies were for AI implementation,
and then developed the missing elements of the system and guidelines for changes in the organization.
We started, as usual, by adapting the strategy and evaluation method to the nature of the organization. The next step was to develop checklists and questions for the client's team. Then the most interesting part of the work began: analyzing AI needs, reviewing documentation, and building the missing elements of the security system. At the end of this process, we presented our recommendations and held a very interesting workshop showing how to work with AI.
This workshop included:
- An overview of current AI solutions on the market.
- Approaches to building and deploying AI systems inside the organization and to using open systems such as ChatGPT.
- Elements of the AI risk assessment method.
- Building AI competencies in specific areas of the organization.
- Legal issues related to AI, including how to assess whether the system we are working on is an AI system at all and whether it is subject to AI Act regulations.
In all, it was a full day of intense discussions.
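As an aside, the "elements of an AI risk assessment method" mentioned above often come down to a classic likelihood-times-impact scoring. A minimal illustrative sketch follows; the example risks, scales, and thresholds are assumptions for demonstration, not the method actually used in the project:

```python
# Illustrative likelihood x impact scoring for AI-related risks.
# The risks, 1-5 scales, and rating thresholds are assumed examples.

RISKS = [
    # (risk description, likelihood 1-5, impact 1-5)
    ("Confidential data pasted into a public chatbot", 4, 5),
    ("Model output used without human review", 3, 3),
    ("Vendor changes model behaviour without notice", 2, 3),
]

def score(likelihood: int, impact: int) -> int:
    """Classic risk-matrix score: likelihood multiplied by impact."""
    return likelihood * impact

def rating(s: int) -> str:
    """Map a numeric score to a coarse rating band."""
    if s >= 15:
        return "high"
    if s >= 8:
        return "medium"
    return "low"

for name, likelihood, impact in RISKS:
    s = score(likelihood, impact)
    print(f"{rating(s):6} ({s:2})  {name}")
```

In practice the scales and bands would be calibrated to the organization; the point is only that the scoring element of such a method is mechanically simple.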
Legal regulations and prescribed sanctions
IS: You mentioned regulations. Please tell us, what regulations related to AI do we currently have?
PO: In Europe, Regulation (EU) 2024/1689 of the European Parliament and of the Council of June 13, 2024, the so-called AI Act, is already in force, and of course we also have the optional but useful ISO/IEC 42001 and related standards.
IS: Do all organizations fall under the AI Act?
PO: Yes. All organizations in Europe that use AI, and all those that offer AI systems on the European Union market.
IS: So basically all companies should prepare to use AI?
PO: Yes, all companies should prepare for this now.
IS: What are the key dates under the AI Act?
PO: The regulation entered into force on August 1, 2024, and is already in effect to some extent. As of February 2, 2025, the provisions on prohibited AI practices apply, and as of August 2, 2025, the provisions on regulators and general-purpose AI models apply.
Most of the remaining provisions apply as of August 2, 2026, and as of August 2, 2027, the rules for high-risk AI systems embedded in regulated products take effect. There are, of course, exceptions and exemptions,
but we will discuss the detailed dates at our next meeting with Mr. Patron.
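This phased timeline lends itself to a simple lookup: given today's date, which sets of obligations already apply? A hedged sketch (the milestone labels are simplified summaries; the regulation contains further transition rules and exceptions):

```python
from datetime import date

# Simplified AI Act applicability milestones (labels are summaries,
# not official wording; further exceptions and transitions exist).
MILESTONES = {
    date(2025, 2, 2): "prohibited AI practices",
    date(2025, 8, 2): "general-purpose AI models and governance",
    date(2026, 8, 2): "most remaining provisions",
    date(2027, 8, 2): "high-risk AI in regulated products",
}

def applicable(today: date) -> list[str]:
    """Return the obligation sets already applicable on a given date."""
    return [obligation
            for start, obligation in sorted(MILESTONES.items())
            if start <= today]

print(applicable(date(2026, 1, 1)))
```

A compliance team could extend such a table with the concrete articles relevant to its own systems; the key takeaway is that readiness work should be sequenced against these dates rather than deferred to the final deadline.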
IS: Yes, this will be the subject of discussion in the next episode of Tech-Talks, to which you are already cordially invited. And what happens if an organization does not comply with regulations?
PO: To begin with, let's clarify that sanctions are measures meant to protect the market and European Union citizens, not to block business or punish AI providers.
The AI Act provides for financial penalties of up to €35,000,000 or 7% of global annual turnover, whichever is higher, for the most serious violations, such as prohibited manipulative practices; up to €15,000,000 or 3% of global annual turnover for violations of the obligations of AI providers and other operators; and up to €7,500,000 or 1% of global annual turnover for supplying incorrect, incomplete or misleading information to authorities.
There are also penalties that can hit a business even harder, e.g., a product recall order, which in an extreme case can end in the bankruptcy of a company for which this was the core business.
We will discuss the sanctions in detail at the next meeting with Mr. Patron.
It should be remembered, however, that beyond the risk of sanctions from the regulator we also face a whole range of other risks. In summary: it is better to prepare and reduce the risks, especially since in most cases the preparations required by the AI Act are not very complicated; what is much more complicated is the security management of AI systems, which the AI Act does not touch.
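The "whichever is higher" rule behind the fine ceilings quoted earlier is simple arithmetic, which a short sketch can make concrete. The three tiers follow the amounts mentioned above; the company turnover figures are invented for illustration:

```python
# Fine ceilings: the higher of a fixed amount and a percentage of
# global annual turnover. Tier labels are informal summaries; the
# turnover figures used below are hypothetical.

TIERS = {
    "prohibited practices": (35_000_000, 7),   # EUR cap, percent
    "provider obligations": (15_000_000, 3),
    "incorrect information": (7_500_000, 1),
}

def max_fine(violation: str, turnover_eur: int) -> int:
    """Ceiling = max(fixed amount, percent of global annual turnover)."""
    fixed, pct = TIERS[violation]
    return max(fixed, turnover_eur * pct // 100)

# Hypothetical company with EUR 1 billion global annual turnover:
print(max_fine("prohibited practices", 1_000_000_000))   # 70000000
print(max_fine("incorrect information", 1_000_000_000))  # 10000000
```

Note how quickly the percentage component overtakes the fixed amount for large companies, which is exactly why turnover-based caps exist.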
AI Act regulations form the foundation without which it is difficult to think about responsible implementation of artificial intelligence. While regulations and sanctions seem complicated, this is just the first step - because the biggest challenge is how to practically manage the security of AI systems on a daily basis.
In the second part, we turn precisely to this practical dimension:
- real threats,
- risk analysis,
- incidents,
- building a safe environment for working with AI.
We invite you to read the second part of the conversation.
