In the first part of our Tech-Talk, Izabella Zernicka and Piotr Olszewski discussed the most important regulations resulting from the AI Act and their impact on organizations. Welcome to the second installment of the conversation, in which we turn to the practical side of working with AI: the threats, risks and incidents that organizations must learn to deal with in order to develop their projects safely.

Threats, risks and incidents
IZ: This moves us more toward security issues. If an AI system makes a mistake, who is responsible?
PO: That is a very good question. Without getting into philosophy, and sticking to the practice of security management: it should be made clear that AI has no legal personality and therefore no legal liability of its own. Any damage caused by AI is the responsibility of an employee, which ultimately comes down to the responsibility of a legal entity such as a company, ministry or municipality. EU law (including the AI Act) does not create a separate "machine" liability; those who developed, implemented and supervise AI systems are responsible. Here we enter the very interesting and complicated area of copyright and the security of the contracts we enter into. Imagine that we signed a contract and used AI to produce a product. Under EU law, we do not hold the copyright to that product, which means that, not having the rights, we cannot transfer them to our customer. This immediately raises a great many problems in certain types of contracts, for both parties.
We will return to the complicated threads of legal liability, system classification and copyright. A great many other risks have also emerged with AI.
IZ: What AI threats do we face, and how do we handle them?
PO: In information and systems security, we talk about risks and incidents. So the question comes down to how to estimate and mitigate AI risks and how to respond to AI incidents.
Guidelines for AI risk management are described in ISO/IEC 23894, but, generalizing, we can divide AI risks into several categories:
- Classic information security threats: misuse of AI can lead to the disclosure of information that should not be disclosed. Part of this issue is the threat of attacks on AI systems, which we should prepare for and learn to respond to.
- Another category is the danger of using AI incorrectly and suffering the consequences of poor decisions, or of delivering a defective product, as a major consulting firm recently found out painfully.
- We also have dangers that IT departments do not discuss: implementing AI can lead to dysfunction in the human resources area, for example by blocking the development of employees and closing the natural junior-to-senior training channel; AI will simply take over the juniors' tasks and we will have a staffing gap.
- A very interesting area of risk, combining many fields of knowledge with technology, is the danger of using AI to unethically influence the behavior or decisions of citizens, or, more mundanely, the unethical treatment of our colleagues once AI is deployed.
- We have a number of very complicated issues related to legal interpretation, which we touched on a moment ago: the problem of copyright and concluded contracts, but also tax settlements and obligations under labor law. The law in this area is only now being formulated, so there is a lot of discussion and controversy here.
- And finally, we have the sanctions resulting from non-compliance with the AI Act, which we have already discussed.
As part of the project, we usually provide a specific list of risks in a format tailored to the client's risk analysis methodology so that it can be directly imported into the risk management system.
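As a purely illustrative sketch of such an importable risk list (the field names, categories and likelihood-times-impact scoring below are assumptions, not any specific client methodology), an entry might look like this:

```python
from dataclasses import dataclass, asdict

@dataclass
class AIRisk:
    """Hypothetical risk-register entry; fields are illustrative only."""
    risk_id: str
    category: str          # e.g. "information security", "human resources"
    description: str
    likelihood: int        # 1 (rare) .. 5 (almost certain)
    impact: int            # 1 (negligible) .. 5 (critical)
    mitigation: str = ""

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, as in common risk matrices
        return self.likelihood * self.impact

register = [
    AIRisk("AI-01", "information security",
           "Misuse of an AI assistant discloses confidential data",
           likelihood=3, impact=5,
           mitigation="Information classification + AI system classification policy"),
    AIRisk("AI-02", "human resources",
           "AI takes over junior tasks, blocking the junior-to-senior pipeline",
           likelihood=4, impact=3,
           mitigation="Redesign career paths and training"),
]

# Flatten into plain dicts, an easy shape to import into a GRC tool
rows = [dict(asdict(r), score=r.score) for r in register]
for row in rows:
    print(row["risk_id"], row["category"], row["score"])
```

The point of the flat export is that most risk management systems accept tabular input, so a register kept as structured data can be re-imported whenever the methodology changes.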
You asked how to handle such threats. Some guidance as to the process and requirements is given by the AI Act and the ISO standards mentioned above. Of course, these documents describe what is already known and partially tamed, but we should use their recommendations and requirements both to handle what is known and to prepare for what is not yet known.
IZ: How do you find your way through these meanders?
PO: The key to success, in my opinion, is to stay abreast of what is coming to market (and there is a lot of it these days), build awareness among teams, and extend current processes and policies into the AI space.
Change in organization
IZ: Could you talk about how to adapt an organization to the use of AI, and which areas usually require adaptation?
PO: As with the implementation of an ISO 27001-compliant Information Security Management System, the changes must address the entire organization. To give an example, let's list a few areas:
- Management must be aware of the risks and requirements and actively support the entire AI security management process.
- In the area of change management, project teams must apply AI risk assessment processes during the design, testing and implementation phases of systems. Something analogous to what the GDPR calls "data protection by design" applies to AI as well.
- The architecture of the systems must be expanded to include AI-related building blocks, and the delivery method must allow for very rapid delivery of systems.
- Compliance and legal teams must monitor the application of standards and the compliance requirements of the AI Act and, where applicable, the ISO 42000-series standards.
- Incident response teams must be ready to recognize an AI incident and handle it appropriately.
- The procurement team must be ready to procure AI products or services that are compliant, meaning they must first have a list of requirements for AI ready.
- Legal teams must be ready to evaluate AI systems.
- And finally, users need to know what they may use and how to use it, and they must know how to recognize and report an AI incident.
As you can see, AI is forcing changes throughout the organization.
Safe environment
IZ: You mentioned that AI even forces a change in how systems are delivered. Can you explain what you meant by that?
PO: Imagine an organization that would like to process its confidential information in an AI system. For such applications, one usually builds a separate, controlled set of environments in which the AI components are placed and access to data is strictly controlled. Building such an environment in the classical way usually takes many months, and implementing changes takes further weeks or months. Meanwhile, AI systems are under very dynamic development; new models and components appear every day. We can say that computer science is an experimental science, meaning that for AI we must have an environment that can be built very quickly, used to test the solution and, if something does not work as needed, destroyed so that the next version can be built quickly, of course with the appropriate level of security. Only this approach can deliver the final functionality to the business within hours or days, and the only way to achieve this today is to automate the delivery of systems through code. The generic architecture of such AI systems is changing slowly enough that you can very effectively build an automation that will deliver AI environments very quickly and securely.
One of our teams is dedicated to providing such environments.
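The build-test-destroy cycle described above can be sketched in a few lines. This is a minimal illustration, not a real provisioning tool: `provision` and `destroy` stand in for whatever infrastructure-as-code backend (Terraform, Bicep, etc.) actually creates the environment.

```python
import contextlib

def provision(name: str, config: dict) -> dict:
    """Hypothetical stand-in for an infrastructure-as-code apply step."""
    return {"name": name, "config": config, "status": "running"}

def destroy(env: dict) -> None:
    """Hypothetical stand-in for tearing the environment down."""
    env["status"] = "destroyed"

@contextlib.contextmanager
def ai_environment(name: str, config: dict):
    """Build a controlled AI environment and guarantee teardown,
    even if the experiment inside fails."""
    env = provision(name, config)
    try:
        yield env
    finally:
        destroy(env)  # failed experiments are destroyed, not left running

statuses = []
with ai_environment("rag-experiment-v2",
                    {"network": "isolated", "logging": True}) as env:
    # Run the experiment here; if the component does not work as needed,
    # simply exit the block and build the next version.
    statuses.append(env["status"])
```

The design choice worth noting is the `finally` clause: teardown is unconditional, which is what keeps short-lived experimental environments from accumulating as unmanaged (and insecure) infrastructure.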
ABCs of AI implementation
IZ: Since there are quite a lot of these organizational changes, where should you start?
PO: You have to start with a management decision declaring support for these changes; without it, the project will not succeed. Here we recommend a workshop for managers to help develop such support.
Next, the requirements and needs of the business department in the area of AI should be well defined.
It gets easier from there, as we move on to more technical issues and work with the legal, security and infrastructure departments:
- What external requirements and regulations apply to us, e.g. the AI Act, NIS 2, DORA, etc.?
- What elements are already supported by the current management system, and what is missing?
- How do we handle system delivery and change management?
- Creation of missing elements
- Adding the missing architectural blocks
- Implementation of new processes
- Building a test environment - this is a very important topic, because we need to have some sandboxes for AI testing.
In the area of security policies, it is best to start by implementing a classification of the AI systems we will use, for example, divided into open, internal and supervised, and then use information classifications to determine what is allowed in which system.
All the remaining AI policies are derived from these two steps. In this area, we offer a set of best practices and ready-made solutions that greatly simplify completing the entire system.
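The two-step idea of crossing a system classification with an information classification reduces to a small policy matrix. The sketch below is illustrative only: the system classes follow the open / internal / supervised split mentioned above, while the information classes and the specific mappings are assumptions.

```python
# Illustrative policy matrix: which information classification may be
# processed in which class of AI system. Mappings are assumptions,
# not a recommendation for any specific organization.
ALLOWED = {
    "public":       {"open", "internal", "supervised"},
    "internal":     {"internal", "supervised"},
    "confidential": {"supervised"},
}

def may_process(info_class: str, system_class: str) -> bool:
    """Policy check applied before data is sent to an AI system.
    Unknown information classes default to deny."""
    return system_class in ALLOWED.get(info_class, set())

print(may_process("public", "open"))          # public data, open system
print(may_process("confidential", "open"))    # confidential data, open system
```

A default-deny lookup like this is deliberately conservative: anything not explicitly classified cannot be processed anywhere, which pushes teams to classify information before using it with AI.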
Information security management
IZ: What has changed in practice in the client's security management after this project?
PO: Together with the client, we worked out the changes to the documentation and processes of the security management system that are required by the AI Act and ISO 42001. The key to success was to integrate the AI elements into existing processes and avoid creating unnecessary formalisms; for example, the AI impact analysis was attached to the AI lifecycle management process, and existing systems were reused.
We began system modifications by redefining the overall architecture of AI systems in security policies. This approach allowed the client the flexibility to decide in which systems (open, closed or only supervised) the processing of a given group of information described in the classification can take place. The rules and requirements for processing were described in the policies for handling a given group of information.
We then worked out changes to:
- Information Security Policies.
- ICT system policies.
- Incident response policies.
- Risk Analysis Methodology.
- The process of change management in IT systems.
And many other ISMS documents.
Elements required by the AI Act were attached to existing processes: IT systems lifecycle management, change management and risk management. In the end, a coherent and logical system was created in which AI systems are one of many IT systems, handled in a similar way to the others, although they have their own very specific requirements.
Interdisciplinarity and comprehensive approach
IZ: Who did you work with on this project? Which teams were involved in the work and workshops?
PO: In practice, an AI + cloud project touches the entire chain:
- Board of Directors / management, because changes to security policies and security and business processes require approval at the management level.
- CISO / security, because the existing organization of the security department had to be modified to expand existing security processes.
- Procurement/contracts, as new rules and requirements had to be defined for the acquisition of AI systems.
- IT/Operations/SOC, as logging, monitoring, incidents and maintaining AI performance quality become an operational responsibility.
- Compliance, because the AI Act and ISO 42001 require specific elements in audits.
- IT on the architecture and maintenance side, because the implementation of AI elements is best delivered in the form of standard automated building blocks that can be delivered very quickly for testing and then deployed in production, but you also have to be ready to maintain them.
- Business owners and data owners, because they are the ones who are responsible for most of the AI impact analysis, or risk analysis, and they are responsible for the end result.
At each stage of the work, we pointed out that the topic is very broad and the technology is changing very quickly, so it is impossible to handle it effectively without cooperation with external partners.
IZ: Which is more difficult: implementing AI, or maintaining it safely?
PO: Definitely the latter. In maintenance, you have to keep up with changes in technology and in regulations, continuously monitor the behavior of AI systems, and respond to incidents that are often not obvious to security teams.
IZ: Piotr, thank you for the interview. I invite everyone to follow our social media, as another installment of ISCG tech-talks is coming soon, delving deeper into the legal meanders of safe AI adoption.
PO: Thank you as well, and we invite you to work with us.
