
Artificial intelligence has become a permanent part of companies' processes and of private life. In 2026, with the full entry into force of the AI Act and the Cyber Resilience Act, these technologies have ceased to be solely the domain of innovation and have become an area of real legal and financial risk for organizations and their boards.
Alongside standard implementations covering AI in the broadest sense and our frequent "Master of prompting" workshops, we do a great deal of consulting on assessing the risks of AI use, adapting security policies, and auditing the systems themselves and preparing them to share the data used by AI systems.
Why is this crucial for the Board right now?
- AI Act compliance and legal protection: a form of safeguard for the Board of Directors, which bears personal responsibility for oversight of new technologies. It demonstrates that best practices are being followed to secure systems and educate staff wherever AI is involved.
- Controlling "Shadow AI": today the biggest risk is not the absence of AI but its uncontrolled use by employees. I write about this often, for example the entry of sensitive data into models employees have purchased privately or into free versions. Implementing the right processes lets us close these gaps and prevent leaks of sensitive data (see the sketch after this list).
- A clear message to the market: customers and business partners increasingly demand proof that the AI solutions we provide are ethical, safe and free of hallucinations. In this way we reduce, or eliminate, the risk of giving customers information presented as our own but in fact produced entirely by uncontrolled AI (the result: reputational damage and, further down the line, financial trouble).
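To make the "Shadow AI" point concrete, here is a minimal sketch of a prompt gateway that checks for obvious sensitive-data patterns before a request is forwarded to an external AI service. The patterns, function names and blocking behaviour are illustrative assumptions rather than a recommendation of a specific product; a production deployment would rely on a proper DLP or data-classification engine.

```python
import re

# Illustrative patterns only (assumptions); a real deployment would use a proper
# DLP / data-classification engine with patterns tailored to the organization.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}


def check_prompt(prompt: str) -> list:
    """Return the names of sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


def gateway(prompt: str) -> str:
    """Allow a prompt only if no sensitive-data pattern is detected."""
    findings = check_prompt(prompt)
    if findings:
        # In practice: log the incident, notify the user, and redirect the request
        # to an approved internal model instead of an external service.
        raise PermissionError("Prompt blocked, detected: " + ", ".join(findings))
    return prompt  # safe to forward to the external service


if __name__ == "__main__":
    try:
        gateway("Summarise this contract for jan.kowalski@example.com")
    except PermissionError as exc:
        print(exc)
```

The point is architectural rather than the specific patterns: employee traffic to external models passes through a control point the organization owns, where incidents can be logged and risky requests rerouted.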
Proposed first steps:
- Establish a multidisciplinary team - in our projects we combine IT and legal expertise
- Conduct a baseline ("zero") audit to determine the exact cost and timeline of implementation: a standard risk analysis, audit, write-up of the current status, and next steps
- Inventory all AI tools used inside the organization - there are usually quite a few, and let's hope it is only ChatGPT and not Chinese systems such as DeepSeek. It is worth noting that our security officers do not recommend installing these applications on mobile devices, because they can silently access the clipboard, files and the microphone - by definition an unnecessary attack surface. A sketch of how such an inventory might start is shown below.
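As a starting point for the inventory step above, the following hedged sketch counts requests to well-known AI service domains in a proxy log export. The domain list, the CSV format and the "host" column name are assumptions and would need to be adapted to the organization's own proxy or DNS logging.

```python
import csv
from collections import Counter

# Example domains of popular AI services; the list, the log format and the
# 'host' column name are assumptions to be adapted to the company's own logs.
AI_SERVICE_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "gemini.google.com": "Gemini",
    "claude.ai": "Claude",
    "chat.deepseek.com": "DeepSeek",
}


def inventory_ai_usage(proxy_log_csv: str) -> Counter:
    """Count requests to known AI services in a proxy log export with a 'host' column."""
    usage = Counter()
    with open(proxy_log_csv, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            host = (row.get("host") or "").lower()
            for domain, service in AI_SERVICE_DOMAINS.items():
                if host == domain or host.endswith("." + domain):
                    usage[service] += 1
    return usage


if __name__ == "__main__":
    # 'proxy_export.csv' is a hypothetical file name for the proxy log export.
    print(inventory_ai_usage("proxy_export.csv"))
```

Even a rough count like this usually surfaces services nobody declared, which is exactly what the zero audit is meant to reveal.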
Finally, some questions that are a starting point for creating AI management policies:
AI policies
- Do we have a consistent AI policy (separate or integrated with Information Security)?
- Is the policy regularly reviewed for changes in the law (e.g., AI Act)?
Internal organization
- Are roles and responsibilities for AI systems clearly separated?
- Have we designated "model owners" (aka AI Owners)?
- Are mechanisms for reporting AI incidents (e.g., hallucinations) active?
Resources for AI
- Do we provide adequate technical resources (computing power, data)?
- Do we have competent people (or a plan to train them)?
- Is the environment in which AI operates stable and monitored?
Analyzing the impact of AI systems
- Do we carry out an AI Impact Assessment for each model implemented?
- Do we assess the impact on fundamental rights and ethics?
- Do the results of these evaluations influence the decision to launch the system?
AI system life cycle
- Do we have a framework for managing the entire lifecycle (from concept to retirement)?
- Are the requirements for AI precisely defined?
- Does the design process consider ethics (Privacy/Ethics by Design)?
- Do we verify and validate models (bias tests, hallucination checks)? A minimal example of such a check is sketched after this list.
- Is the implementation process controlled?
- Do we monitor systems in real time after launch?
- Do we have procedures to retire models (Retirement)?
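For the verification and validation question above, here is a minimal, illustrative sketch of a "golden set" regression check: a handful of prompts paired with facts the answer must contain, run before each release. The generate() placeholder and the example cases are assumptions standing in for whatever model or API the organization actually uses.

```python
# A "golden set" of prompts with facts the answer must contain. The cases and the
# generate() placeholder are assumptions; wire generate() to the model under test.
GOLDEN_SET = [
    {"prompt": "What is the capital of Poland?", "must_contain": "Warsaw"},
    {"prompt": "How many bits are in one byte?", "must_contain": "8"},
]


def generate(prompt: str) -> str:
    """Placeholder for a call to the model or API under test (assumption)."""
    raise NotImplementedError("Connect this to your model or API")


def run_golden_set(generate_fn=generate) -> list:
    """Return the golden-set cases whose answers do not contain the expected fact."""
    failures = []
    for case in GOLDEN_SET:
        answer = generate_fn(case["prompt"])
        if case["must_contain"].lower() not in answer.lower():
            failures.append({"prompt": case["prompt"], "answer": answer})
    return failures


if __name__ == "__main__":
    # Demonstration with a deliberately evasive stub model to show how failures are reported.
    print(run_golden_set(lambda prompt: "I am not sure."))
```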
Data for AI systems
- Do we care about data quality (representativeness, purity)?
- Do we know where the data comes from (provenance)?
- Is the data preparation (data labeling) process supervised?
- Do we protect privacy in training data?
Information for stakeholders
- Do we inform users that they are interacting with AI?
- Is the system documentation understandable to non-technical people (e.g., Management/Customer)?
Use of AI systems
- Do employees know how to use AI safely (prompting instructions)?
- Are we monitoring how AI is changing the company's processes?
- Do we have procedures for responding to anomalies in AI performance?
Relationships with suppliers
- Do we vet vendors on what they deliver and how, as regards their use of AI and of our data?
- Do we have AI liability provisions in our contracts with suppliers?
- Do we assess the risks of using external APIs?
Need to adjust your security policy or conduct an audit of the data shared with your company's AI systems? Contact the author: https://outlook.office.com/book/Konsultacjewobszarzeaplikacjibiznesowych@ISCG.onmicrosoft.com/s/fKddgNyppECh3KThb8VNMw2?ismsaljsauthenabled
