
🚩 Even if your organization does not formally run an AI project, artificial intelligence is entering corporate tools and bringing new categories of risk, for which Boards of Directors are personally liable under the KSC Act. Using AI requires a systemic approach and regulatory compliance, and achieving the expected ROI requires careful mapping of processes before implementation.
ᯓ➤ What have we prepared?
» AI Act, NIS 2, DORA, KSC: not regulations for the IT department alone, but a security standard for AI projects
🔶 We will show how to identify key risks such as shadow AI, lack of control over data, and over-reliance on models. We will outline organizational and legal scenarios, together with a specific catalog of risks and the scope of legal liability for AI in the organization.
» A compliant, secure, and auditable AI architecture
🔶 We'll discuss the key phases of an AI adoption project: from defining processes and data scope, through selecting models and vendors, to building access control mechanisms, security policies (SCPs), reporting, and monitoring. You'll learn which security audits and system configuration reviews are worth conducting before implementing AI, and how to control access to critical company resources.
» Management accountability and maximizing ROI
🔶 The amendment to the KSC Act, signed on February 19 this year, imposes personal financial and criminal liability on the Boards of Directors of key and important entities for overseeing the implementation of security measures, including allocating budgets for the human, technological, and training resources needed to meet its requirements.
🔔 How do you reconcile intelligent automation that strengthens your company's competitiveness with cyber risks and a shifting legislative landscape? Register for the event now to stay on the winning side: https://lnkd.in/dz6iQWYx
