
Lately I have been writing more about risk, including the multidimensional risks of using AI in companies (leaking confidential data, intellectual property issues, liability for generated text).
On Friday I attended EdduCamp, led by Mateusz Grzesiak, whom I appreciate for his multidimensional approach combining, for example, psychology and business. One of the first sessions, probably scheduled early to draw an audience, was about AI and about what people do with the tool before they even understand what it is.
First, the definitions, because without them everyone understands something different. Human intelligence was described as the biological ability to understand meaning, build a picture of reality, draw cause-and-effect inferences, reflect, regulate emotions, and make decisions based on experience, values, and responsibility. Artificial intelligence, by contrast: a mathematical-statistical system that analyzes data, recognizes patterns, and generates the most likely responses.
I'm not getting into a discussion of whether these definitions are correct. This is where the line of argument begins: the presenter emphasized that the distinction matters because AI has no consciousness, intention, or responsibility. It doesn't understand meaning the way humans do; it produces a result that "fits" the data and the language.
From this definition the presenter drew a simple conclusion: AI is secondary and selective relative to human intelligence. Humans respond with body, context, relationship, emotion. AI has none of that. It can write a joke, but it doesn't "feel" whether there is tension in the room, whether someone is tuning out, or whether someone is embarrassed. Nor does it bear the consequences of what it suggests. These arguments appeal to me, because humans are also emotions and the images they build for themselves, not always true ones.
The most powerful passage for me was about how easily we attribute human characteristics to AI. The presenter called it anthropomorphization. People write to the model as if it were a partner, a therapist, a life coach. Then they take offense at an answer, the model apologizes, and the impression of a "relationship" forms in their head. Yet it remains a system that strings words together in the most probable way (again, a simplification, since the mechanism is mathematically quite well described but not fully understood). This mechanism can be addictive, because it gives immediate relief and the feeling that someone is "there."
The second pitfall is confusing prediction with understanding. AI may sound like an expert, but that is still not proof that it understands a company, a team's situation, or someone's emotions. The third: confusing coherence with truth. Nicely written text, expert terminology, and a smooth narrative can lull you to sleep, especially when the topic is outside your specialty. There was also the theme of "hallucinations," that is, invented sources and facts. A student may get citations and a bibliography that look plausible but don't exist. If he doesn't check them, he discredits himself (I recall the Deloitte slip-ups).
The fourth trap is treating speed as quality. The answer comes immediately, so the brain adds: "it must be smart." And the fifth is delegating responsibility. "The system suggested it" is not an excuse. The person bears the consequences anyway, even if he tries to talk his way out of it.
In the background were the social and professional consequences. If someone unreflectively offloads thinking to a tool, over time they lose the stamina for intellectual effort. The presenter cited a study in which chatbot users showed lower mental engagement and were quicker to forget what they had "written." He used a powerful metaphor: a "steroid" for the brain. It works quickly and has a visible effect, but it can weaken the foundation if it replaces the brain's own work.
I personally also think that people don't read the elaborate, AI-padded emails; tired of the sheer amount of text, they use AI to produce summaries of them.
There was also the topic of content homogenization. Just look at social media: many AI-written posts sound alike, with the same rhythm, the same sentences, the same smoothness. It works for a while, and then it gets tiresome. The differences between people disappear, and with them trust. The presenter put it plainly: publishing generated material without adaptation and verification ends in a loss of authenticity and quality.
From the whole session I also remembered a simple "what not to do." Do not use AI for high-stakes decisions without your own analysis and verification. Investments, legal issues, health, layoffs at work: these are not topics for "advise me." Don't use AI as a substitute for relationships, because people usually sense it's not you, and that creates distance instead of closeness. Don't mindlessly copy generated content. Don't take generated facts and sources at face value. And don't put sensitive data where you have no control over it.
And how to use it? The fairest approach was to treat AI as a generator of hypotheses and material for analysis. The tool can suggest scenarios, ideas, conversation options, the structure of a plan; you test, choose, and take responsibility. The same goes for prompts: the more specific, the better the output. Role, context, constraints, response format, quality criteria. The examples given included a recipe with a budget and calorie count, scenarios for a conversation with an employee, a diagnosis of a washing-machine fault with a safety caveat, and a trip plan with dates, budget, and vacation style. The common denominator was one thing: precision.
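The five-ingredient prompt structure can be sketched in code. This is a minimal illustration only; the `build_prompt` helper and its field names are my own invention, not something shown in the session or taken from any particular AI product.

```python
# Sketch of the "role, context, constraints, format, criteria" prompt
# structure. All names here are illustrative, not from any real API.

def build_prompt(role, context, constraints, response_format,
                 quality_criteria, task):
    """Assemble a precise prompt from the five ingredients plus the task."""
    lines = [
        f"Role: {role}",
        f"Context: {context}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        f"Response format: {response_format}",
        "Quality criteria:",
        *[f"- {q}" for q in quality_criteria],
        f"Task: {task}",
    ]
    return "\n".join(lines)

# One of the session's examples: a recipe with a budget and calorie count.
prompt = build_prompt(
    role="home cook on a tight budget",
    context="dinner for two, no oven, 30 minutes available",
    constraints=["stay under the stated budget", "max 700 kcal per serving"],
    response_format="ingredient list, then numbered steps",
    quality_criteria=["every ingredient priced", "calories per serving stated"],
    task="Propose one dinner recipe.",
)
print(prompt)
```

The point is not the code itself but the discipline it enforces: each of the five slots has to be filled in before anything is sent to the model, which is exactly the precision the presenter was asking for.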
Another thought resonated strongly and has stayed with me: the tool is like a crane. A crane doesn't build anything by itself. If the operator is skilled, big things get built. If the operator is poor, the damage will also be big. And here the presenter put it sharply: to ask a meaningful question, you need at least basic knowledge of the field. Without it, it's easy to end up "driving a car without a license."
After this session, I take away a simple rule: AI is supposed to speed up my work, not replace it. If I start using it in order not to think, sooner or later I will pay for it. If I use it to think better and faster, then it has a point. Very good lectures; I'm already thinking about the next edition. Incidentally, it's still worth thinking about security and doing an examination of conscience: is the use compliant, and as a user, am I copying too much data in, and do I actually verify the answers instead of just copying them?
Contact the author to learn how to safely and responsibly implement AI in your organization: https://outlook.office.com/book/Konsultacjewobszarzeaplikacjibiznesowych@ISCG.onmicrosoft.com/s/fKddgNyppECh3KThb8VNMw2?ismsaljsauthenabled
#AI #iscg
