Artificial intelligence has arrived in our professional lives not with a grand revolution, but quietly, through helpful tools that write emails, translate documents, draft articles, and manage content. In tourism, especially within a diverse and multilingual network like Skal Europe, these tools have already begun to lighten our workload. They save us time, streamline communication, and help us include more people in what we do. At first glance, they seem like ideal assistants.
But as these tools grow more sophisticated, something deeper is taking place. AI is no longer limited to logistics or formatting. It is now helping us write messages, select images, and even shape how we present our values to others. And so we find ourselves facing a quiet but profound question:
“Are we still the ones guiding the story, or are we letting the tool decide?”
As someone working on communication for Skal Europe, I’ve welcomed what AI has to offer. I’ve used it to help draft multilingual newsletters, summarize long documents, and organize articles. When used thoughtfully, AI lets us do more with less, which is especially valuable in an international association run by volunteers. But over time, I’ve come to understand something else: these tools are learning from us, just as we are learning to use them.
“Every prompt is a lesson. Every correction is a training moment.”
When we interact with AI, we shape it. Not through code, but through behavior. If we ask lazy questions, we get lazy answers. If we demand clarity, nuance, and respect, we reinforce those values. What we feed into the system affects what others will get from it.
“Your use of AI affects what others will read tomorrow.”
This has serious implications. If our interactions are rushed, careless, or guided only by convenience, we risk strengthening exactly those tendencies in the system. But if we engage with AI using the same care and critical thinking we bring to our work in tourism—guided by respect, ethics, and empathy—then we help shape tools that reflect those values. I believe that even a small number of people doing this—perhaps just one in a hundred—can begin to shift the culture of how AI is used.
European values give us a foundation for this approach. The Charter of Fundamental Rights of the European Union speaks of human dignity, fairness, freedom of expression, and the protection of personal data. These principles are not abstract. They are already present in our work: when we write with care, when we protect guest privacy, when we communicate without discrimination. They can, and should, guide how we work with AI, too.
At Skal Europe, we’ve already seen this in action. When preparing our articles on the Sustainability Awards, we used AI to help structure information, but the final stories were crafted by people who knew the projects and cared deeply about their meaning. When we drafted multilingual outreach for Skal Roma, AI offered a first translation, but the final version was shaped by human hands, line by line, to preserve tone and sincerity. In both cases, the technology was useful, but it was not in charge.
“AI can support your message. But it should never shape your values.”
This is the message I want to share, not as a rule, but as a reflection. I believe that we can work with AI while remaining true to our values. We can use it as a partner, not a voice of authority.
“Let AI assist. Let humans decide.”
The more we stay human in how we guide these tools, the more human the results will be. And that, I believe, is a responsibility worth embracing.
Paolo Bartolozzi
Vice President, Skal Europe