The end of the Wild West for AI: what the European law on artificial intelligence in marketing will change

Artificial intelligence is no longer the Wild West of marketing. If a few years ago AI seemed a limitless space for experimentation, today it is entering the rulebook. The European Union adopted the EU AI Act – the world’s first law that defines how artificial intelligence can be used in business, advertising, and communication.

For marketers, this is not just another document from Brussels. It is a signal that the era of uncontrolled experiments with AI creatives, automated campaigns, and content generation is coming to an end. Now it is not only about speed and creativity, but also about transparency, security, and ethics.

Ukraine is not yet in the EU, but Ukrainian agencies, startups, and SaaS companies are already working with European clients. This means that the new standards apply to us right now. Ignoring the requirements of the EU AI Act means risking not only contracts but also the trust of partners, which will be much harder to restore than adapting to the new rules.

The EU AI Act in brief: what it is and who it applies to

The EU AI Act is the first law in the world to create a comprehensive system for regulating artificial intelligence. It was adopted in 2024, and its provisions phase in gradually between 2025 and 2027. The document defines how AI can be developed, implemented, and used to avoid manipulation, discrimination, and risks for users.

The law is based on the principle of risk classification. Each AI service or model belongs to a specific category depending on the potential impact on humans:

Unacceptable risk – systems that pose a threat to human safety, rights, or dignity. For example, AI for mass surveillance or manipulation of user behavior. Such systems are banned completely.
High risk – tools that can influence key decisions in education, employment, lending, healthcare, or public administration. They are subject to certification, mandatory reports, audits, and transparent documentation.
Limited risk – systems that interact directly with users, create or modify content, and collect behavioral data. They can be used, but with mandatory labeling and an explanation of how the AI operates.
Minimal risk – basic, non-invasive technologies that do not require additional restrictions (for example, recommendation filters in apps or simple chatbots without data collection).

For marketers, the key categories are the second and third. They cover the majority of modern scenarios of AI usage in advertising and communication:

Automated targeting and audience segmentation. AI that predicts user behavior and determines who to show ads to can fall into the “high risk” category.
Personalization of ads and recommendations. If the system adapts the message to a specific person, it is necessary to ensure transparency and explain that the content is generated algorithmically.
Generative content. Texts, photos, videos, or voices generated by AI should be clearly labeled. This is a requirement of the “limited risk” category.
Collection and analysis of user data. Any tool that works with behavioral or personal data must be tested for compliance with the GDPR and the principles of ethical use.

The purpose of the law is not to limit the development of AI, but to make it safe and understandable for society. For marketing, this means a shift from “creativity without rules” to responsible use of technologies, where transparency and trust become as important a metric as clicks or conversions.

5 requirements of the EU AI Act to take into account now

1. Transparency and labeling of AI content

One of the basic requirements of the EU AI Act is transparency in the use of artificial intelligence. Users have the right to know when they are interacting with an algorithm rather than a human, and when content was created by a machine.

The law directly obliges companies to label any content generated or modified by AI if it can be perceived as genuine. This applies not only to texts or images, but also to video, audio, chats, voice messages, and even 3D models.

For marketing, this rule means a few specific things:

AI-generated content should be labeled. If a video is created by a neural network, a photo by Midjourney, or a text by ChatGPT, the publication should state that it is AI content. This is not only a matter of ethics, but also of legal compliance.
No “deepfake” effects without warning. If AI changes the appearance, voice, or behavior of a person in an advertisement, the user should know that they are seeing a modeled image, not a real person.
Disclosure of bots. If a customer or user is communicating with a chatbot (for example, in support, subscription, or consultation), this should be clearly stated at the beginning of the interaction.
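
The bot-disclosure point above can be sketched in code. This is a minimal illustration, not a prescribed implementation: the `DisclosedChatbot` class and the disclosure wording are assumptions, and the Act does not mandate any specific text.

```python
# Sketch: a wrapper that discloses the bot's nature at the start of the
# interaction, before the first AI reply reaches the user.
class DisclosedChatbot:
    DISCLOSURE = (
        "You are chatting with an automated assistant (AI), "
        "not a human operator."
    )

    def __init__(self, reply_fn):
        self.reply_fn = reply_fn   # the underlying AI reply function
        self.disclosed = False     # whether the user has been informed yet

    def respond(self, user_message: str) -> list[str]:
        messages = []
        if not self.disclosed:     # disclose once, at the very beginning
            messages.append(self.DISCLOSURE)
            self.disclosed = True
        messages.append(self.reply_fn(user_message))
        return messages

bot = DisclosedChatbot(lambda msg: f"Echo: {msg}")
print(bot.respond("Hello"))   # first reply includes the disclosure
print(bot.respond("Thanks"))  # later replies do not repeat it
```

The design point is simply that the disclosure happens before the first answer, not buried in a footer or terms page.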

A practical example is AI influencers and virtual brand ambassadors. If a company uses an AI-generated model for product promotion or communication, it must disclose that the character is not a real person. The same applies to voice assistants in ads or videos created with generative models.

The requirement of transparency is not intended to “stifle creativity”, but to maintain trust between the brand and the audience. The European market is moving towards a format where AI content is nothing to hide but a marker of honesty. Companies that openly indicate the use of artificial intelligence gain customer loyalty faster and reduce the risk of legal claims.

In the coming years, this practice will become a standard not only for the EU but also for companies operating in international markets, including Ukrainian marketing agencies.

2. Control over training data

The second key requirement of the EU AI Act is full traceability of data on which artificial intelligence is trained or operates. In other words, a company must know where the data used by its AI model came from and whether it violates the law or the rights of others.

The law establishes the principle of data provenance – the origin and transparency of data. It means that any AI tool that creates, analyzes, or automates content should have documented information about data sources, collection methods, and conditions of its use.

This rule is especially important for marketing, as AI systems in creative, copywriting, or analytics work on large sets of other people’s content – texts, images, videos, user data. If these sets contain copyrighted material or personal data without the consent of the owners, the company using AI may be held liable.

What it means in practice:

A company should know what data its tool is “learning” from. If you use an external AI service (e.g. ChatGPT, Midjourney, Jasper, Runway, Synthesia), you should check whether the service policy specifies the source of training data and whether the GDPR requirements are met.
You cannot use models that violate copyright. If AI generates a visual or video that partially copies the design, logo, or visual elements of another brand, it may be considered an intellectual property infringement.
The use of personal data must be legal. All systems that analyze user behavior or collect information for targeting should work only with data obtained with consent and comply with the GDPR principles.

For Ukrainian marketers, this means: responsibility begins at the stage of tool selection. It is necessary to check whether the AI platform you use complies with European standards of transparency and copyright.

Example of a risk: If a neural network creates an image for an advertising campaign and recognizable elements of the Nike or Apple brand accidentally appear on it, such content can be interpreted as unauthorized use of someone else’s intellectual property. At best, this will lead to the removal of the advertisement, at worst to legal claims.

Control over training data is not just a formality. It is an insurance policy for a brand that wants to use AI legally and without reputational risks. In the coming years, European clients will require contractors to prove compliance with this principle, just as they require GDPR compliance today.

3. Prohibition of manipulative and discriminatory algorithms

One of the most important provisions of the EU AI Act concerns the ethical impact of artificial intelligence. The law explicitly prohibits the use of AI for emotional or cognitive manipulation, i.e. any actions that intentionally influence human decisions without leaving them with a conscious choice.

In marketing, this provision is of particular importance, as it is here that AI most often plays on emotions, from personalized recommendations to psychologically accurate messages in advertising. European regulators have established a clear line between acceptable personalization and manipulation.

What is prohibited:

“Dark patterns” in UX. These are hidden interface mechanisms that force users to click a button, make a purchase, or provide data without realizing it. For example, when the “Refuse” button is less visible, or the window closes only after agreeing to the terms and conditions.
AI personalization through emotional pressure. Algorithms that push users to take action due to fear, anxiety, or an artificially created sense of urgency are prohibited. For example, advertising based on a user’s psychological profile (“You risk losing your job – buy our course now”).
Discriminatory targeting. Algorithms cannot generate or limit the display of ads based on age, gender, ethnicity, religion, or health status if it affects equality of access to goods or services. For example, AI that hides job offers for a certain age group or does not show ads for medical services to women violates the EU AI Act.

Example: A system that analyzes the emotional state of a user by facial expression or tone of voice and adapts ads to this state (for example, shows “comforting” or “motivational” messages when a person is upset) is considered manipulative and is prohibited under the Act.

For marketers, this means: AI should not exploit user weaknesses even if it increases conversions. Instead of psychological pressure, trust, benefit, and honesty should be at the center of communication.

In the future, this requirement will affect the way creatives are developed, UX design, and content personalization systems. European regulators are gradually moving from the principle of “what sells works” to a new standard: “what is transparent and ethical is allowed”. For brands, this is not a restriction, but an opportunity to create marketing that does not manipulate, but builds long-term loyalty.

4. Human supervision and responsibility

Another key principle of the EU AI Act is human control over artificial intelligence. All systems that make decisions or influence users should work according to the principle of “human in the loop”; that is, AI decisions cannot be final without human involvement.

The purpose of this provision is to ensure that AI remains a tool, not a replacement for human thinking. In marketing, this means that any AI, from a text generator to an analytics or campaign optimization system, should be used under the control of a human being who is responsible for the result.

What it looks like in practice:

No AI ads can be published without human review. The content manager or editor should evaluate the generated material for accuracy, ethics, and compliance with the brand policy.
AI analytics is not the only factor in business decision-making. An algorithm can suggest which segment responds better, but the final decision on targeting, budget, or messages is made by a human.
The company should have a person responsible for AI risks. It can be a separate role, for example, AI Compliance Manager or Data Protection Officer, who controls the use of neural networks within the law.

The “human in the loop” requirement is aimed at reducing the risks of automated errors and manipulations, as well as providing the ability to explain each AI decision. As a result, brands that retain human control have not only legal protection but also greater customer trust.

5. Documentation and reporting of AI usage

The EU AI Act introduces a new rule – mandatory documentation of processes where artificial intelligence is used. This means that the company must clearly understand and record:

which AI systems it uses;
for what tasks;
what data is processed;
what results or solutions AI generates.
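
The four points above can be captured in a simple internal register. The sketch below is illustrative only: the `AIUsageRecord` fields and the example entry are assumptions, not a format prescribed by the EU AI Act.

```python
# Sketch of an internal AI-usage register covering the four recorded points:
# which system, for what task, what data, and what output it generates,
# plus the human responsible for reviewing the result.
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class AIUsageRecord:
    system: str          # which AI system is used
    task: str            # for what task
    data_processed: str  # what data is processed
    output: str          # what results or decisions the AI generates
    reviewed_by: str     # the human responsible for the result
    logged_on: date = field(default_factory=date.today)

register: list[AIUsageRecord] = []

register.append(AIUsageRecord(
    system="Jasper",
    task="Draft ad copy for an email campaign",
    data_processed="Product descriptions supplied by the client",
    output="Three headline variants, edited before publication",
    reviewed_by="Content editor",
))

# Each entry can be exported for an audit, e.g. as a plain dict:
print(asdict(register[0]))
```

Even a lightweight log like this gives the company something concrete to show an auditor or a European client.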

Such records become the basis for internal audit and proof of compliance with the law. If a business works with European clients or partners, AI reporting may be a prerequisite for cooperation.

In practice, it looks like this:

A marketing agency that uses AI to generate creative or predict CTR should have an internal document that specifies which tool is used (for example, Jasper or Runway), what data is loaded into it, and how these results are checked by humans.
If a company uses AI analytics to make targeting decisions, it should record what parameters were taken into account and who approved the final decision.
In the event of an audit or dispute, the brand can show auditors that the system is used legally, data is protected, and decisions are under the control of specialists.

Such documentation may seem like bureaucracy, but in reality it is a tool of trust and security. It not only protects the company legally but also enhances its reputation among partners: a business that honestly keeps records of AI solutions is perceived as mature and reliable.

The EU AI Act effectively creates a new standard – “AI accountability”, i.e. the responsible use of artificial intelligence. And for marketers, it means that the future belongs not to those who experiment faster, but to those who do it consciously and transparently.

AI in marketing is entering the maturity phase

The EU AI Act has become a turning point for the entire industry, from developers to marketers. What looked like technological freedom without rules yesterday is now turning into a system with clear standards, ethical boundaries, and responsibilities.

Artificial intelligence ceases to be just a tool for experiments and becomes a marketing infrastructure that requires control, transparency, and trust. For Ukrainian companies, this is not a distant prospect, but a practical necessity, especially if they work with European clients or in the global market.

Compliance with the EU AI Act means more than just “legal purity”. It is a sign of a brand’s maturity, reputation, and readiness to operate in a world where technology is evolving faster than regulation.

The marketing of the future is a combination of innovation and ethics. Those who learn how to use AI responsibly will not only be able to create effective campaigns, but also build long-term trust, something that is valued above any click or reach today.

Artificial intelligence has opened up a lot of opportunities for marketers, but now the main question is not “what else can AI do?” but “how do we use it fairly, safely, and intelligently?” And this is what will determine who will stay in the game in the coming years.
