
When emotions are generated by AI, when will they be perceived as authentic?
A Public Service Advertisement Creation Practice on Alzheimer's Disease

HOLDING THE FOG
An immersive, human–AI co-created exploration of memory, emotion, and presence in Alzheimer’s experience.
Emotions do not always manifest in a strong, clear, and tangible way.
In some experiences, they are slow, vague, and difficult to grasp, yet persistent.
In contemporary advertising and public communication, generative artificial intelligence is being widely used to generate emotions: empathy, care, resonance, and moral appeal.
But a less discussed question is: when emotions are generated by AI, how are they "felt," and when do they seem unbelievable?

Why Alzheimer's?
Alzheimer's disease is not characterized by emotional outbursts. Its emotional reality stems not from sudden loss, but from a long-standing state of dislocation:
Familiarity without naming.
Time out of order.
Relationships that remain, yet constantly slip.
Feelings that exist, yet are difficult to explain.
This is why Alzheimer's becomes a context that demands extreme precision in emotional authenticity.
In this project, I introduced generative artificial intelligence into the creation of a public service advertisement: using AI to generate text and images through multiple rounds of prompting, generation, failure, and filtering, not in pursuit of "touching moments," but to observe the failures and the negotiation around them.
This is not a result-oriented creation, but a continuous process of negotiation.
Prompt → Generation → Failure → Correction → Regeneration → Reflection

Chapter 1
Listen to Their Voice
Text Generation: How structurally unstable text design can create experiential authenticity

In text generation, AI doesn't create resonance through emotional intensity; instead, it exposes the pathways to emotion through unstable or even abstract text design.


“It is Tuesday, or maybe it is raining.”
This line collapses two different systems of orientation — time and weather — into one unstable sensation.
A calendar-based certainty (“Tuesday”) is placed alongside a bodily perception (“raining”).
This reflects how, in Alzheimer’s, time is no longer experienced as a linear structure but as a fluctuating feeling.
The reader is led to experience this confusion not because of forgetting the date, but because of losing the framework for distinguishing time and atmosphere.
"The light is too loud today."
This text deliberately disrupts the stability of sensory language.
By assigning a sound-based descriptor (“loud”) to a visual stimulus (“light”), it creates a moment of perceptual contradiction.
Rather than describing confusion, the sentence performs it.
Here, the reader briefly experiences a failure of classification — the same kind of sensory misalignment often reported by people living with Alzheimer’s disease, where familiar stimuli no longer fit into stable perceptual categories.




"The man in the mirror looks at me with such intense kindness that I feel I should apologize for not knowing who he is."
This sentence stages a subtle fracture of identity.
The body is still recognized as “mine,” but the face becomes “him.”
Rather than depicting dramatic memory loss, the text captures a quieter, more disturbing instability: the loss of self-recognition while emotional connection remains.
This creates an emotional tension between intimacy and estrangement — a core feature of Alzheimer’s relational experience.


“I want to apologise for the silence between us,
but I hope you can hear the love that is humming in the gaps.”
"There is a cold weight in my pocket. I don’t remember putting the heavy metal teeth there, or what they are meant to bite."

Chapter 2
Experience Alzheimer's Life
in Photography
Image Generation Practice: Three-Stage Negotiation Process, Failure and Realism Boundaries






Stage 1
Formal disorder in the absence of emotional-structure guidance
In the initial stage of image generation, the instructions received by the AI primarily focus on thematic and atmospheric aspects, such as "Alzheimer's," "memory loss," "confusion," and "first-person perspective." While these instructions are thematically clear, they remain open-ended in terms of emotional structure and narrative logic.

Stage 1 AI Photography
At this stage, the images generated by the AI frequently exhibit horrifying, fragmented results that fail to follow the instructions.
Distorted faces, abnormal body structures, unbalanced spatial proportions, and warped perspectives appear throughout the images.
These results are formally highly unstable and immediately create a strong sense of discomfort. However, this discomfort remains at the level of visual shock and fails to translate into an inner, experiential emotion.

Stage 2
Reference Input
Surface understanding and the lack of emotional perception



Birthe Piontek, Abendlied, 2019
Reference image. © Birthe Piontek. Not part of final output.
After recognizing that formal disorder cannot generate experiential authenticity, the practice entered its second phase: providing the AI with a more specific emotional and ethical context by introducing explicit artistic references.



Birthe Piontek's photographic work on Alzheimer's disease was chosen as a key reference. Her work, known for its restraint, use of negative space, and indirectness, avoids directly transforming the vulnerability of patients into consumable visual objects.
Birthe Piontek, Abendlied, 2019
Reference image. © Birthe Piontek. Not part of final output.


Birthe Piontek, Abendlied, 2019
Reference image. © Birthe Piontek. Not part of final output.
In Piontek's work, the absence of figures, the obscuring of bodies, and the avoidance of eye contact are not stylistic formalities, but rather ethical strategies to avoid directly objectifying unspeakable experiences.







In this round of AI output, although the generated results were formally closer to the style of the reference works, the emotional expression still appeared hollow.
The restraint in the images was more of a formal imitation than a reflection of an understanding of emotional tension. This stage of practice revealed that even with explicit references, AI tends to reduce emotions to stylistic features rather than experiential logic.
Therefore, when faced with highly emotional visual practices, AI excels at processing "what's in the image" but struggles to spontaneously understand "why certain emotions are deliberately suppressed or silenced."




Comparison between Stage 2 AI Photography and Birthe Piontek's Work
Left: made by AI. Right: made by Birthe Piontek.

Stage 3
Disassembling emotional logic, re-guiding the AI, and revealing its boundaries



Human beings continuously guide, judge, and negotiate with artificial intelligence.
In the third stage, the practice no longer leaves "understanding" to the AI alone. Instead, the human creator actively intervenes, explicitly deconstructing the emotional logic of the reference works and translating it into actionable analytical dimensions.

AI is guided to reinterpret the reference works from three perspectives:
First, how is the image of an Alzheimer's patient presented?
Second, how are the relationships between the patient and those around them constructed and suppressed?
Third, how do still life objects exist as carriers of abstract emotions?
In this stage, the AI begins to exhibit more complex judgment and imitation abilities. The generated results no longer merely reproduce visual styles but begin to attempt to create tension between the "visible" and the "invisible." It is precisely in this guided process of understanding that the AI's emotional boundaries and creative ability become clearer.


For example, guided by the analysis of human emotions and creative insights, these two portraits employ Piontek's unique visual language: displacement and tactile confusion.
They place Alzheimer's patients in everyday, warm environments, depicting their memory loss and confusion. At the same time, this series of photographs innovatively maintains warmth in both the choice of scenes and the stories behind the subjects, changing people's stereotypical impressions of Alzheimer's patients as chaotic, strange, or even terrifying.
Instead, it presents an endearing aspect of them, evoking positive emotions towards this group.




Stage 3 AI Photography Output

However, while AI can recreate Alzheimer's disease in form and theme at this stage, it typically fails to maintain emotional tension autonomously. It still tends to reduce the emotional reality of Alzheimer's to a single dimension of pathological suffering—highlighting forgetting, decline, and confusion—while struggling to portray the coexistence of pain and love.
For example, in depicting the patient-caregiver relationship, despite references highlighting the complexity of such relationships, AI repeatedly regresses to a safer, more traditional narrative: the patient is confused and their condition worsens, while those around them are generally grieving. While this aligns with public perception of Alzheimer's, it compresses emotional tension and imaginative space, reducing the relationship to a one-way support and "forgetting" structure rather than a shared experience of familial affection.

Still Life Photography in Abendlied (2019)
The original still life photographs use restrained yet grotesque objects to evoke feelings of vulnerability, memory, and loss. In contrast, the AI initially merely repeats surface symbols; although it later attempts new metaphors, these ideas always lag behind its visual imitation.
Negotiating Stereotypical Outputs in AI-Generated Alzheimer’s Imagery


Comparison between Stage 3 AI Still Life Photography and Birthe Piontek's Work


This project doesn't attempt to prove whether AI possesses genuine emotions.
It merely demonstrates that emotional authenticity isn't a technological attribute, but rather an unstable experiential outcome.
Generative AI hasn't replaced human emotions; it has changed how emotions are presented and understood.
When technology can efficiently generate content that "looks like emotion," emotion itself is forced to undergo re-examination:
What merely triggers a reaction?
What must be experienced as a state?
Perhaps, the most important thing for the future isn't whether AI can express genuine emotions, but whether it continues to force us to confront the fragility, uncontrollability, and ethical weight of emotions.
In this sense, technology isn't a substitute for emotion, but a presence that continually generates new speculative value.
Therefore, humanity's responsibilities may not have lessened, but rather become more complex.
In the process of generation, filtering, and correction, humans are no longer merely expressers of emotions, but designers and guardians of emotional structures.
In a future where AI is deeply involved in the creative industries, what truly matters is not pursuing ever more efficient resonance, but maintaining the necessary distance, rhythm, and space around emotions. Perhaps only through this kind of humane, delicate insight can emotion escape being reduced to a callable reaction mechanism, and instead continue to be trusted and carried as a fragile yet irreplaceable experience.







