In development, January 2024

Feminoteka Foundation

AI Training Simulator for Crisis Professionals

An AI-powered simulation platform that lets professionals practise realistic disclosure conversations — safely, repeatedly, and without involving real survivors.

AI e-learning NGO Python Vue AWS

Who is Feminoteka

Feminoteka Foundation is a Polish NGO founded in 2000 dedicated to ending violence against women. Their work spans advocacy, public education, and — critically — the professional training of people on the frontline: social workers, law enforcement officers, healthcare providers, and crisis intervention specialists who support survivors of sexual violence.

Their training programmes exist because disclosure conversations are among the most consequential a professional will ever have. Done well, they open a path toward safety, justice, and healing. Done poorly — through unintended pressure, disbelief, or clumsy questioning — they can retraumatise the person who came for help.

The challenge

Professionals who work with survivors of sexual violence need to learn how to conduct intake conversations: how to listen, how to ask without leading, how to hold space for disclosure at the survivor’s pace. This is a skill. Skills require practice.

The problem: you cannot practise on real survivors.

It is unethical. A person in crisis, who has just found the courage to speak, is not a training exercise. The power imbalance, the emotional stakes, the potential for harm — these make live practice on real cases simply unacceptable.

The alternatives are inadequate. Role-play with colleagues lacks realism; everyone knows it is a simulation, nobody is in genuine distress, the emotional weight is absent. Paper case studies are static. Video demonstrations are passive.

What Feminoteka needed was a way to create a realistic practice environment — where professionals could make mistakes, correct them, develop instincts — without putting a single real person at risk.

Why AI simulation

AI does not replace the experience of working with real survivors. Nothing does. But it fills a specific, critical gap: the gap between theoretical training and first real-world contact.

A well-calibrated AI simulation can respond with hesitation. It can express the contradictory signals a survivor in distress actually sends — partial disclosure followed by deflection, answers that circle back rather than go forward. It can model the emotional weight of a conversation that a professional needs to learn to hold. And it can do this repeatedly, at any time, without any of the ethical constraints that make real-case practice impossible.

This is not “we wanted to use AI.” This is: an AI simulation is the only tool that can create a safe practice environment for conversations this consequential. The alternative is sending professionals into real crisis situations with only theoretical preparation.

Feminoteka made the ethical case. Mutant Unit built the technical solution.

What we built

The platform lets professionals log in through their existing training environment, select a scenario, and conduct a practice conversation with an AI-simulated survivor. The AI plays the role of a person navigating a disclosure — not a perfect, cooperative subject, but a person who is frightened, uncertain, and processing something difficult.

The conversation happens via text, or optionally via voice — professionals can practise spoken interaction, which is closer to real conditions. When the session ends, a transcript is available for supervisors to review with the professional, turning the simulation into a structured learning moment rather than a private exercise.
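The session-and-transcript flow above can be sketched in a few lines. This is a minimal, hypothetical shape (the `Session` and `Turn` names and fields are illustrative, not the platform's real schema): each turn is recorded as the conversation happens, and a plain-text transcript can be produced for supervisor review afterwards.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Turn:
    role: str          # "trainee" or "simulated_survivor" (illustrative labels)
    text: str
    at: datetime


@dataclass
class Session:
    """One practice conversation. Hypothetical sketch, not the real model."""
    scenario_id: str
    turns: list[Turn] = field(default_factory=list)

    def add(self, role: str, text: str) -> None:
        # Timestamp each turn so supervisors can see pacing, not just content.
        self.turns.append(Turn(role, text, datetime.now(timezone.utc)))

    def transcript(self) -> str:
        """Plain-text transcript for the post-session review."""
        return "\n".join(f"{t.role}: {t.text}" for t in self.turns)
```

Keeping the transcript as a first-class artefact of the session, rather than an afterthought, is what turns the exercise into reviewable training material.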

Scenarios are grounded in Feminoteka’s training methodology, reviewed by psychologists, and calibrated to reflect a range of disclosure types and emotional states. The system is embedded in Feminoteka’s existing learning management platform (LearnWorlds), so professionals encounter it as part of their normal training flow — not a separate tool to learn.

The technical architecture:

  • Backend: Python / FastAPI, deployed on AWS. Conversation state persists in DynamoDB so sessions can be interrupted and resumed.
  • AI: AWS Bedrock with Claude as the primary model in production. Local Ollama is also supported for development. The AI layer is abstracted so the underlying model can be swapped without rewriting the simulation logic.
  • Voice: TTS/STT integration for spoken-conversation mode.
  • Frontend: Vue 3 / TypeScript, Tailwind CSS, deployed on Cloudflare Pages.
  • LMS integration: Embedded in LearnWorlds via iframe with SSO and webhook-based progress tracking. Completion events surface in the learner’s training record automatically.
  • Prompts: Bilingual (Polish and English), written in close collaboration with Feminoteka’s team and reviewed by psychologists. The prompts that shape the AI’s behaviour are treated as training artefacts — versioned, reviewed, not improvised.
  • Session recording: Full transcripts stored for supervisor review with appropriate access controls.
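The "AI layer is abstracted" point can be illustrated with a small sketch. Everything here is an assumption for illustration — `ChatModel`, `ScriptedModel`, and `run_turn` are hypothetical names, and the Bedrock call is stubbed out — but it shows the shape of the design: simulation logic depends only on a narrow interface, so the production model (Bedrock/Claude), a local Ollama instance, or a deterministic test double can be swapped in without touching the conversation code.

```python
from typing import Protocol


class ChatModel(Protocol):
    """The narrow interface the simulation logic depends on."""

    def reply(self, history: list[dict[str, str]]) -> str: ...


class BedrockModel:
    """Production backend sketch. The real call would go through the
    AWS Bedrock runtime; omitted here because it needs credentials."""

    def reply(self, history: list[dict[str, str]]) -> str:
        raise NotImplementedError("requires AWS Bedrock access")


class ScriptedModel:
    """Deterministic stand-in for tests and local development."""

    def __init__(self, lines: list[str]) -> None:
        self._lines = iter(lines)

    def reply(self, history: list[dict[str, str]]) -> str:
        return next(self._lines, "...")


def run_turn(model: ChatModel, history: list[dict[str, str]], user_msg: str) -> str:
    """One simulation turn: record the trainee's message, get the
    simulated survivor's response, record it, return it."""
    history.append({"role": "user", "content": user_msg})
    answer = model.reply(history)
    history.append({"role": "assistant", "content": answer})
    return answer
```

Because `run_turn` never names a concrete model, swapping Bedrock for Ollama is a construction-time decision, not a rewrite of the simulation logic.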

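Webhook-based progress tracking implies the backend must trust incoming completion events. A common pattern is HMAC signature verification; the sketch below is a generic illustration of that pattern, not LearnWorlds' actual signing scheme (the function name and hex-signature format are assumptions).

```python
import hashlib
import hmac


def verify_webhook(secret: bytes, body: bytes, signature_hex: str) -> bool:
    """Accept a progress event only if its HMAC-SHA256 signature matches.

    compare_digest is constant-time, which avoids leaking how many
    leading characters of a forged signature were correct.
    """
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

Unsigned or mis-signed events are dropped before they can touch a learner's training record.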
Where we are

This project is in active development. We are building iteratively with Feminoteka — each feature ships when it is ready, not before. First training cohorts have not yet completed the full programme; real-world data on the platform’s effectiveness is coming.

We are deliberately not publishing metrics we do not yet have. When the first cohorts complete, we will report what we observe honestly — including what works less well than expected and what we adjusted.

The project is progressing. The technology is sound. What matters is that Feminoteka’s professionals are closer to having a practice tool that actually serves the people they work to protect.