Synopsis: In this Techstrong.ai video interview, Angela Nakalembe, engineering program manager for trust and safety for the YouTube arm of Google, discusses if making artificial intelligence (AI) models compassionate is actually achievable.

There’s been plenty of buzz about “compassionate AI.” Geoffrey Hinton even floated the idea recently. But let’s be clear: AI doesn’t feel anything. It’s pattern recognition and prediction, not empathy or instinct. What we can do is design systems that act in ways that put people first. That’s the real measure of “compassion” here.

Angela Nakalembe laid out how that works in practice. Guardrails have to be built before models ever get released. That means filtering out harmful prompts during training, testing models to avoid bad outputs, and scanning for issues once they’re live. Even then, things slip through—which is why humans stay in the loop. Reviewers flag problems, that data feeds back into the model, and the system learns. And if something goes completely sideways? There has to be a kill switch. No magic here—just the ability to pull the plug.

AI “personalities” complicate things further. Some users like constant affirmation, others want straight answers. That’s where optionality matters—giving people the ability to adjust how the model interacts with them. Education is part of it too. Most users don’t realize they can steer a model’s behavior with the right prompts.

The other big piece is trust. That comes from diversity in teams and data, transparency around how models reach answers, and heavy adversarial testing before anything goes public. Pressure-testing models with tough prompts is what helps prevent harmful surprises down the line.

So, can AI really be compassionate? Not in the human sense. But it can—and should—be engineered to serve human interests. That means keeping people in control, building systems that can explain themselves, and never losing sight of the fact that these are machines, not caretakers.