Q: 11
You work at a subscription-based company. You have trained an ensemble of trees and neural
networks to predict customer churn, which is the likelihood that customers will not renew their
yearly subscription. The average prediction is a 15% churn rate, but for a particular customer the
model predicts that they are 70% likely to churn. The customer has a product usage history of 30%, is
located in New York City, and became a customer in 1997. You need to explain the difference
between the actual prediction, a 70% churn rate, and the average prediction. You want to use Vertex
Explainable AI. What should you do?
Options
Discussion
B. Some folks go for C, but integrated gradients is mostly for image and text models, not tabular data like churn. Easy trap there.
B
I don’t think it’s B. I’d say C. Integrated gradients does get used for explainability, especially when you want to trace how the prediction changes as inputs shift from a baseline, and it might still help even if this isn’t image data. Maybe the trap here is Shapley?
Probably B, since Vertex Explainable AI's sampled Shapley explanations break the prediction difference down feature by feature, which is exactly what's asked: explain the gap between this customer's 70% and the 15% average. It fits tabular data like this (usage, location, tenure). Integrated gradients requires a differentiable model, and an ensemble that includes trees isn't differentiable, so it isn't even supported for this model; it's aimed at neural networks on images or text. Pretty sure sampled Shapley is what Google recommends for tree ensembles. Disagree?
B tbh, because sampled Shapley in Vertex Explainable AI is the way to show individual feature contributions for tabular models. Confident pick.
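For anyone who wants to see what this looks like in practice, here's a minimal sketch of enabling sampled Shapley when uploading a model with the Vertex AI Python SDK and then requesting an explanation. The project, bucket, serving container, tensor names, and instance schema below are all hypothetical placeholders, not values from the question:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # hypothetical project

# Tell Vertex which input/output tensors to attribute (names are hypothetical).
explanation_metadata = aiplatform.explain.ExplanationMetadata(
    inputs={
        "churn_features": aiplatform.explain.ExplanationMetadata.InputMetadata(
            input_tensor_name="input_layer"
        )
    },
    outputs={
        "churn_probability": aiplatform.explain.ExplanationMetadata.OutputMetadata(
            output_tensor_name="output_layer"
        )
    },
)

# Sampled Shapley works for non-differentiable models such as tree ensembles;
# path_count trades attribution accuracy against explanation latency.
explanation_parameters = aiplatform.explain.ExplanationParameters(
    {"sampled_shapley_attribution": {"path_count": 10}}
)

model = aiplatform.Model.upload(
    display_name="churn-model",
    artifact_uri="gs://my-bucket/churn-model/",  # hypothetical bucket
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-12:latest",
    explanation_metadata=explanation_metadata,
    explanation_parameters=explanation_parameters,
)

endpoint = model.deploy(machine_type="n1-standard-4")

# The instance schema depends on your model's serving signature; this one is a
# placeholder. feature_attributions gives each feature's contribution to the
# gap between this customer's prediction and the baseline.
response = endpoint.explain(instances=[{"usage": 0.30}])
for attribution in response.explanations[0].attributions:
    print(attribution.feature_attributions)
```

Reading the `feature_attributions` for this customer would show how much usage, location, and tenure each push the prediction from the baseline toward 70%, which is exactly the breakdown the question asks for.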