Legacy firewalls bottleneck traffic, which can seriously slow down business apps, especially cloud apps where everything has to hairpin back through the appliance. So A makes sense as the real risk, not things like reduced management overhead or licensing. If these were managed differently or in a tiny network, maybe it changes, but in most enterprise setups I think it's definitely A. Anyone see a context where B or D could matter more?
I get where you're coming from, Jack. SAML autoprovisioning (JIT) kicks in only when a user first authenticates via SAML, so the account doesn't exist until that first login. Directory sync is good for bulk updates but doesn't catch users who only ever come in through SAML. Pretty sure that's why SCIM plus SAML autoprovisioning (D) is the better fit here: SCIM pushes accounts ahead of time and JIT catches the rest. But correct me if I've misunderstood.
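For anyone who hasn't touched SCIM directly: the IdP pushes users to the service provider's SCIM endpoint ahead of first login, so the account exists before any SAML assertion ever arrives. Here's a minimal sketch of what a SCIM 2.0 user payload looks like (the attribute choices are generic illustrations from the SCIM spec, not Zscaler-specific mappings):

```python
import json

SCIM_USER_SCHEMA = "urn:ietf:params:scim:schemas:core:2.0:User"
SCIM_ENTERPRISE_EXT = "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"

def build_scim_user(user_name: str, display_name: str, department: str) -> dict:
    """Build a minimal SCIM 2.0 user payload the IdP would POST to /Users."""
    return {
        "schemas": [SCIM_USER_SCHEMA, SCIM_ENTERPRISE_EXT],
        "userName": user_name,          # typically the login email
        "displayName": display_name,
        "active": True,
        # department commonly drives group/policy assignment on the SP side
        SCIM_ENTERPRISE_EXT: {"department": department},
    }

payload = build_scim_user("jack@example.com", "Jack", "Engineering")
print(json.dumps(payload, indent=2))
```

The point is just that this push happens out-of-band, before login, which is exactly the gap JIT alone can't cover.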
Option A is the one I remember from a mock. The Forwarding Profile deals with what happens when DTLS can't be set up, usually falling back to TLS. It's specific about DTLS, not TLS in general. I think A is spot on here but if someone has evidence for D, happy to hear it.
Had something like this in a mock and A was the answer there too. Forwarding Profile is mainly about managing how ZCC should react if DTLS fails, so fallback from UDP to TCP (DTLS to TLS) is key. The other options are more about PAC files which aren't Forwarding Profile settings. Pretty sure it's A but open to correction if someone has seen different behavior.
I don't think it's D, since that's more generic about TLS tunnels. The Forwarding Profile in Zscaler specifically sets what to do if a DTLS tunnel can't be created, so A fits better here. Sometimes people mix up TLS and DTLS, easy trap. Pretty sure A is correct but open to other reasoning if folks disagree.
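To make the fallback behavior everyone's describing concrete, here's a toy sketch of the logic that setting governs (function and value names are mine, not ZCC internals): try the DTLS/UDP tunnel first, and only drop to TLS/TCP if that fails.

```python
def establish_tunnel(try_dtls, try_tls):
    """Attempt a DTLS (UDP) tunnel first; fall back to TLS (TCP) on failure.

    try_dtls / try_tls are callables returning True on success; in real life
    these would be the actual handshake attempts.
    """
    if try_dtls():
        return "DTLS"   # preferred: UDP, lower overhead
    if try_tls():
        return "TLS"    # fallback: TCP, works where UDP is blocked
    return "NONE"       # no tunnel at all; behavior then depends on policy

# Example: a network that blocks UDP 443, so DTLS fails and TLS succeeds
print(establish_tunnel(lambda: False, lambda: True))  # TLS
```

That ordering is the whole question: the profile setting is about the DTLS-failure branch specifically, which is why the generic TLS option is the trap.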
A pretty strong case for A here, since preventing a sensitive doc from going to a USB feels like inline protection too (just at the endpoint). That said, maybe it's less about network traffic and more about device control. I'd still lean A, but not positive.
Yeah, that's inline. D for sure: blocking an attachment in webmail is classic inline data protection since the gateway is actually inspecting and stopping the traffic as it happens. The others are more endpoint- or API-based, not really 'in the line.' Pretty confident but open to corrections if someone has a different take.
I was thinking C here since inspecting the ZIA Web Policy might help spot blocks or misconfigurations that impact app behavior. Maybe not as direct as logs, but still part of the process for policy-driven issues. Let me know if I missed something with this logic.
Pretty sure A is it, since SSL logs will actually show handshake failures from cert pinning. Rebooting the endpoint (B) or checking web policy (C) won't surface these TLS errors directly. I've seen support teams go straight to logs for this reason. Anyone disagree?
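Quick illustration of why the logs are the right first stop: cert-pinned apps abort the handshake when they see the proxy's certificate, and that shows up as a repeatable failure you can filter on. The field names below are made up for the sketch, not the actual ZIA log schema:

```python
# Toy log records; field names are illustrative, not the real ZIA schema.
ssl_logs = [
    {"app": "chrome",  "event": "handshake_ok"},
    {"app": "bankapp", "event": "handshake_failed", "reason": "client_rejected_cert"},
    {"app": "slack",   "event": "handshake_ok"},
    {"app": "bankapp", "event": "handshake_failed", "reason": "client_rejected_cert"},
]

# Apps repeatedly rejecting the inspection cert are cert-pinning suspects.
suspects = {r["app"] for r in ssl_logs
            if r["event"] == "handshake_failed"
            and r.get("reason") == "client_rejected_cert"}
print(suspects)  # {'bankapp'}
```

Rebooting (B) would never surface this pattern, which is why the log route is the direct answer.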
Has to be B here. In Zscaler (and really most policy engines), evaluation is strictly top-down first-match, so the exception only fires if you actually place it above the generic rule; the engine won't reorder by specificity for you. If you reverse it, the generic inspect-all rule would catch everything and the bypass never applies. Pretty sure that's how their order logic works, but let me know if someone saw it act differently.
Had something like this in a mock. ZIA evaluates policies from top to bottom, so you need the exception (bypass) rule above the generic inspect-all one. If you put the catch-all first, nothing else gets a chance to match. Pretty sure that's the logic here but let me know if I missed something.
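The ordering point is easy to demo with a generic first-match evaluator (a sketch of the principle, not Zscaler's actual engine):

```python
def evaluate(rules, domain):
    """Return the action of the first rule whose matcher accepts the domain."""
    for match, action in rules:
        if match(domain):
            return action
    return "allow"  # implicit default if nothing matches

bypass = (lambda d: d.endswith("trusted-bank.com"), "bypass-inspection")
inspect_all = (lambda d: True, "inspect")

# Correct order: specific exception above the catch-all
print(evaluate([bypass, inspect_all], "www.trusted-bank.com"))  # bypass-inspection
# Reversed order: the catch-all matches first and the bypass never fires
print(evaluate([inspect_all, bypass], "www.trusted-bank.com"))  # inspect
```

Same domain, same rules, opposite outcome purely from ordering, which is the whole crux of the question.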