Honestly, I think D. Data replication feels like the key to keeping everything working, since it protects against node loss. C seems like a trap here.
B is the way to go here, since iostat directly shows disk IO bottlenecks. I remember seeing a similar scenario pop up in an exam simulation and it was always about matching the tool to the suspected hardware issue. The other options don't really give you block device stats. Pretty sure B is right, agree?
I actually thought D (htop) because it shows system resource usage in real time, including IO wait. But now I realize it doesn't break down disk IO per block device the way iostat does. Maybe I'm missing something, but htop was my first guess. Disagree?
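To make the iostat suggestion concrete, here's a minimal sketch of spotting a saturated block device. The sample output below is fabricated for illustration, and the column layout assumes sysstat's extended format (`iostat -x`), where `%util` is the last column:

```shell
# On the node, extended device stats, 5 samples at 1-second intervals:
#   iostat -x 1 5
# A busy disk shows high %util and high await. Below we filter a captured
# sample (fabricated for illustration) for devices over 90% utilization.
sample='Device            r/s     w/s     rkB/s     wkB/s   await  %util
sda              12.00    3.50    480.00    120.00    4.10   21.00
nvme0n1         950.00  410.00  60800.00  26240.00   18.70   97.30'

echo "$sample" | awk 'NR > 1 && $NF > 90 { print $1, "is saturated at", $NF "%" }'
```

This is exactly the per-device breakdown htop doesn't give you: htop shows aggregate IO wait, iostat attributes it to a specific disk.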
B looks close since -p is there, but I thought -p was mostly for specifying a profile or path, not chaining commands together. Shouldn't it be used only when you need to select something specific to run inside cmsh? Correct me if that's off.
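For reference, the non-interactive form being debated looks like this. Shown as a dry run (echo) since cmsh only exists on a Base Command Manager head node, and the device-mode commands are placeholders:

```shell
# cmsh -c takes a single string; semicolons separate the commands to run,
# after which cmsh exits (non-interactive). Dry run via echo, since
# cmsh is only available on a BCM head node.
CMDS='device; power status; quit'
echo cmsh -c "\"$CMDS\""
# On a real head node you would run:  cmsh -c "device; power status; quit"
```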
cmsh -c will run both commands as a single string and then exit, which matches what the system admin wants. Haven't seen cmsh-system used like in D. If anyone's seen something different, let me know.

cmsh -c lets you run both commands in sequence straight from the shell, non-interactively. The -p flag in B isn't right for passing multiple commands, and the other options don't really line up with Base Command Manager syntax. If anyone's seen cmsh accept -p for something like this, let me know, but I doubt it.

InfiniBand is the key upgrade here, so D. It directly targets the latency and bandwidth issues common in distributed training jobs, whereas B (jumbo frames) only tweaks Ethernet but can't match InfiniBand performance. Pretty sure D is right unless there's a restriction on hardware changes.
This comes down to what "most effective" really means here. D is right since BCM's Cluster Extension actually automates spinning up AWS nodes only when you hit the local limit, so it saves time and manual effort compared to option B. But if you had some advanced compliance or network config that BCM automation can't handle, B could technically be safer in rare cases. For most setups though, I'm pretty sure D is what they want.
I'd call it D, since BCM's Cluster Extension actually automates the provisioning of AWS nodes when local capacity maxes out. Manual options like B work but aren't the most effective for seamless cloudbursting. Open to other views if anyone's seen it differently on practice exams.
D imo, Cluster Extension is built for this. Options B and C both want you to do manual work which defeats the point of cloudbursting. A is a distractor, since BCM's load balancer alone can't handle auto-provisioning into AWS. Seen similar in practice questions, so pretty sure D is right.
I saw a similar question on a practice test, and I was debating between B and D. Isn’t manually provisioning in AWS (B) more reliable if you want tighter control? BCM’s automation sounds good but can be unpredictable sometimes, right?
docker logs gives you STDOUT and STDERR from the container, the others are for stats, process info or config details. Not 100 percent on the STDIN part, but C matches most exam reports.

docker inspect shows everything about the container, so you might find the logs there. Not 100% sure, maybe missing something obvious with inspect output. If anyone has tried this recently let me know.

docker logs does the job. Anyone disagree?

Option C flips things here. If you don't have cluster-admin rights in your kubeconfig, the Run:AI CLI basically can't automate anything meaningful across nodes. Saw a similar catch on a practice quiz, so pretty sure that's what they're looking for. Happy to be challenged if someone has made scripting work without it.
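Back on the docker logs question: the point that logs captures both STDOUT and STDERR (but not STDIN) can be sketched with the flags below. Shown as a dry run since a Docker daemon may not be present, and mycontainer is a placeholder name:

```shell
# docker logs fetches a container's captured STDOUT and STDERR;
# STDIN is not recorded. Dry run via echo; container name is a placeholder.
C=mycontainer
echo docker logs --timestamps --tail 50 "$C"   # last 50 lines, with timestamps
echo docker logs --follow "$C"                 # stream new output, like tail -f
# docker inspect, by contrast, returns config/state JSON, not log output.
```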
ibtracert works like traceroute for InfiniBand. D (ibnetdiscover) just gives you the general topology, not the node-to-node path. Easy trap there if you're not careful! Open to any counterpoints, but pretty sure it's A.

Yeah, for just connectivity testing ibping (C) is quick, but here you need to see the actual path between nodes. ibtracert shows all the hops in the InfiniBand fabric, which helps spot if routing or cabling is off. Pretty sure A fits best for troubleshooting slow routes. Correct me if I missed something.
Don’t think it’s B or D. A is the only one that actually traces the path between two hosts, which is what you want for slow data between specific nodes. D gives you the whole topology, but not hop-by-hop between nodes. Correct me if I’m off here.
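For the record, the path trace being discussed looks like this. Dry run via echo, with placeholder LIDs (look up real LIDs with ibstat on each node):

```shell
# ibtracert traces the hop-by-hop path through the IB fabric between two
# endpoints, addressed by LID by default. The LIDs below are placeholders;
# find real ones with ibstat on each node. Dry run via echo (no fabric here).
SRC_LID=2
DST_LID=9
echo ibtracert "$SRC_LID" "$DST_LID"
# Compare with ibnetdiscover, which dumps the whole topology rather than one path.
```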