Had something really similar in a mock, picked D. Invocations metric gives you exactly the API call counts, and CloudWatch alarms let you alert right away when it hits the threshold. Way more straightforward than using CloudTrail or Debugger for this use case. Pretty sure that's what AWS expects here, but happy to hear other takes.
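For anyone who wants to see what that alarm looks like in practice, here's a minimal sketch. The endpoint name, alarm name, and threshold are placeholders I made up; in real use you'd pass the dict to a boto3 CloudWatch client via `cloudwatch.put_metric_alarm(**alarm_params)`.

```python
# Sketch of a CloudWatch alarm on the SageMaker Invocations metric.
# "my-endpoint" and the threshold are placeholders, not from the question.
alarm_params = {
    "AlarmName": "endpoint-invocation-threshold",
    "Namespace": "AWS/SageMaker",           # Invocations lives in this namespace
    "MetricName": "Invocations",            # counts InvokeEndpoint requests
    "Dimensions": [
        {"Name": "EndpointName", "Value": "my-endpoint"},
        {"Name": "VariantName", "Value": "AllTraffic"},
    ],
    "Statistic": "Sum",                     # total calls per period
    "Period": 300,                          # 5-minute windows
    "EvaluationPeriods": 1,
    "Threshold": 1000.0,                    # alert once calls exceed this
    "ComparisonOperator": "GreaterThanThreshold",
}

print(alarm_params["MetricName"])
```

The point is that no extra plumbing (CloudTrail parsing, Debugger hooks) is needed; the metric already exists and the alarm is one API call.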
Yeah, D fits best here. If both training and validation loss are high and wobbling up and down, that's usually what happens when the learning rate is set too high for SGD. Lowering it should help the model converge better and reduce those jumps. Pretty confident on this one but open if someone sees it differently.
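The oscillation described above is easy to reproduce on a toy problem. This is just plain gradient descent on f(w) = w², not anything SageMaker-specific, but it shows why a too-high learning rate makes the loss bounce around instead of converging:

```python
# Gradient descent on f(w) = w^2, whose gradient is 2w.
# Each step does w_{t+1} = w_t - lr * 2 * w_t = w_t * (1 - 2*lr).
def sgd(lr, steps=20, w=1.0):
    for _ in range(steps):
        w -= lr * 2 * w
    return w

w_high = sgd(lr=1.1)   # |1 - 2*1.1| = 1.2 > 1: w flips sign and grows each step
w_low  = sgd(lr=0.1)   # |1 - 2*0.1| = 0.8 < 1: w shrinks steadily toward 0

print(abs(w_high) > 1.0)   # diverging / oscillating
print(abs(w_low) < 0.1)    # converging
```

Same intuition at scale: if the step overshoots the minimum every iteration, both training and validation loss stay high and erratic, and shrinking the learning rate fixes it.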
A company is building a web-based AI application by using Amazon SageMaker. The application will provide the following capabilities and features: ML experimentation, training, a central model registry, model deployment, and model monitoring. The application must ensure secure and isolated use of training data during the ML lifecycle. The training data is stored in Amazon S3. The company needs to use the central model registry to manage different versions of models in the application. Which action will meet this requirement with the LEAST operational overhead?
I picked D because tags can track model versions and give you quick filtering if you want to organize by extra metadata later. Thought tags were easier, but maybe it gets messy at scale? Not totally sure, someone correct me if model groups do more out of the box.
I don't think D is right for least operational overhead. Tagging each version gets messy fast and isn't meant for actual version management. Model Registry model groups (C) handle version tracking and cataloging natively in SageMaker, so you set it up once and it's managed for you. The tags option is more of a trap because it looks flexible but adds work. Pretty sure C is what AWS expects here, unless I'm missing some hidden requirement?
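To make the "set it up once" point concrete, here's a rough sketch of the option C flow. The group name and descriptions are placeholders; in real use these dicts would go to a boto3 SageMaker client via `sm.create_model_package_group(**group_params)` and `sm.create_model_package(**package_params)`.

```python
# Sketch of the Model Registry flow behind option C (names are placeholders).
# Step 1: create the model package group once.
group_params = {
    "ModelPackageGroupName": "my-app-models",
    "ModelPackageGroupDescription": "All versions of the application's model",
}

# Step 2: each new version is just another model package registered to the
# same group; SageMaker assigns incrementing version numbers automatically,
# so there is no tag bookkeeping to maintain.
package_params = {
    "ModelPackageGroupName": "my-app-models",
    "ModelPackageDescription": "next release candidate",
    "ModelApprovalStatus": "PendingManualApproval",
}

print(package_params["ModelPackageGroupName"] == group_params["ModelPackageGroupName"])
```

With tags (option D) you'd be inventing and enforcing your own versioning scheme by hand; the group does that bookkeeping for you.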
Option D here. Data Wrangler's balance data operation is specifically designed for quick class balancing and sits right inside SageMaker, so fewer steps are needed than with Glue or Athena. Small catch: if the source data weren't in S3 or SageMaker, setup could be more involved, but in this scenario it's the fastest. Pretty sure that's why D wins.
I'd say this is the same as a common question on practice tests, and D is usually the right move. SageMaker Data Wrangler has that balance data operation built in, so oversampling takes just a couple of clicks. The workflow stays in SageMaker too, so it's definitely less effort than using Glue or Athena. I think D, but let me know if you disagree.
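If it helps to see what "balance data" with random oversampling conceptually does, here's a rough pure-Python illustration. This is just the idea (duplicate minority-class rows until the classes match), not Data Wrangler's actual implementation:

```python
import random

# Rough illustration of random oversampling, the idea behind Data Wrangler's
# "balance data" operation. NOT Data Wrangler's actual code.
def oversample(rows, label_key="label", seed=0):
    rng = random.Random(seed)
    by_class = {}
    for row in rows:
        by_class.setdefault(row[label_key], []).append(row)
    target = max(len(group) for group in by_class.values())
    balanced = []
    for group in by_class.values():
        balanced.extend(group)
        # duplicate random rows until this class reaches the majority count
        balanced.extend(rng.choice(group) for _ in range(target - len(group)))
    return balanced

data = [{"label": "fraud"}] * 2 + [{"label": "ok"}] * 8
balanced = oversample(data)
print(sum(r["label"] == "fraud" for r in balanced))  # 8
print(sum(r["label"] == "ok" for r in balanced))     # 8
```

In Data Wrangler this whole thing is a single transform in the UI, which is the "couple of clicks" being compared against writing Glue or Athena jobs.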
Embedding, Retrieval Augmented Generation (RAG), Temperature
These match the AWS generative AI patterns in official docs and labs. Used similar terms in practice tests too. I think this is what they're after, but open to other takes if someone disagrees!
Yeah, I'd pick Embedding, Retrieval Augmented Generation (RAG), and Temperature. Embedding is about turning text into vectors, RAG mixes in fresh external data, and Temperature tweaks the randomness in generated responses. Pretty sure that's what AWS expects here, but let me know if you see it differently.
Embedding, Retrieval Augmented Generation (RAG), Temperature. These match the core concepts from LLMs in Bedrock: embeddings for vector space meaning, RAG for bringing in external context, temperature for randomness in outputs. Pretty sure this is right but correct me if I missed something!
Why would anyone pick Token here? I think it's meant as a distractor, since the main focus is on concepts like Embedding, RAG, and Temperature. If you interpret the prompts closely, Token doesn't really fit those definitions. Anyone disagree?
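Since temperature is the least intuitive of the three, here's a tiny sketch of what it actually does: logits get divided by the temperature before the softmax, so low temperature sharpens the distribution (more deterministic output) and high temperature flattens it (more random output). Just a generic illustration, not any specific Bedrock model's internals:

```python
import math

# Temperature scaling: divide logits by T before softmax.
# Low T -> sharper (more deterministic); high T -> flatter (more random).
def softmax_with_temperature(logits, temperature):
    scaled = [x / temperature for x in logits]
    m = max(scaled)                         # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
sharp = softmax_with_temperature(logits, temperature=0.5)
flat = softmax_with_temperature(logits, temperature=2.0)

print(max(sharp) > max(flat))  # True: lower temperature concentrates probability
```

A token, by contrast, is just the unit of text the model reads and writes, which is why it doesn't answer any of the three prompts in the question.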
HOTSPOT An ML engineer needs to use Amazon SageMaker Feature Store to create and manage features to train a model. Select and order the steps from the following list to create and use the features in Feature Store. Each step should be selected one time. (Select and order three.) • Access the store to build datasets for training. • Create a feature group. • Ingest the records.
Yeah, the right flow is create the feature group, then ingest the records, and finally access the store for training datasets. You need that structure before you can put any records in, and you can't build a dataset until the features are there. Pretty sure this matches how SageMaker expects you to use Feature Store. Let me know if anyone sees it differently.
This was in my actual exam. The correct sequence is: create a feature group, ingest the records, then access the store to build datasets for training. If the question asks for the first step, it's definitely making the feature group but if it's about prepping data already available, that could flip things.
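For anyone who wants the three steps mapped to actual API shapes, here's a sketch. All names and values are placeholders; in real use step 1 goes to `sm.create_feature_group(...)`, step 2 to the feature store runtime's `put_record(...)`, and step 3 is typically an Athena query against the offline store in S3.

```python
# Step 1: create a feature group (the schema must exist before any data goes in).
create_params = {
    "FeatureGroupName": "customers",                    # placeholder name
    "RecordIdentifierFeatureName": "customer_id",
    "EventTimeFeatureName": "event_time",
    "FeatureDefinitions": [
        {"FeatureName": "customer_id", "FeatureType": "String"},
        {"FeatureName": "event_time", "FeatureType": "String"},
        {"FeatureName": "ltv", "FeatureType": "Fractional"},
    ],
}

# Step 2: ingest records into the group.
record_params = {
    "FeatureGroupName": "customers",
    "Record": [
        {"FeatureName": "customer_id", "ValueAsString": "c-001"},
        {"FeatureName": "event_time", "ValueAsString": "2024-01-01T00:00:00Z"},
        {"FeatureName": "ltv", "ValueAsString": "123.45"},
    ],
}

# Step 3: access the store to build a training dataset, e.g. by querying
# the offline store (backed by S3) through Athena.
query = 'SELECT * FROM "customers"'

print(record_params["FeatureGroupName"] == create_params["FeatureGroupName"])
```

The ordering constraint is visible in the shapes themselves: the record references the group by name, and the query reads from the table the group created.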
