Q: 9
Scenario
You are working as an Enterprise Architect within an Enterprise Architecture (EA) team at a large
government agency. The agency has multiple divisions.
The agency has a well-established EA practice and follows the TOGAF standard as its method for
architecture development. Along with the EA program, the agency also uses various management
frameworks, including business planning, project/portfolio management, and operations
management. The EA program is sponsored by the Chief Information Officer (CIO), who has actively
promoted architecting with agility within the EA department as her preferred approach for projects.
The government has mandated that the agency prepare itself for an Artificial Intelligence (AI)-first
world, referred to as its “AI-first” plan. As a result, the agency is looking to determine the impact and
role that AI will play moving forward. The CIO has approved a Request for Architecture Work to
examine how AI can be used for services across the agency. She has noted that digital platforms will
be a priority for investment in order to scale the planned AI applications. Using AI to automate tasks
and streamline operations is seen as a major advantage. Process automation and improved efficiency
of manual, repetitive activities have been identified as the key benefits of applying generative AI to
the agency’s business. This will include back-office automation, for example for help center agents
who receive hundreds of email inquiries. It should also improve services for citizens by making them
more efficient and personalized to each individual’s needs.
Many agency leaders are concerned about relying too heavily on AI. Some believe their employees
will need to learn new skills, while some employees fear losing their jobs to AI. Other leaders worry
about security and cyber resilience in the digital platforms needed for AI to be successful.
The leader of the Enterprise Architecture team has asked for your suggestions on how to address
these concerns and how to manage the risks of a new architecture for the AI-first project.
Based on the TOGAF standard, which of the following is the best answer?
Options
Discussion
Kinda C. It lines up with how TOGAF wants stakeholder concerns and requirements documented in the Architecture Vision and Requirements Spec. D splits out groups, but C fits the actual ADM way of tracking risk and feedback. Not 100% sure though, curious what others think.
D
C imo. It’s the only one that fully links stakeholder analysis to both the Architecture Vision and the Architecture Requirements Spec, which is straight from TOGAF. B feels like a trap because it skips over the need to capture concerns and cultural issues in detail. Could be wrong but C fits what I’ve seen in other practice sets.
Probably C since TOGAF really emphasizes capturing stakeholder concerns and documenting them in the Architecture Vision. Also, tracking risk in the Architecture Requirements Spec lines up with the ADM process. Saw something similar in some exam reports, so feels right here. Anyone disagree?