📖 About this Domain
This domain covers data preparation and modeling within the Microsoft Fabric ecosystem. You will ingest and transform data using dataflows and notebooks, then model that data and serve it for analysis through semantic models.
🎓 What You Will Learn
- You will learn to clean, transform, and load data into a lakehouse or data warehouse using Power Query or Spark.
- You will learn to design and build semantic models, including defining tables, relationships, and hierarchies.
- You will learn to write Data Analysis Expressions (DAX) to create calculated tables, columns, and measures.
- You will learn to optimize semantic model performance using Direct Lake mode, aggregations, and query caching.
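Aggregations are easier to grasp with a concrete sketch. The plain-Python example below (the table and column names are made up, and this is a conceptual analogy, not a Fabric or Power BI API) shows the core idea behind an aggregation table: precompute coarse-grained totals once so that summary queries are answered from a small aggregate instead of scanning every detail row.

```python
from collections import defaultdict

# Hypothetical fact table: one row per sale (detail grain).
fact_sales = [
    {"date": "2024-01-01", "product": "A", "amount": 100.0},
    {"date": "2024-01-01", "product": "B", "amount": 50.0},
    {"date": "2024-01-02", "product": "A", "amount": 75.0},
]

# Build the aggregation table once: total amount per product.
agg_by_product = defaultdict(float)
for row in fact_sales:
    agg_by_product[row["product"]] += row["amount"]

def total_sales(product):
    """Answer a summary query from the small aggregate, not the fact table."""
    return agg_by_product.get(product, 0.0)

print(total_sales("A"))  # 175.0
```

In a real model the engine decides transparently whether a query can be satisfied by an aggregation table; the sketch only illustrates why that substitution is cheap.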
🛠️ Skills You Will Build
- Implementing data ingestion and transformation pipelines using Dataflow Gen2 and Spark notebooks.
- Designing and implementing relational data models based on star schema principles.
- Writing complex DAX queries and calculations to enhance semantic models.
- Tuning semantic model performance for scalability and fast query response.
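The star schema idea behind these skills can be sketched in a few lines. In the plain-Python analogy below (hypothetical tables; a real model expresses this with relationships, not code), a fact table holds keys and numeric measures, a dimension table holds descriptive attributes, and a query follows the one-to-many relationship from fact to dimension to group measures by an attribute:

```python
from collections import defaultdict

# Hypothetical star schema: fact rows carry keys and measures;
# the dimension carries descriptive attributes keyed by customer_key.
fact_orders = [
    {"customer_key": 1, "sales": 120.0},
    {"customer_key": 2, "sales": 80.0},
    {"customer_key": 1, "sales": 40.0},
]
dim_customer = {
    1: {"name": "Contoso", "region": "West"},
    2: {"name": "Fabrikam", "region": "East"},
}

# Resolve the one-to-many relationship: each fact row looks up its
# dimension row, then measures are aggregated by a dimension attribute.
sales_by_region = defaultdict(float)
for row in fact_orders:
    region = dim_customer[row["customer_key"]]["region"]
    sales_by_region[region] += row["sales"]

print(dict(sales_by_region))  # {'West': 160.0, 'East': 80.0}
```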
💡 Top Tips to Prepare
- Get hands-on practice creating and configuring Dataflow Gen2 to ingest data into a Fabric lakehouse.
- Master DAX fundamentals, including evaluation context and time intelligence functions, through practical exercises.
- Focus on understanding the differences and use cases for import, DirectQuery, and Direct Lake storage modes.
- Practice implementing row-level security (RLS) and object-level security (OLS) to secure semantic models.
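Row-level security is worth internalizing with a minimal model. In a semantic model, RLS is defined as DAX filter expressions attached to roles; the plain-Python analogy below (role names and data are hypothetical) shows only the effect: each role sees just the rows its filter admits.

```python
# Hypothetical rows; real RLS filters are DAX expressions, not lambdas.
sales = [
    {"region": "West", "amount": 100},
    {"region": "East", "amount": 200},
    {"region": "West", "amount": 50},
]

# Each role maps to a predicate, analogous to a DAX filter on a table.
role_filters = {
    "WestManagers": lambda row: row["region"] == "West",
    "AllReaders": lambda row: True,
}

def visible_rows(role):
    """Return only the rows the role's filter permits."""
    return [r for r in sales if role_filters[role](r)]

print(sum(r["amount"] for r in visible_rows("WestManagers")))  # 150
```

Note that OLS works differently: it hides entire tables or columns from a role rather than filtering rows.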
📖 About this Domain
This domain covers the end-to-end lifecycle of a Microsoft Fabric analytics solution. You will focus on planning Fabric capacity and workspaces, implementing governance and security, and managing the solution's operational health.
🎓 What You Will Learn
- Plan a Fabric environment by recommending capacity, workspace configurations, and security strategies.
- Implement and manage Fabric workspaces, including items, Git integration, and deployment pipelines.
- Establish governance and security by managing access, sensitivity labels, and data privacy.
- Monitor and optimize the solution by analyzing capacity metrics and troubleshooting performance issues.
🛠️ Skills You Will Build
- Architecting end-to-end analytics solutions by selecting appropriate Fabric capacity and workspace settings.
- Implementing CI/CD for Fabric artifacts using Git integration and deployment pipelines.
- Administering Fabric environments, including capacity management, monitoring, and security enforcement.
- Optimizing solution performance through capacity metrics analysis and query troubleshooting.
💡 Top Tips to Prepare
- Master Fabric capacity concepts, including SKUs, scaling, and monitoring with the Capacity Metrics app.
- Practice workspace administration, including creating workspaces, managing roles, and configuring Git integration.
- Understand the different layers of Fabric security, from workspace roles to row-level security (RLS).
- Familiarize yourself with deployment pipelines for managing development, test, and production environments.
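A quick way to cement the workspace-role layer is to think of each role as a set of capabilities. Fabric's built-in roles are Admin, Member, Contributor, and Viewer; the mapping below is a deliberately simplified sketch (the capability names are illustrative, not the actual Fabric permission matrix, which is richer and documented by Microsoft):

```python
# Simplified sketch of Fabric workspace roles. The capability sets here
# are illustrative only; consult official documentation for the real matrix.
ROLE_CAPABILITIES = {
    "Admin": {"view", "edit", "publish", "manage_access"},
    "Member": {"view", "edit", "publish"},
    "Contributor": {"view", "edit"},
    "Viewer": {"view"},
}

def can(role, action):
    """Check whether a workspace role includes a given capability."""
    return action in ROLE_CAPABILITIES.get(role, set())

print(can("Viewer", "edit"))  # False
```

The point to remember for the exam is the layering: workspace roles govern access to items, while item permissions and RLS/OLS refine what a user sees inside them.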
📖 About this Domain
This domain covers querying data stored within Microsoft Fabric. You will use various query languages and tools to interact with lakehouse and KQL database assets. The focus is on performing initial data exploration and analysis to understand data characteristics.
🎓 What You Will Learn
- You will learn to query the lakehouse SQL analytics endpoint using T-SQL to retrieve and analyze data.
- You will learn to query a KQL database using Kusto Query Language for real-time data analysis.
- You will learn to perform exploratory data analysis using Fabric notebooks with Spark.
- You will learn to use tools like Data Wrangler for interactive data profiling and cleaning.
🛠️ Skills You Will Build
- Build proficiency in writing T-SQL queries to filter, join, and aggregate data in a lakehouse.
- Develop the ability to construct KQL queries for time-series analysis and pattern detection.
- Gain skills in using PySpark or Spark SQL within Fabric notebooks for programmatic data investigation.
- Master the use of Data Wrangler to visually explore data distributions and apply cleaning steps.
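The profiling skill in the last bullet has a simple core. As a conceptual sketch only (plain Python with made-up data; Data Wrangler does this interactively over a pandas or Spark DataFrame), profiling a column means counting its values, its missing entries, and its value distribution:

```python
from collections import Counter

# Hypothetical column of raw values, including missing entries.
city = ["Seattle", "Portland", None, "Seattle", "", "Portland", "Seattle"]

def profile(values):
    """Summarize a column the way a profiling tool would: size, missing, distribution."""
    missing = sum(1 for v in values if v in (None, ""))
    distribution = Counter(v for v in values if v not in (None, ""))
    return {"count": len(values), "missing": missing,
            "distribution": dict(distribution)}

print(profile(city))
# {'count': 7, 'missing': 2, 'distribution': {'Seattle': 3, 'Portland': 2}}
```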
💡 Top Tips to Prepare
- Practice T-SQL queries specifically within the context of the lakehouse SQL analytics endpoint.
- Focus on understanding the KQL query structure, especially the pipe operator and common tabular operators.
- Know the specific use cases for when to use a Fabric notebook versus Data Wrangler for data exploration.
- Familiarize yourself with creating and running queries using KQL querysets in the Fabric portal.
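The pipe-operator mental model mentioned above is the key to reading KQL: a query names a table, then chains tabular operators with `|`, each stage consuming the previous stage's rows (for example, `Events | where Level == "Error" | summarize count() by Source`). As a conceptual analogy only (this is Python, not KQL, and the event data is made up), the same pipeline looks like:

```python
from collections import Counter

events = [
    {"level": "Error", "source": "api"},
    {"level": "Info", "source": "api"},
    {"level": "Error", "source": "web"},
    {"level": "Error", "source": "api"},
]

# Stage 1, like `| where level == "Error"`: keep only matching rows.
errors = [e for e in events if e["level"] == "Error"]

# Stage 2, like `| summarize count() by source`: aggregate by a column.
count_by_source = Counter(e["source"] for e in errors)

print(dict(count_by_source))  # {'api': 2, 'web': 1}
```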
📖 About this Domain
This domain focuses on the design, development, and optimization of semantic models within Microsoft Fabric. You will work with Power BI semantic models (formerly called datasets), which serve as the semantic layer for reporting and analysis. Key activities include data modeling, implementing business logic with DAX, and ensuring model performance.
🎓 What You Will Learn
- You will learn to design and build tabular models by defining tables, relationships, and data types.
- You will learn to implement complex business logic using Data Analysis Expressions (DAX) for measures and calculated columns.
- You will learn to optimize semantic model performance by choosing appropriate storage modes like Direct Lake and applying best practices.
- You will learn to manage the lifecycle of semantic models, including deployment, security, and monitoring.
🛠️ Skills You Will Build
- Create scalable semantic models by implementing star schema designs and configuring relationships.
- Write and debug complex DAX queries, including time intelligence functions and calculation groups.
- Implement robust security models using row-level security (RLS) and object-level security (OLS).
- Troubleshoot and tune model performance using tools like DAX Studio and Performance Analyzer.
💡 Top Tips to Prepare
- Master DAX evaluation context, as it is fundamental to writing correct and efficient calculations.
- Gain hands-on experience with different storage modes, understanding the trade-offs between Direct Lake, Import, and DirectQuery.
- Practice implementing calculation groups to reduce measure proliferation and enable dynamic formatting.
- Utilize external tools like Tabular Editor for advanced modeling tasks and to improve development efficiency.
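Evaluation context is the most common source of wrong DAX results, so it pays to internalize a mental model. As a rough analogy (plain Python over hypothetical data; real measures are DAX expressions evaluated by the engine), a measure is one definition that is re-evaluated under whatever filter context each cell of a visual supplies:

```python
sales = [
    {"year": 2023, "amount": 100},
    {"year": 2024, "amount": 150},
    {"year": 2024, "amount": 50},
]

def total_sales(filter_context):
    """A 'measure': the same definition, re-evaluated per filter context."""
    return sum(r["amount"] for r in sales
               if all(r[k] == v for k, v in filter_context.items()))

# One definition, different results under different filter contexts --
# just as one DAX measure shows different values in different visual cells.
print(total_sales({}))              # 300 (no filters: grand total)
print(total_sales({"year": 2024}))  # 200 (filtered to 2024)
```

Row context (iterating one row at a time, as in calculated columns or iterator functions) is the other half of the topic; understanding how `CALCULATE` turns row context into filter context is the step most exam questions probe.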