Microsoft Azure AI Fundamentals (AI-900) Practice Exam


Prepare for the Microsoft Azure AI Fundamentals certification with flashcards and multiple-choice questions. Enhance your understanding with helpful hints and explanations. Get ready for your certification success!



In Azure ML designer, where must you deploy a model to create a real-time inference pipeline?

  1. Azure Functions

  2. Azure Kubernetes Service (AKS)

  3. Azure Blob Storage

  4. Azure Virtual Machines

The correct answer is: Azure Kubernetes Service (AKS)

To create a real-time inference pipeline in Azure ML designer, you deploy the model to Azure Kubernetes Service (AKS). AKS is a managed Kubernetes service that simplifies deploying, managing, and scaling containerized applications, including machine learning models. On AKS, a model is deployed as a containerized web service that exposes a RESTful endpoint for real-time inference. This is essential when low latency and high availability are required: applications send data to the endpoint and receive predictions almost instantly. AKS also supports autoscaling, so resources adjust dynamically with demand and varying workloads are handled efficiently.

The other options do not fit this scenario. Azure Functions offers serverless, event-driven code execution but is not designed to host complex machine learning models that need robust scaling and orchestration. Azure Blob Storage stores data and model artifacts but provides no compute infrastructure for real-time inference. Azure Virtual Machines can host applications, but they lack the container orchestration that AKS provides, making them less efficient for deploying models in a scalable, responsive way.
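For context, the sketch below shows roughly what designer automates behind the scenes when a real-time inference pipeline is deployed to AKS, using the Azure Machine Learning Python SDK (azureml-core, v1). It is a minimal sketch, not the exam's required method; the compute name, model name, scoring script, and environment file are illustrative assumptions rather than values taken from the question.

    from azureml.core import Workspace, Model, Environment
    from azureml.core.compute import AksCompute, ComputeTarget
    from azureml.core.model import InferenceConfig
    from azureml.core.webservice import AksWebservice

    ws = Workspace.from_config()

    # Provision an AKS cluster as an inference compute target (name is illustrative)
    prov_config = AksCompute.provisioning_configuration(vm_size="Standard_DS3_v2", agent_count=3)
    aks_target = ComputeTarget.create(ws, name="aks-inference", provisioning_configuration=prov_config)
    aks_target.wait_for_completion(show_output=True)

    # Reference the registered model plus a scoring script and environment
    # ("my-designer-model", score.py, and env.yml are assumed to exist)
    model = Model(ws, name="my-designer-model")
    env = Environment.from_conda_specification(name="score-env", file_path="env.yml")
    inference_config = InferenceConfig(entry_script="score.py", environment=env)

    # Real-time endpoint configuration: autoscaling plus CPU/memory per replica
    deployment_config = AksWebservice.deploy_configuration(
        autoscale_enabled=True, cpu_cores=1, memory_gb=2)

    # Deploy the model to AKS as a containerized web service with a REST endpoint
    service = Model.deploy(
        workspace=ws,
        name="realtime-endpoint",
        models=[model],
        inference_config=inference_config,
        deployment_config=deployment_config,
        deployment_target=aks_target,
    )
    service.wait_for_deployment(show_output=True)
    print(service.scoring_uri)

Once deployed, the service's scoring_uri can be called over HTTPS to receive predictions immediately, which is the real-time behavior the question is testing.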