From the course: Microsoft Azure Data Scientist Associate (DP-100) Cert Prep


Configure compute for a batch deployment


- When you're deploying models via Azure ML Studio, a couple of options include batch and real time. There's also the ability to deploy directly as a web service, which essentially abstracts away some of the complexity of packaging a model. When you're doing batch, you're often going to run it periodically. So let's say once a night you would go through and do credit card scoring. In the case of real time, you would run it 24/7. So you would have some endpoint deployed and continuously send requests to this real-time endpoint. So really there are a couple of key options, and then there's a peripheral option, which packages the entire service. Let's go ahead and take a look at how this would work. So if we go over to Azure ML Studio here, and we dive into the model prediction interface, one of the things that we see is that there's deploy to real time, deploy to batch, and deploy to web service. So this is only for models that are…
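As a rough sketch of what the batch option looks like outside the Studio UI, a batch deployment in Azure ML can also be defined in a CLI v2 YAML file that points a registered model at a compute cluster. The names below (the model, endpoint, and `cpu-cluster` compute target) are hypothetical placeholders, not values from this course:

```yaml
# Hypothetical Azure ML CLI v2 batch deployment sketch.
# Model, endpoint, and compute names are placeholders.
$schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
name: credit-scoring-batch
endpoint_name: credit-scoring-endpoint
model: azureml:credit-model:1
compute: azureml:cpu-cluster      # compute cluster the nightly scoring job runs on
resources:
  instance_count: 2               # nodes to spin up per batch job
max_concurrency_per_instance: 2   # parallel mini-batches per node
mini_batch_size: 10               # files handed to each scoring call
output_action: append_row
output_file_name: predictions.csv
```

A file like this would typically be applied with `az ml batch-deployment create --file deployment.yml`, which matches the periodic, job-style workflow described above: the compute cluster scales up when a scoring job is invoked and back down afterward, in contrast to a real-time endpoint that stays provisioned 24/7.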
