From the course: Building Apps with Azure AI Language and Python
Orchestration workflow using Python
- [Instructor] We can also create orchestration workflow projects using Python. We first need to install the Azure AI Language Conversations library if we haven't done so. We then load our Azure configuration and specify the project name, model label, and deployment name. We then create our conversation authoring client. Next, we identify our project settings: the language, the project kind, which is now set to orchestration, the project name, description, and confidence threshold. We then use these project settings to create our project with the create project method.

We then define our project assets, which here are simply our intents. For each intent setting, we need to set the target project kind to either conversation or question answering, and to set the corresponding project name and deployment name. For Intent_CLU, the target project kind is conversation, and we specify our CLU project name and deployment name. For Intent_QA, we set the target project kind to question answering and specify our question answering project; question answering does not need a deployment name in its settings. We can then use the begin import project method to import our project assets and define the project. If we check Language Studio, we can see the two intents created, each properly connected to the right project.

Heading back to our code, we can now train the model. We provide the model label, the training mode, and the evaluation options: we set the kind to percentage and specify the testing and training split percentages. Once training is complete, we can view the results through code. We can also see the model performance through the studio: the overview, model performance, dataset details, dataset distribution, and confusion matrix. Heading back to our code, we can now deploy the model using the begin deploy project method. This method requires the project name, the deployment name, and the deployment details.
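As a rough sketch of the assets described above, the payload passed to the authoring client's begin import project method can be built as a plain dictionary. The project names, deployment name, intent names, and exact field names below are assumptions for illustration, modeled on the service's JSON import format; adapt them to your own projects.

```python
def build_orchestration_assets(clu_project, clu_deployment, qa_project):
    """Build an import payload for an orchestration workflow project.

    Sketch only: project/intent names and the schema version are
    placeholders, not values from the course exercise files.
    """
    return {
        "projectFileVersion": "2022-10-01-preview",  # assumed API version
        "stringIndexType": "Utf16CodeUnit",
        "metadata": {
            "projectKind": "Orchestration",
            "projectName": "OrchestrationProject",  # hypothetical name
            "language": "en",
            "description": "Routes utterances to CLU or question answering",
        },
        "assets": {
            "projectKind": "Orchestration",
            "intents": [
                {
                    # Route to a conversational language understanding
                    # project: both a project name and a deployment name
                    # are required
                    "category": "Intent_CLU",
                    "orchestration": {
                        "targetProjectKind": "Conversation",
                        "conversationOrchestration": {
                            "projectName": clu_project,
                            "deploymentName": clu_deployment,
                        },
                    },
                },
                {
                    # Route to a question answering project: no deployment
                    # name is needed in these settings
                    "category": "Intent_QA",
                    "orchestration": {
                        "targetProjectKind": "QuestionAnswering",
                        "questionAnsweringOrchestration": {
                            "projectName": qa_project,
                        },
                    },
                },
            ],
        },
    }
```

The dictionary would then be handed to the authoring client, e.g. `client.begin_import_project(project_name, build_orchestration_assets(...))`, followed by the train and deploy calls described above.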
Now we can start making requests. We need to create a conversation analysis client, and from here I created two helper functions. The first helper function, display result, displays the results on our screen. It first identifies which intent the utterance falls under, because CLU provides intent and entity details while question answering provides answer information. We have an if statement on the conversation result to check whether it is Intent_CLU or Intent_QA before printing the results. The second helper function, analyze utterance, calls the analyze conversation method. This method requires the task parameter, which contains the user's utterance along with the orchestration project name and deployment name.

Now let's run a few examples. The first utterance is properly identified as falling under CLU, and the book table intent is identified along with its entities. The second utterance is properly identified as falling under question answering, and the answer details are displayed. I also shared an example in code of how to export a project using the begin export project method and save the result into a JSON file so that you can import it again in the future. This file is also located in the Codespace.
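The two helpers above can be sketched roughly as follows. This is not the course's exact code: the task payload fields and the shape of the orchestration response are assumptions modeled on the service's JSON request and response format, and the helper names follow the transcript.

```python
def build_task(utterance, project_name, deployment_name):
    """Build the task parameter that analyze utterance would pass to
    analyze_conversation (field names assumed from the service's JSON)."""
    return {
        "kind": "Conversation",
        "analysisInput": {
            "conversationItem": {
                "id": "1",
                "participantId": "user",
                "text": utterance,
            }
        },
        "parameters": {
            "projectName": project_name,
            "deploymentName": deployment_name,
        },
    }


def display_result(result):
    """Print the details for whichever target project handled the
    utterance, and return the printed lines for convenience."""
    prediction = result["result"]["prediction"]
    top_intent = prediction["topIntent"]
    target = prediction["intents"][top_intent]

    lines = []
    if target["targetProjectKind"] == "Conversation":
        # CLU targets return intent and entity details
        clu = target["result"]["prediction"]
        lines.append(f"Top intent: {clu['topIntent']}")
        for entity in clu["entities"]:
            lines.append(f"Entity: {entity['category']} = {entity['text']}")
    elif target["targetProjectKind"] == "QuestionAnswering":
        # Question answering targets return answer details
        for answer in target["result"]["answers"]:
            lines.append(f"Answer: {answer['answer']}")

    for line in lines:
        print(line)
    return lines
```

In the actual app, `build_task(...)` would be passed to the conversation analysis client's `analyze_conversation` method and its response handed to `display_result`; here the response is treated as a plain dictionary so the routing logic is visible.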