What size limitations exist in the language detection API?
The document size must be under 5,120 characters. The size limit is per document, and each collection is restricted to 1,000 items (IDs).
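The limits above can be checked client-side before calling the API. A minimal sketch, assuming documents are dicts with `id` and `text` keys; the helper name is illustrative:

```python
# Pre-flight check for a language detection batch, based on the limits above:
# 5,120 characters per document and 1,000 documents (IDs) per request.
MAX_CHARS_PER_DOC = 5120
MAX_DOCS_PER_REQUEST = 1000

def validate_batch(documents):
    """Return a list of problems found in a batch of {'id': ..., 'text': ...} documents."""
    problems = []
    if len(documents) > MAX_DOCS_PER_REQUEST:
        problems.append(
            f"batch has {len(documents)} documents; limit is {MAX_DOCS_PER_REQUEST}"
        )
    for doc in documents:
        if len(doc["text"]) > MAX_CHARS_PER_DOC:
            problems.append(
                f"document {doc['id']} has {len(doc['text'])} characters; "
                f"limit is {MAX_CHARS_PER_DOC}"
            )
    return problems
```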
If multiple sentiments exist in a text with multiple sentences, how will the sentiment analyzer classify the text?
The analyzer returns a sentiment for each sentence as well as an overall document sentiment; when the sentences contain a mix of positive and negative sentiment, the overall document sentiment is classified as mixed.
What is the difference between question answering and conversational language?
Question Answering focuses on providing direct answers to user questions by creating a knowledge base of question-answer pairs.
CLU analyzes the intent and meaning behind a user's natural language utterance, identifying key information within the text to determine the overall action needed, rather than just providing a direct answer.
The two services are in fact complementary. You can build comprehensive natural language solutions that combine language understanding models and question answering knowledge bases.
What properties are required in the body of a call to the question answering API?
Property - Description
1. question - The question to send to the knowledge base.
2. top - The maximum number of answers to return.
3. scoreThreshold - The minimum confidence score for returned answers.
4. strictFilters - Limit answers to only those that contain the specified metadata.
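The properties above can be sketched as a JSON request body. A minimal example; the question text, values, and metadata name/value pair are illustrative, not from the source:

```python
import json

# Illustrative request body for a question answering call, using the
# four properties listed above. All values here are example data.
body = {
    "question": "How do I reset my password?",  # question to send to the knowledge base
    "top": 3,                                   # maximum number of answers to return
    "scoreThreshold": 50,                       # minimum confidence score for answers
    "strictFilters": [                          # limit answers to this metadata
        {"name": "category", "value": "accounts"}
    ],
}
payload = json.dumps(body)
```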
What two approaches can be taken to improve performance of a question answering model?
After creating and testing a knowledge base, you can improve its performance with active learning and by defining synonyms.
You want to create a knowledge base from an existing FAQ document. What should you do?
Create a new knowledge base, importing the existing FAQ document.
You can create a knowledge base from an existing document or web page.
How can you enable users to use your knowledge base through email?
You can create a bot for your published knowledge base and configure a channel for email communication.
What preconfigured features exist in natural language processing?
Summarization
Named entity recognition
PII detection
Key phrase extraction
Sentiment analysis
Language detection
Describe conversational language understanding (CLU)
CLU helps users build custom natural language understanding models that predict the overall intent and extract important information from incoming utterances. CLU requires the user to label (tag) data to teach the model how to predict intents and entities accurately.
What steps do you take to build a Conversational Language Understanding project using the REST API?
These tasks are performed asynchronously: for each step, you submit a request to the appropriate URI and then send another request to check the status of that job.
Each call must be authenticated by including the resource key in the Ocp-Apim-Subscription-Key header.
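The submit-then-poll pattern above can be sketched as follows. The status callable stands in for a real HTTP GET against the job's status URI, the key value is a placeholder, and the terminal state names are assumptions:

```python
import time

# Header required on every call to the service; the key value is a placeholder.
headers = {
    "Ocp-Apim-Subscription-Key": "<your-resource-key>",
    "Content-Type": "application/json",
}

def wait_for_job(get_status, poll_interval=1.0, max_polls=30):
    """Poll a job's status callable until it reports a terminal state.

    `get_status` stands in for an HTTP GET to the job's status URI.
    """
    for _ in range(max_polls):
        status = get_status()
        if status in ("succeeded", "failed", "cancelled"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError("job did not finish in time")
```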
You have an app named App1 that analyzes social media mentions and determines whether comments are positive or negative.
During testing, you notice that App1 returns a negative sentiment for customer feedback that also contains positive comments.
You need to ensure that App1 includes more granular information during the analysis.
What should you add to the API requests?
Add opinionMining=true to the request. This enables aspect-based sentiment analysis, which makes the results more granular, so that positive and negative opinions within a single sentence can be returned.
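A minimal sketch of adding the parameter to a sentiment request URL, assuming the v3.1 Text Analytics endpoint path; the resource name is a placeholder:

```python
from urllib.parse import urlencode

# Build a sentiment analysis URL with opinion mining enabled.
# Endpoint and path are illustrative; substitute your resource's endpoint.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
params = urlencode({"opinionMining": "true"})
url = f"{endpoint}/text/analytics/v3.1/sentiment?{params}"
```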
What range of scores of the Bilingual Evaluation Understudy (BLEU) indicates a high quality translation?
A BLEU score between 40 and 60 indicates a high-quality translation.
You are creating an orchestration workflow for Language Understanding.
You need to configure workflows for multiple languages. The solution must minimize administrative effort.
What should you create for each language?
separate workflow projects
Orchestration workflow projects do not support the multilingual option, so you need to create a separate workflow project for each language.
You are building an app that will enable users to create notes by using speech.
You need to recommend the Azure AI Speech service model to use. The solution must support noisy environments.
Which model should you recommend?
A custom speech-to-text model. You need to adapt the base model because a noisy environment, such as a factory floor, has ambient noise that the model should be trained on.
You are building an app that will recognize the intent and entities of user utterances in real-time.
You are evaluating the use of intent recognition with the Azure AI Speech and Azure AI Language services or simple pattern matching.
When should you use pattern matching?
Use pattern matching when you are only interested in strictly matching what the user said.
You are building a custom translation model.
You need to use bilingual training documents to teach the model your terminology and style.
Which rule should you follow?
Be liberal is correct: any in-domain human translation is better than machine translation. Add and remove documents as you go, and try to improve the Bilingual Evaluation Understudy (BLEU) score.
Be strict is incorrect: instead, compose the documents to be optimally representative of what you are going to translate in the future.
Be restrictive is also incorrect: a phrase dictionary is case-sensitive, and any word or phrase listed is translated in the way you specify. In many cases, it is better not to use a phrase dictionary and let the system learn.
You are building an app that uses Azure AI Services Document Translation.
You need to improve the quality of the translation for user-uploaded documents.
What should you ask the users to include when they upload a document?
The source language. If the language of the content in the source document is known, it is recommended to specify it in the request to get a better translation.
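A sketch of a Document Translation batch request body that specifies the source language instead of relying on auto-detection. The storage URLs are placeholders, and the inputs/source/targets field names are assumed from the batch request schema:

```python
import json

# Illustrative Document Translation batch request body. Supplying
# "language" on the source (here "en") avoids auto-detection.
body = {
    "inputs": [
        {
            "source": {
                "sourceUrl": "https://<account>.blob.core.windows.net/source",
                "language": "en",  # known source language
            },
            "targets": [
                {
                    "targetUrl": "https://<account>.blob.core.windows.net/target",
                    "language": "fr",
                }
            ],
        }
    ]
}
payload = json.dumps(body)
```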
You are building a multilingual conversational app by using Conversational Language Understanding (CLU), part of Azure AI Language service.
You create a CLU model that will serve multiple languages.
You need to optimize the performance of the model. The solution must minimize development effort.
What should you do?
Add utterances for languages that are performing poorly in the model.
With CLU, there is no need to use multiple projects for a model. For example, you can train a model in English and query it in German. Because there is no single project language, adding utterances for the languages that are performing poorly is the appropriate way to increase performance.
What are the guidelines for entering utterances in a Conversational Language Understanding model?
Use examples that reflect how users actually phrase their requests, vary the vocabulary, phrasing, and length of the utterances, and provide enough labeled examples for each intent.
What are the three different types of entities that can be used in a CLU model?
Learned entities, list entities, and prebuilt entities.
What is the limit of prebuilt components per entity in a CLU model?
You can have up to five prebuilt components per entity.
What are the steps to building an iterative CLU model?
Define the intents and entities, label example utterances, train the model, review its performance, deploy it, and then use predictions on real utterances to add or relabel examples and retrain.