pepper.language.process_utterance module
pepper.language.process_utterance.analyze_statement(speaker, words, chat_id, chat_turn, viewed_objects)
    This function analyzes a statement by extracting an RDF triple. A complete triple is stored in the brain, while an incomplete one should raise an error and trigger a request for clarification. #TODO
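The extraction-and-completeness check described above can be sketched as follows. This is a simplified illustration, not the actual pepper implementation: the `extract_triple` helper, the `IncompleteTripleError` exception, and the naive word-position mapping are all assumptions made for demonstration.

```python
class IncompleteTripleError(Exception):
    """Raised when a statement does not yield a full subject-predicate-object triple.

    In the real system this condition would trigger asking the speaker
    for clarification (the #TODO noted in the docstring)."""


def extract_triple(words):
    """Naively map a tokenized statement to an RDF-style triple.

    Assumes the simplest possible layout: first token is the subject,
    second is the predicate, and the remainder is the object."""
    if len(words) < 3:
        # Incomplete triple: signal that clarification is needed.
        raise IncompleteTripleError("missing subject, predicate, or object")
    return {
        "subject": words[0],
        "predicate": words[1],
        "object": " ".join(words[2:]),
    }
```

A complete triple such as `extract_triple("lenka likes ice cream".split())` returns all three fields, while a fragment like `["hello"]` raises the exception instead of storing a partial fact.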
pepper.language.process_utterance.analyze_utterance(utterance, speaker, chat_id, chat_turn, brain, viewed_objects)
    When the microphone produces a speech-to-text transcript, this function is the first to be called. After the utterance is classified and processed, we get a JSON template used either to query the brain or to store new information in it. The brain returns a response after the template is queried or stored.
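The overall flow described above (classify the utterance, build a JSON template, hand it to the brain, get a response back) can be sketched end to end. Everything here is an illustrative assumption rather than pepper's real API: the `Brain` class is a toy in-memory stand-in, and the question-word set and template shape are invented for the example.

```python
import json

# Assumed set of question-initial words; the real classifier may differ.
QUESTION_WORDS = {"who", "what", "where", "when", "why", "how",
                  "do", "does", "is", "are"}


class Brain:
    """Toy stand-in for the brain: stores triples, answers queries by subject."""

    def __init__(self):
        self.facts = []

    def process(self, template):
        data = json.loads(template)
        if data["utterance_type"] == "statement":
            # Storing path: remember the triple.
            self.facts.append(data["triple"])
            return {"type": "statement_stored"}
        # Querying path: look up stored facts about the subject.
        matches = [f for f in self.facts
                   if f["subject"] == data["triple"]["subject"]]
        return {"type": "question_answer", "answers": matches}


def analyze_utterance_sketch(utterance, brain):
    """Classify, pack a JSON template, and let the brain query or store it."""
    words = utterance.lower().split()
    if words[0] in QUESTION_WORDS:
        template = json.dumps({"utterance_type": "question",
                               "triple": {"subject": words[-1]}})
    else:
        template = json.dumps({"utterance_type": "statement",
                               "triple": {"subject": words[0],
                                          "predicate": words[1],
                                          "object": " ".join(words[2:])}})
    return brain.process(template)
```

Feeding a statement first and then a question about the same subject shows both branches: the statement is stored, and the question is answered from the stored fact.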
pepper.language.process_utterance.classify_and_analyze_question(speaker, words, chat_id, chat_turn, viewed_objects)
    If an utterance is classified as a question, this function is called to determine whether it is a wh_ question or a verb question (depending on the first word). Based on this, we extract an RDF triple of subject-predicate-object and pack it into the template. The template is a JSON document formatted to query the brain.
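The wh_/verb split on the first word might look like the sketch below. The word lists, the `"?"` placeholder for the unknown subject, and the template layout are assumptions for illustration; pepper's actual format is not documented here.

```python
import json

# Assumed first-word lists for the two question types.
WH_WORDS = {"who", "what", "where", "when", "why", "which", "how"}
VERB_WORDS = {"do", "does", "did", "is", "are", "can"}


def classify_question(words):
    """Decide wh_ vs verb question from the first word alone."""
    first = words[0].lower()
    if first in WH_WORDS:
        return "wh_question"
    if first in VERB_WORDS:
        return "verb_question"
    return "unknown"


def build_query_template(words):
    """Pack a naive subject-predicate-object triple into a JSON query template.

    E.g. "who likes cake": the subject is unknown ("?"), the predicate is
    the second word, and the object is the rest."""
    triple = {"subject": "?",
              "predicate": words[1],
              "object": " ".join(words[2:])}
    return json.dumps({"question_type": classify_question(words),
                       "triple": triple})
```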
pepper.language.process_utterance.classify_and_process_utterance(utterance, speaker, chat_id, chat_turn, viewed_objects)
    Depending on the first word, the utterance is classified as either a question or a statement and then processed accordingly.
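The first-word dispatch can be reduced to a few lines. The question-word set below is an assumption; the real classifier may use a different list or more context.

```python
# Assumed set of words that mark the start of a question.
QUESTION_WORDS = {"who", "what", "where", "when", "why", "how",
                  "do", "does", "did", "is", "are", "can"}


def classify_utterance(utterance):
    """Label an utterance "question" or "statement" from its first word."""
    first = utterance.strip().split()[0].lower()
    return "question" if first in QUESTION_WORDS else "statement"
```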
pepper.language.process_utterance.reply(response, speaker)
    This function is called to generate the response to be said aloud by Leolani. It takes the response generated by the brain as input and, based on its type (response to a question or to a statement), triggers different reply functions.
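Dispatching on the response type might be sketched as below. The `"type"` and `"answers"` keys and the two reply helpers are illustrative assumptions about the brain's response shape, not pepper's actual contract.

```python
def reply_to_question(response):
    """Verbalize answers returned from a brain query (assumed list of strings)."""
    answers = response.get("answers", [])
    if not answers:
        return "I do not know"
    return "I know that " + " and ".join(answers)


def reply_to_statement(response):
    """Acknowledge that a new fact was stored."""
    return "thanks, I will remember that"


def reply_sketch(response, speaker):
    """Pick a reply function based on the response type, then address the speaker."""
    if response.get("type") == "question_answer":
        text = reply_to_question(response)
    else:
        text = reply_to_statement(response)
    return "{}, {}".format(speaker, text)
```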