
Merge branch 'dev'

Conflicts:
	neurons/Neurone.py
	test.py
nico 8 years ago
parent commit 9cc19c5fad

+ 8 - 5
Docs/brain/brain.md

@@ -6,10 +6,11 @@ Brain is the link between input and output actions.
 
 An input action can be:
 - **an order:** Something that has been spoke out loud by the user.
-- **an event:** A date
+- **an event:** A date or a frequency (e.g. repeat each morning at 8:30)
 
 An output action is
-- a neuron: A module that will perform some actions like simply talking, run a script, run a command or a complex Ansible playbook.
+- a neuron: A module or plugin that will perform some actions, like simply talking, running a script, running a command or a complex Ansible playbook.
+- a list of neurons
 
 Brain is expressed in YAML format (see YAML Syntax) and have a minimum of syntax, which intentionally tries to not be a programming language or script, 
 but rather a model of a configuration or a process.
@@ -26,7 +27,7 @@ Let's look a basic brain:
       - order: "say hello"
 ```
 
-Let's break this down in sections so we can understand how these files are built and what each piece means.
+Let's break this down in sections so we can understand how the file is built and what each piece means.
 
 The file starts with:
 ```
@@ -51,15 +52,17 @@ neurons:
 ```
 
 Neurons are modules that will be executed when the input action is triggered.
-Some neuron need parameters that can be passed as argument following the syntax bellow:
+Some neurons need parameters that can be passed as arguments following the syntax below:
 ```
 neurons:
     - neuron_name:
         parameter1: "value1"
         parameter2: "value2"
 ```
+Note here that parameters are indented with one tabulation.
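+
+For example, using the "say" neuron with its **message** parameter (the value is just an illustration):
+```
+neurons:
+    - say:
+        message: "Hello sir"
+```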
 
-In this example, the neuron say will make Jarvis speak out loud the phrase in parameter **message**.
+
+In this example, the neuron called "say" will make Jarvis speak out loud the phrase given in the **message** parameter.
 
 The last part, called **when** is a list of input action. This last works exactly the same way as neurons. You must place here at least one action.
 ```

+ 110 - 0
Docs/default_settings.md

@@ -0,0 +1,110 @@
+# JARVIS settings
+
+This part of the documentation deals with the main configuration of JARVIS. 
+This configuration is a file named settings.yml, placed at the root of the project tree.
+
+The syntax used is YAML.
+
+# General defaults
+
+In the settings.yml, the following settings are tunable:
+
+#### trigger
+
+The current hotword (also called a wake word or trigger word) detector is based on [Snowboy](https://snowboy.kitt.ai/).
+Common uses of hotwords include Alexa on Amazon Echo, OK Google on some Android devices and Hey Siri on iPhones.
+With the JARVIS project, you can set the hotword you want. You can create your own magic word by connecting to [Snowboy](https://snowboy.kitt.ai/) 
+and then downloading the trained model file.
+
+Once downloaded, place the file in **trigger/snowboy/resources**.
+
+Then specify the name of the Snowboy model using the following syntax:
+```
+trigger:
+  name: "my_model_name.pmdl"
+```
+
+#### default_speech_to_text
+
+A Speech To Text (STT) is an engine used to translate what you say into text that can be processed by the JARVIS core. 
+By default, JARVIS uses the Google STT engine.
+
+You must provide an engine name in this variable following the syntax below:
+```
+default_speech_to_text: "stt_name"
+```
+
+Available STT engines for JARVIS are:
+- google
+- bing
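+
+E.g, keeping the default engine:
+```
+default_speech_to_text: "google"
+```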
+
+#### default_text_to_speech
+A Text To Speech (TTS) is an engine used to translate written text into speech, as an audio stream.
+By default, JARVIS uses the Pico2wave TTS engine.
+
+You must provide a TTS engine name in this variable following the syntax below:
+```
+default_text_to_speech: "tts_name"
+```
+
+Available TTS engines for JARVIS are:
+- pico2wave
+- voxygen
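+
+E.g, keeping the default engine:
+```
+default_text_to_speech: "pico2wave"
+```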
+
+#### random_wake_up_answers
+When JARVIS detects your trigger/hotword/magic word, he lets you know that he's operational and now waiting for your order by randomly answering 
+one of the sentences provided in the variable random_wake_up_answers.
+
+This variable must contain a list of strings following the syntax below:
+```
+random_wake_up_answers:
+  - "You sentence"
+  - "Another sentence"
+```
+
+E.g
+```
+random_wake_up_answers:
+  - "Yes sir?"
+  - "I'm listening"
+  - "Sir?"
+  - "What can I do for you?"
+  - "Listening"
+  - "Yes?"
+```
+
+#### speech_to_text
+Speech to text configuration.
+Each STT has its own configuration. This configuration is passed as arguments following the syntax below:
+```
+speech_to_text:
+  - stt_name:
+      parameter_name: "value"
+```
+
+E.g
+```
+speech_to_text:
+  - google:
+      language: "fr-FR"
+  - bing
+```
+
+Please refer to the STT documentation for the available parameters of each supported STT.
+
+#### text_to_speech
+Text to speech configuration.
+Each TTS has its own configuration. This configuration is passed as arguments following the syntax below:
+```
+text_to_speech:
+  - tts_name:
+      parameter_name: "value"
+```
+
+E.g
+```
+text_to_speech:
+  - pico2wave:
+      language: "fr-FR"
+  - voxygen:
+      language: "fr"
+      voice: "michel"
+```

+ 30 - 18
Docs/dev_env_install.md

@@ -1,35 +1,47 @@
 # Dev environment installation
 
-This documentation deals with the manual installation of components for developement of JARVIS.
+This documentation explains, step by step, how to manually deploy JARVIS.
 
-## System tools
-## Audio tools
+Tested environment:
+- Ubuntu 16.04
+
+
+
+## Prerequisite
+
+### Packages installation
+On an Ubuntu distribution:
 ```
-sudo apt-get install libsmpeg0
+sudo apt-get install python-pip python-dev libsmpeg0 libtts-pico-utils flac
 ```
 
-## Text to spreech engine
-Install pico2wave on Linux
+### Python lib
+
+Install libs
 ```
-apt-get install libtts-pico-utils sudo apt-get install libsmpeg0
+pip install SpeechRecognition
+pip install pyaudio
+pip install ansible
+pip install pygame
+pip install python2-pythondialog
+pip install jinja2
 ```
 
+### Test your env
+Run the following command to capture audio from your microphone:
+```
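+# the exact command is not given in this doc; an assumption that works on Ubuntu is ALSA's arecord
+arecord -f cd -d 5 test.wav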
 
-install audio utils
 ```
-apt-get install flac
+
+Then play the recorded audio file:
 ```
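+# again an assumption: play back the recording with ALSA's aplay
+aplay test.wav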
 
-## Python lib
-You must have python pip to install library
 ```
 
+## Installation
+
+Clone the project
 ```
-Then install libs
+git clone <TODO set github address>
 ```
-pip install SpeechRecognition
-pip install pyaudio
-pip install ansible
-pip install pygame
-pip install python2-pythondialog
-```
+

+ 2 - 1
README.md

@@ -2,7 +2,8 @@
 
 JARVIS is voice controlled personal assistant. 
 
-TODO: insert video demo
+TODO: insert video demo EN
+TODO: insert video demo FR
 
 
 ## Installation

+ 14 - 2
brain.yml

@@ -4,7 +4,7 @@
       - say:
           message:
             - "Bonjour monsieur"
-            - "Bonjour Nicolas"
+            - "Bonjour maitre"
       - sleep:
           seconds: 1
       - say:
@@ -18,6 +18,7 @@
       - say:
           message:
             - "42"
+          tts: "voxygen"
     when:
       - order: "sens de la vie"
 
@@ -32,10 +33,21 @@
 
   - name: "Say local date"
     neurons:
-      - systemdate
+      - systemdate:
+          say_template:
+            - "Il est {{ hours }} heures et {{ minutes }} minutes"
+          tts: "voxygen"
     when:
       - order: "quelle heure"
 
+  - name: "Say local date from template"
+    neurons:
+      - systemdate:
+          file_template: fr_systemdate_template_example.j2
+          tts: "voxygen"
+    when:
+      - order: "quelle heure 2"
+
   - name: "Close rolling shutter"
     neurons:
       - command: "curl http://192.168.0.22:5000/fermeture -d \"password=monpass\" -X POST"

+ 76 - 7
neurons/Neurone.py

@@ -1,8 +1,23 @@
 import importlib
+from jinja2 import Template
+import random
+import os.path
 
 from core import ConfigurationManager
 
 
+class NoTemplateException(Exception):
+    pass
+
+
+class MultipleTemplateException(Exception):
+    pass
+
+
+class TemplateFileNotFoundException(Exception):
+    pass
+
+
 class TTSModuleNotFound(Exception):
     pass
 
@@ -12,22 +27,67 @@ class TTSNotInstantiable(Exception):
 
 
 class Neurone:
-    def __init__(self, tts=None):
+    def __init__(self, **kwargs):
         # get the name of the plugin
         # print self.__class__.__name__
         # load the tts from settings
-        self.tts = tts
-        if tts is None:
+        # get the tts if specified, otherwise use the default
+        tts = kwargs.get('tts', None)
+        if tts is not None:
+            self.tts = tts
+        else:
             self.tts = ConfigurationManager.get_default_text_to_speech()
         # get tts args
         self.tts_args = ConfigurationManager.get_tts_args(self.tts)
+        # capitalize the name for loading the module
         self.tts = self.tts.capitalize()
-        print "tts args: %s" % str(self.tts_args)
-
-        # instantiate the TTS
+        # load the module
         self.tts_instance = self._get_tts_instance()
 
-    def say(self, message):
+    def say(self, message, kwargs):
+        # get the tts if specified, otherwise keep the default
+        tts = kwargs.get('tts', None)
+        if tts is not None:
+            self.tts = tts
+            self.tts_args = ConfigurationManager.get_tts_args(self.tts)
+            # reload the TTS instance so the overridden engine is actually used
+            self.tts = self.tts.capitalize()
+            self.tts_instance = self._get_tts_instance()
+
+        # check if it's a single message or a list of messages
+        if isinstance(message, list):
+            # then we pick randomly one message
+            message = random.choice(message)
+
+        # check if there is a template associated with the output message
+        say_template = kwargs.get('say_template', None)
+        # check if there is a template file associated with the output message
+        file_template = kwargs.get('file_template', None)
+
+        # check that the user provides either a say_template or a file_template, not both
+        if say_template is not None and file_template is not None:
+            raise MultipleTemplateException("You must provide a say_template or a file_template, not both")
+
+        # check that one of the two options is set
+        if isinstance(message, dict):
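+            # a dict message carries template variables, so a template is then mandatory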
+            if (say_template is not None and file_template is None) or \
+                    (say_template is None and file_template is not None):
+                if say_template is not None:    # the user chose the say_template option
+                    if isinstance(say_template, list):
+                        # then we pick randomly one template
+                        say_template = random.choice(say_template)
+                    t = Template(say_template)
+                    message = t.render(**message)
+                if file_template is not None:   # the user chose the file_template option
+                    real_file_template_path = "templates/%s" % file_template
+                    if os.path.isfile(real_file_template_path):
+                        # load the content of the file as template
+                        t = Template(self._get_content_of_file(real_file_template_path))
+                        message = t.render(**message)
+                    else:
+                        raise TemplateFileNotFoundException("Template file %s not found in templates folder"
+                                                            % real_file_template_path)
+
+            else:
+                raise NoTemplateException("You must specify a say_template or a file_template", message.keys())
+
         # here we use the tts to make jarvis talk
         # the module is imported on fly, depending on the selected tts from settings
         self.tts_instance.say(words=message, **(self.tts_args if self.tts_args is not None else {}))
@@ -45,3 +105,12 @@ class Neurone:
             return klass()
         else:
             raise TTSNotInstantiable("TTS module %s not instantiable" % self.tts)
+
+    @staticmethod
+    def _check_file_exist(real_file_template):
+        return os.path.isfile(real_file_template)
+
+    @staticmethod
+    def _get_content_of_file(real_file_template_path):
+        with open(real_file_template_path, 'r') as content_file:
+            return content_file.read()
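
For reference, a minimal standalone sketch (not part of this commit) of what the new template rendering does with a systemdate-style message dict:
```
from jinja2 import Template

# the neuron passes its variables as a dict instead of a plain string
message = {"hours": "8", "minutes": "30"}
# say_template taken from the brain.yml example above
t = Template("Il est {{ hours }} heures et {{ minutes }} minutes")
print t.render(**message)  # -> Il est 8 heures et 30 minutes
```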

+ 2 - 11
neurons/say/say.py

@@ -1,5 +1,4 @@
 from neurons import Neurone
-import random
 
 
 class NoMessageException(Exception):
@@ -8,19 +7,11 @@ class NoMessageException(Exception):
 
 class Say(Neurone):
     def __init__(self, *args , **kwargs):
-        # get the tts if is specified
-        tts = kwargs.get('tts', None)
-        Neurone.__init__(self, tts=tts)
-
+        Neurone.__init__(self, **kwargs)  # forward kwargs so a per-neuron "tts" reaches the base class
         # get message to spell out loud
         message = kwargs.get('message', None)
         # user must specify a message
         if message is None:
             raise NoMessageException("You must specify a message string or a list of messages as parameter")
         else:
-            # check if it's a single message or multiple one
-            if isinstance(message, list):
-                # then we play randomly one message
-                self.say(random.choice(message))
-            else:
-                self.say(message)
+            self.say(message, kwargs)

+ 10 - 4
neurons/systemdate/systemdate.py

@@ -5,9 +5,15 @@ from neurons import Neurone
 
 
 class Systemdate(Neurone):
-    def __init__(self):
-        Neurone.__init__(self)
+    def __init__(self, *args, **kwargs):
+        Neurone.__init__(self, **kwargs)
+
+        # get hours and minutes
         hour = time.strftime("%H")
         minute = time.strftime("%M")
-        message = "Il est %s heure %s" % (hour, minute)
-        self.say(message)
+
+        message = {
+            "hours": hour,
+            "minutes": minute
+        }
+        self.say(message, kwargs)

+ 1 - 0
neurons/systemdate/template/fr_template1.j2

@@ -0,0 +1 @@
+il est {{ hours }} heures et {{ minutes }} minutes

+ 1 - 0
neurons/systemdate/template/fr_template2.j2

@@ -0,0 +1 @@
+{{ hours }} heures et {{ minutes }} minutes précisément

+ 1 - 0
templates/fr_systemdate_template_example.j2

@@ -0,0 +1 @@
+ma montre indique {{ hours }} heures et {{ minutes }} minutes

+ 0 - 1
tts/pico2wave/pico2wave.py

@@ -1,5 +1,4 @@
 import subprocess
-import os
 from core import AudioPlayer
 from tts import TTS