Nicolas Marcq 8 years ago
parent
commit
9648c6ce43
4 changed files with 149 additions and 24 deletions
  1. Docs/brain/brain.md (+8 -5)
  2. Docs/default_settings.md (+110 -0)
  3. Docs/dev_env_install.md (+29 -18)
  4. README.md (+2 -1)

+ 8 - 5
Docs/brain/brain.md

@@ -6,10 +6,11 @@ Brain is the link between input and output actions.
 
 An input action can be:
 - **an order:** Something that has been spoken out loud by the user.
-- **an event:** A date
+- **an event:** A date or a frequency (e.g. repeat each morning at 8:30)
 
 An output action is
-- a neuron: A module that will perform some actions like simply talking, run a script, run a command or a complex Ansible playbook.
+- a neuron: A module or plugin that will perform some action such as simply talking, running a script, running a command or running a complex Ansible playbook.
+- a list of neurons
 
 Brain is expressed in YAML format (see YAML Syntax) and has a minimal syntax, which intentionally tries not to be a programming language or script, 
 but rather a model of a configuration or a process.
@@ -26,7 +27,7 @@ Let's look a basic brain:
       - order: "say hello"
 ```
 
-Let's break this down in sections so we can understand how these files are built and what each piece means.
+Let's break this down into sections so we can understand how the file is built and what each piece means.
 
 The file starts with:
 ```
@@ -51,15 +52,17 @@ neurons:
 ```
 
 Neurons are modules that will be executed when the input action is triggered.
-Some neuron need parameters that can be passed as argument following the syntax bellow:
+Some neurons need parameters that can be passed as arguments, following the syntax below:
 ```
 neurons:
     - neuron_name:
         parameter1: "value1"
         parameter2: "value2"
 ```
+Note that parameters are indented with one tabulation.
 
-In this example, the neuron say will make Jarvis speak out loud the phrase in parameter **message**.
+
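+Following the syntax above, a call to the "say" neuron with its **message** parameter could look like this (the message text itself is just an illustration):
+```
+neurons:
+    - say:
+        message: "Hello, sir"
+```
+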
+In this example, the neuron called "say" will make Jarvis speak out loud the phrase given in the **message** parameter.
 
 The last part, called **when**, is a list of input actions. This list works exactly the same way as neurons. You must place at least one action here.
 ```

+ 110 - 0
Docs/default_settings.md

@@ -0,0 +1,110 @@
+# JARVIS settings
+
+This part of the documentation deals with the main configuration of JARVIS. 
+This configuration is stored in a file called settings.yml, placed at the root of the project tree.
+
+The syntax used is YAML.
+
+# General defaults
+
+In the settings.yml, the following settings are tunable:
+
+#### trigger
+
+The current hotword (also called a wake word or trigger word) detector is based on [Snowboy](https://snowboy.kitt.ai/).
+Common usages of hotwords include Alexa on Amazon Echo, OK Google on some Android devices and Hey Siri on iPhones.
+With the JARVIS project, you can set the hotword you want. You can create your own magic word by connecting to [Snowboy](https://snowboy.kitt.ai/) 
+and then downloading the trained model file.
+
+Once downloaded, place the file in **trigger/snowboy/resources**.
+
+Then, specify the name of the Snowboy model using the following syntax:
+```
+trigger:
+  name: "my_model_name.pmdl"
+```
+
+#### default_speech_to_text
+
+A Speech To Text (STT) engine is used to translate what you say into text that can be processed by the JARVIS core. 
+By default, JARVIS uses the Google STT engine.
+
+You must provide an engine name in this variable, following the syntax below:
+```
+default_speech_to_text: "stt_name"
+```
+
+Available STT engines for JARVIS are:
+- google
+- bing
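+
+For example, to explicitly select the default Google engine:
+```
+default_speech_to_text: "google"
+```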
+
+#### default_text_to_speech
+A Text To Speech (TTS) engine is used to translate written text into speech, as an audio stream.
+By default, JARVIS uses the Pico2wave TTS engine.
+
+You must provide a TTS engine name in this variable, following the syntax below:
+```
+default_text_to_speech: "tts_name"
+```
+
+Available TTS engines for JARVIS are:
+- pico2wave
+- voxygen
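+
+For example, to explicitly select the default Pico2wave engine:
+```
+default_text_to_speech: "pico2wave"
+```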
+
+#### random_wake_up_answers
+When JARVIS detects your trigger/hotword/magic word, he lets you know that he's operational and is now waiting for an order by randomly answering 
+with one of the sentences provided in the variable random_wake_up_answers.
+
+This variable must contain a list of strings, following the syntax below:
+```
+random_wake_up_answers:
+  - "You sentence"
+  - "Another sentence"
+```
+
+E.g.
+```
+random_wake_up_answers:
+  - "Yes sir?"
+  - "I'm listening"
+  - "Sir?"
+  - "What can I do for you?"
+  - "Listening"
+  - "Yes?"
+```
+
+#### speech_to_text
+Speech to text configuration.
+Each STT has its own configuration, which is passed as arguments following the syntax below:
+```
+speech_to_text:
+  - stt_name:
+      parameter_name: "value"
+```      
+
+E.g.
+```
+speech_to_text:
+  - google:
+      language: "fr-FR"
+  - bing
+```
+
+Please refer to the STT documentation for the available parameters of each supported STT.
+
+#### text_to_speech
+Text to speech configuration.
+Each TTS has its own configuration, which is passed as arguments following the syntax below:
+```
+text_to_speech:
+  - tts_name:
+      parameter_name: "value"
+```
+
+E.g.
+```
+text_to_speech:
+  - pico2wave:
+      language: "fr-FR"
+  - voxygen:
+      language: "fr"
+      voice: "michel"
+```

+ 29 - 18
Docs/dev_env_install.md

@@ -1,35 +1,46 @@
 # Dev environment installation
 
-This documentation deals with the manual installation of components for developement of JARVIS.
+This documentation aims to explain the step-by-step manual deployment of JARVIS.
 
-## System tools
-## Audio tools
+Tested environment:
+- Ubuntu 16.04
+
+
+
+## Prerequisite
+
+### Packages installation
+On an Ubuntu distribution:
 ```
-sudo apt-get install libsmpeg0
+sudo apt-get install python-pip python-dev libsmpeg0 libtts-pico-utils flac
 ```
 
-## Text to spreech engine
-Install pico2wave on Linux
+### Python lib
+
+Install the libs:
 ```
-apt-get install libtts-pico-utils sudo apt-get install libsmpeg0
+pip install SpeechRecognition
+pip install pyaudio
+pip install ansible
+pip install pygame
+pip install python2-pythondialog
 ```
 
+### Test your env
+Run the following command to capture audio from your microphone:
+```
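+# a minimal sketch (assumption: arecord from alsa-utils is available; it is not in the package list above)
+arecord -d 5 /tmp/test.wav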
 
-install audio utils
 ```
-apt-get install flac
+
+Then play the recorded audio file:
 ```
 
-## Python lib
-You must have python pip to install library
 ```
 
+## Installation
+
+Clone the project
 ```
-Then install libs
+git clone <TODO set github address>
 ```
-pip install SpeechRecognition
-pip install pyaudio
-pip install ansible
-pip install pygame
-pip install python2-pythondialog
-```
+

+ 2 - 1
README.md

@@ -2,7 +2,8 @@
 
 JARVIS is a voice-controlled personal assistant. 
 
-TODO: insert video demo
+TODO: insert video demo EN
+TODO: insert video demo FR
 
 
 ## Installation