The talking boat - Part 1 - A Raspberry Pi project

As some compensation for missing out on this year's cruising due to engine problems, I treated myself to a Raspberry Pi and time to work on a pet project of mine, the talking boat. It's an opportunity to take a holiday from XQuery and do some Python programming, at which I'm pretty much a novice.

Data comes to the yachtsman in visual form, on dials or screens. Several factors reduce attention to it and can lead to problems. Foremost for me is worsening eyesight - I now need glasses, so reading instruments is a problem in poor light and in bad weather, and especially at night, when you need to preserve night vision. Another factor is the need for vigilance. Boats are not infrequently lost through going off course when the watch keeper is tired. Even at anchor or in a harbour, constant monitoring is needed in case the boat drifts or bad weather approaches.

So I'm interested in a system which can monitor the boat's status and report vocally, on request or on detection of a problem. I started playing with the Arduino, but when the Raspberry Pi came along it was obviously the platform to use, with the Arduino handling the interfaces to the sensors.

The first task is selecting software for Text-to-Speech (TTS). I've used the Java FreeTTS library before, and played with Festival, but found that espeak is a common package in Unix distros (though not in Raspbian Wheezy). espeak is quite small, the supplied voices are adequate, and it supports a version of SSML (Speech Synthesis Markup Language). Although it provides an API, at present I'm running it in a shell from Python:
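
Something along these lines (a sketch only; the say helper is illustrative rather than the listing in the repo):

    import os

    def say(text):
        # run espeak in a shell; quote the text so that spaces survive
        os.system('espeak "%s"' % text)

    say('Hello skipper')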

This seems OK for short utterances, although the SSML XML needs to be escaped to pass through on the command line:
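
For example (a sketch: pipes.quote does the shell escaping here, and espeak's -m flag asks it to interpret the input as SSML):

    import os, pipes

    ssml = '<speak>The depth is <emphasis>two</emphasis> metres</speak>'
    # quote the SSML so that the shell passes < and > through untouched
    os.system('espeak -m ' + pipes.quote(ssml))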

so a simple talking clock would be:
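
Something like this sketch (the wording of the announcement is my own):

    import os, time

    while True:
        # speak the time, then wait a minute before the next announcement
        os.system('espeak "%s"' % time.strftime('The time is %I %M %p'))
        time.sleep(60)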

In a yacht application, the skipper needs to be able to select which data is to be vocalised, or which condition is to be monitored. When at anchor, the computer might monitor the position of the boat using GPS data, compute the distance from a marked position, and report (with varying degrees of urgency) if the distance exceeds a threshold. When sailing, the computer might monitor the deviation from the expected track, or, when approaching land, the depth of the water. We might also want to ask for arbitrary data, such as the speed or the engine temperature. So we need a way to switch between these modes from the cockpit.
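
To make the anchor-watch case concrete, the check might reduce to something like the sketch below; the haversine formula gives the distance in metres, and the anchor position, threshold and warning text are made up for the example:

    import math, os

    def distance_m(lat1, lon1, lat2, lon2):
        # great-circle distance in metres (haversine formula)
        R = 6371000
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
        return 2 * R * math.asin(math.sqrt(a))

    ANCHOR = (50.720, -1.880)   # hypothetical marked position
    THRESHOLD = 30              # metres of swing allowed

    def anchor_watch(lat, lon):
        d = distance_m(ANCHOR[0], ANCHOR[1], lat, lon)
        if d > THRESHOLD:
            os.system('espeak "Warning: %d metres from anchor"' % d)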

I've toyed with the idea of speech recognition, but a boat is a noisy environment and not the place to wear a headset, accuracy is a problem, and the resource demands might exceed what the RPi can provide.

When I was a lecturer at UWE, I used a wireless PowerPoint presenter (Labtec). It is an egg-shaped device with 4 buttons (and a laser pointer) and a USB receiver, giving it a range of about 5 metres. It freed me to wander, probably very annoyingly, around the stage. However, I dislike PowerPoint, so I developed an XML/XQuery browser-based version. I wanted to retain the ability to step from slide to slide, and realised that to the computer the presenter must look like a very limited keyboard, whose characters could be captured in JavaScript and used to control the browser.

The presenter makes a great little device for capturing limited input. I first needed to find out what characters each key generated. As seen by sys.stdin.read() (in character mode), they are:

  • Left: 27 91 53 126
  • Down: 98
  • Right: 27 91 54 126
  • Up: 27 alone, alternating with 27 91 49 53 126

This is a bit of a mess. Up alternates between two outputs, and the single escape can't be recognised without read-ahead or a timeout. A state-based lexer could be written, but the sequences contain unique character pairs, so a simpler recogniser suffices. I accept that the lone-escape form of Up will not be recognised, so Up will require two clicks.

The readkey function is written as a generator:
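
In sketch form (the decoding below keys off the distinguishing third character of each sequence; the full version is in the GitHub repo):

    import sys, tty, termios

    def readkey():
        # put the tty into raw mode, decode the byte sequences listed
        # above, and yield a key name for each press; the try/finally
        # restores the terminal settings when the generator is closed
        fd = sys.stdin.fileno()
        old = termios.tcgetattr(fd)
        try:
            tty.setraw(fd)
            while True:
                c = ord(sys.stdin.read(1))
                if c == 98:                        # 'b' - Down
                    yield 'Down'
                elif c == 27:                      # start of an escape sequence
                    c = ord(sys.stdin.read(1))
                    if c == 27:                    # lone escape from Up, then a new sequence
                        c = ord(sys.stdin.read(1))
                    if c != 91:
                        continue
                    c = ord(sys.stdin.read(1))     # the distinguishing character
                    key = {53: 'Left', 54: 'Right', 49: 'Up'}.get(c)
                    while ord(sys.stdin.read(1)) != 126:
                        pass                       # consume to the end of the sequence
                    if key:
                        yield key
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old)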

Thus a simple script to read and vocalise key presses is:
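
(using the readkey sketch above)

    import os

    for key in readkey():
        # speak the name of each key as it arrives
        os.system('espeak "%s"' % key)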

Four keys give only four possible inputs, but these are enough to move around a tree-based menu. I use XML to describe the menu, the prompts, and the action to be performed at each node of the tree.

Here is a first cut of a yacht menu:
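
Something of this shape, at least; the element and attribute names are my stand-ins for the real menu in the repo:

    <menu title="boat">
      <item title="date" action="'The Date is ' + time.strftime('%A %d of %B')"/>
      <menu title="anchor watch">
        <item title="set anchor position"/>
        <item title="distance from anchor"/>
      </menu>
      <menu title="sailing">
        <item title="course deviation"/>
        <item title="depth"/>
      </menu>
    </menu>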

and a class to represent a menu. The XML is parsed with minidom, and the menu function that reads presenter input and moves through the menu takes a function parameter, so that the caller can supply the action to be performed when a menu item is visited.
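
In outline (a sketch: the navigation conventions, Down to descend, Up to ascend, Left and Right between siblings, are my assumption):

    from xml.dom import minidom

    class Menu:
        def __init__(self, filename):
            self.root = minidom.parse(filename).documentElement
            self.current = self.root

        def elements(self, node):
            # element children only, skipping whitespace text nodes
            return [n for n in node.childNodes if n.nodeType == n.ELEMENT_NODE]

        def move(self, key):
            if key == 'Down' and self.elements(self.current):
                self.current = self.elements(self.current)[0]
            elif key == 'Up' and self.current is not self.root:
                self.current = self.current.parentNode
            elif key in ('Left', 'Right'):
                siblings = self.elements(self.current.parentNode)
                i = siblings.index(self.current)
                step = 1 if key == 'Right' else -1
                self.current = siblings[(i + step) % len(siblings)]

        def run(self, visit):
            # visit is supplied by the caller and acts on the current node
            visit(self.current)
            for key in readkey():   # the generator above
                self.move(key)
                visit(self.current)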

Now we can walk through a menu, saying the item titles as each item is visited:
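
For instance (boatmenu.xml is a hypothetical file name):

    import os

    def speak_title(node):
        os.system('espeak "%s"' % node.getAttribute('title'))

    menu = Menu('boatmenu.xml')
    menu.run(speak_title)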

The menu can be elaborated with action attributes:

action="'The Date is ' + time.strftime('%A %d of %B')"

which can be executed (with exec) to create the text to say:
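
Roughly as follows (I've used eval in this sketch, since each attribute is a single expression):

    import os, time

    def perform(node):
        action = node.getAttribute('action')
        if action:
            text = eval(action)    # evaluate the expression to build the text
        else:
            text = node.getAttribute('title')
        os.system('espeak "%s"' % text)

    menu.run(perform)   # menu as constructed above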

The code for this first experiment is on GitHub. There are a few glitches to clean up. It has run on my Raspberry Pi but:

  • when a script using the presenter exits, command-line input is no longer echoed (the same happens on my Ubuntu laptop) [Update 13 Sept - added a try: finally: block to the presenter code, so that when the generator is closed, the tty echo setting can be reverted to its original status - code]
  • ALSA generates lots of error messages when espeak executes (audio is via HDMI) [Update 13 Sept - there doesn't seem to be a way to suppress these messages, so the best I can do is dump the stderr stream]
  • the presenter interferes with other USB devices on the RPi: flash drives become invisible and the keyboard generates spurious input. This may be due to low voltage, so I need to change the power supply, currently a 4-port hub.

Monitoring tasks require asynchronous processes, and I'm still working on the best way to accomplish this. I think the simplest approach is to use separate UNIX processes communicating via the filesystem, for example using serialised objects. Refresh times are relatively slow, so the cost of serialising and re-creating objects (I'm using pickle) is not a problem.
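
As an illustration, status updates could be pickled to a well-known file; the path and the write-then-rename step (so a reader never sees a half-written object) are my own choices:

    import os, pickle, tempfile

    STATUS_FILE = '/tmp/boat_status.pkl'   # hypothetical path

    def write_status(status):
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(STATUS_FILE))
        with os.fdopen(fd, 'wb') as f:
            pickle.dump(status, f)
        os.rename(tmp, STATUS_FILE)

    def read_status():
        with open(STATUS_FILE, 'rb') as f:
            return pickle.load(f)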

Part 2 looks at vocalising weather from remote weather stations.

If you want a different "remote" option, how about a DS touchscreen and Arduino combo? Low power and tons of flexibility. Something like http://tbm.wikia.com/wiki/Remote_Control but probably more waterproof...
Hi Mark - that looks interesting - I actually have a DS touchscreen but I might need the hackspace crowd to do that for me :)