9 March 2016

Pi Sense HAT slideshow

Just recently I've been looking into using the Pi Sense HAT to control screen content. What I expected to be quite simple turned out to be trickier, for one reason or another. What I set out to achieve was a picture slideshow that could be triggered by moving the Pi (with the Sense HAT attached).

The first issue I encountered was with PIL (the Python Imaging Library). It seemed to just be a way of passing filenames from Python to a separate image viewer within Raspbian — PIL appeared to hand the data to either 'xv' or 'display', which you can trigger manually from a console using 'display filename.png' or 'xv filename.png'. I came across articles suggesting you install ImageMagick if this doesn't work. Fortunately it did work, but not how I wanted. From within Python, a new viewer instance started each time I opened a new filename, so instead of a single window with a picture that changed, I ended up with multiple windows, each with an icon of the image slapped right in its centre. Not particularly useful for what I wanted.

For my next tests, I thought I'd try the same thing in Scratch, as I'd read that Scratch can read from and write to the Sense HAT by referencing the GPIO. Unfortunately the accelerometer support isn't quite there yet. I experienced frequent crashes, especially when trying to make Scratch play nicely with the Sense HAT, so I went back to the drawing board.

So it was back to Python, and this eventually paid off after a lot of searching and experimenting, a few extra installations, and a change to how my code works. First I went through a succession of image viewers and finally found the right one for me, called 'feh'. I also changed the way I send images to it. Initially I was trying to cycle through a list by continually changing the filename using Python strings. In the final version, I give feh the whole list of files up front, then check whether the accelerometer readings show any sign of movement. If they do, I emulate a user pressing the 'advance to next image' key in feh. Doing this requires another installation, a package called xautomation (which provides the 'xte' command).
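The keypress emulation boils down to piping a command string into xte's standard input. The same subprocess pattern can be tried with any program that reads stdin; the sketch below substitutes 'cat' for xte so it runs anywhere (xte itself needs a running X display), and the helper name is my own, not part of the final script:

```python
from subprocess import Popen, PIPE

def send_to_stdin(cmd, text):
    # Start the program and write 'text' to its standard input --
    # the same pattern the slideshow script uses to drive xte.
    p = Popen(cmd, stdin=PIPE, stdout=PIPE)
    out, _ = p.communicate(text.encode())
    return out.decode()

# With xte you would call: send_to_stdin(["xte"], "key N\n")
# Demonstrated here with 'cat', which simply echoes its stdin back:
print(send_to_stdin(["cat"], "key N\n"))   # prints: key N
```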

The end result is not perfect, as it prints errors to the console when you quit feh (using either Q or the Escape key), but it does more or less do what I wanted. The instructions and code below are a memory aid, as I know I'll probably need something like this again at some point.

Set-up (Pre-Requisites): From the command line, enter:

sudo apt-get install feh
sudo apt-get install xautomation

Create a folder on the Desktop called python and a subfolder in there called images. Save .png images into this folder, numbering them sequentially (e.g. 1.png, 2.png up to 8.png). Create a new file in the images folder called slideshow.py (you can use the Geany editor for this; run sudo apt-get install geany if it's not already installed). Just copy the code here and save it.
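For reference, that folder layout can be set up from the command line too (paths as described above; adjust if your Desktop lives somewhere else):

```shell
# Create the folders the slideshow expects
mkdir -p ~/Desktop/python/images
# Put your numbered images (1.png ... 8.png) and slideshow.py in here
cd ~/Desktop/python/images
```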

 from subprocess import Popen, PIPE
 from time import sleep
 from sense_hat import SenseHat

 sense = SenseHat()
 pic = 1

 #cursRight = 'keydown Right'
 #cursRightUp = 'keyup Right'
 kN = 'key N'

 def keypress(sequence):
     # Pipe the key sequence into xte's stdin (via p.communicate)
     p = Popen(['xte'], stdin=PIPE)
     p.communicate((sequence + '\n').encode())

 # Command-line args are enclosed in quotes, separated by commas -
 # all enclosed within square brackets
 proc = Popen(["feh", "-x", "-F", "-N", "-Y", "1.png", "2.png", "3.png",
               "4.png", "5.png", "6.png", "7.png", "8.png"], stdin=PIPE)

 while True:
     x, y, z = sense.get_accelerometer_raw().values()
     x = round(x, 0)
     y = round(y, 0)
     z = round(z, 0)
     # m is the sum of all three axes
     # z typically reads 1 (gravity), so this becomes our baseline for
     # movement detection
     m = x + y + z
     if m > 1:
         # i.e. something moved
         pic = pic + 1
         if pic > 8:
             pic = 1
         # We could use Popen here instead if we wanted to run a different
         # shell command for each picture, e.g.
         #fname = "/home/pi/Desktop/python/jd/" + str(pic) + ".png"
         #flop = Popen(["xv", fname])
         # kN is defined above. This passes 'key N' to the keypress function,
         # which in turn pipes it to 'xte' via its stdin (p.communicate)
         keypress(kN)
         sleep(0.5)
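
One thing worth tuning is the movement threshold. Rounding each axis means the Pi at rest reads roughly (0, 0, 1) thanks to gravity, so the summed value sits at 1 until you shake it. Pulling that logic into a small helper (my own naming, not part of the script above) makes it easy to experiment with off the Pi:

```python
def is_movement(x, y, z, baseline=1.0):
    # Round each axis to the nearest whole number and sum them.
    # At rest the accelerometer reads about (0, 0, 1) because of
    # gravity, so anything over the baseline counts as a shake.
    m = round(x, 0) + round(y, 0) + round(z, 0)
    return m > baseline

# On the Pi itself you would feed it live readings, e.g.:
#   from sense_hat import SenseHat
#   sense = SenseHat()
#   x, y, z = sense.get_accelerometer_raw().values()
#   print(is_movement(x, y, z))

print(is_movement(0.0, 0.02, 0.98))   # resting on the desk -> False
print(is_movement(0.7, 0.1, 1.1))     # given a shake -> True
```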

To run the program, you need to be in the X windows environment (run startx if your Pi boots into the command line). Start a command terminal and enter:

cd Desktop/python/images
python slideshow.py

Sit back, give your Pi (with the Sense HAT fitted) a slight shake and it should cycle through the images in response to the movement. Feel free to expand on this code or create a better version. The additional arguments after "feh" set a few command-line switches for full-screen display, turning off captions and so on. Refer to the feh docs for info on these (or run feh --help from the command line). Code on this page was formatted using CodeFormatter - thanks to the author of that little gem, whoever you are :)