Merry Christmas to our visitors (4k hits yay).
It's that time of year again (for me at least) when you need to be thinking about which critical systems will be left running and unattended over the festive period. In these days of virtualised systems, some of us need only worry about our personal workstations and desktop accessories. If you are looking after your company's servers or infrastructure over the festive period then my thoughts and gratitude are with you. We take so much of our digital lifestyle for granted these days (home internet, film streaming services, banking systems, IP telephony etc.) and often forget that while most of us may be away from the office, some poor soul may be called out at a moment's notice to fix these things.
I consider myself lucky not to be in that situation these days, although it wouldn't be a holiday without at least one person bringing me a broken computer to fix. As for work, I shall be removing the power cable from my machine at 5pm this afternoon and not have to worry about it getting hacked. No doubt the price of this will be lots of updates on January 6th but at least I'm doing my bit to save the planet by turning things off :).
All the best for 2014 from the oblogs crew.
20 December 2013
17 December 2013
MIT App Inventor 2
DON'T DO IT!!!!
MIT, what have you done? The blocks editor now opens in a tab rather than a new pop-up window. Can I change it back? The second window was useful because I could drag it over to my second monitor - much easier when you can see the design and blocks windows at the same time.
Not only that, but the AI2 Companion app doesn't seem to work on my original Nexus 7 ("Your device isn't compatible with this version"?). I know there have been lots of driver issues connecting to Windows boxes over USB, but a shortcut to adb.exe was enough to fix those when we encountered them.
Oh well I guess it's back to AI-1 for now.
28 November 2013
Using HTTP Post in App Inventor
This is a quick & dirty demo of how to post data from App Inventor and process it using a PHP script on a web-server. Although there are plenty of examples of using HTTP GET, I couldn't find a demo of how to use HTTP POST, so here is my streamlined version. It's taken me a while to grasp how this works so I want to keep a record for future use. I will go into the reasons not to use this "as is" afterwards.
First of all here is the important bit (for impatient people).
The important bits to grasp here are:
Set the Web.Url property to the PHP file which processes the user data
(http://server:port/filename.php)
Use Web.PostText to send the data. Add "call make text" to send multiple items.
Multiple items should be in the format data_item_name=data, and the second (and any further) fields should have an ampersand preceding them. A quick look at the Wikipedia article on the HTTP POST message body is a good idea if this is puzzling you.
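To make that concrete, with the two fields used below, the text assembled by "call make text" ends up looking like this (the values here are just placeholders of mine):
username=bob&password=secret123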
So I have two fields, username and password, which are posted to my PHP server when the Authenticator button is pressed. The next step is to look at what happens at the server. It is a very simple script which just echoes the variables back out. Here's the listing of Appresponder3.php:
<?php
# Test routine: echoes the posted form data back to the caller
$uname = $_POST['username'];
$upass = $_POST['password'];
echo "$uname,$upass";
?>
So very simple. Notice there are no HTML tags output by this code. This is not by accident. Once the button is clicked, our data should be sent to the PHP server which returns the data to the sending app in the format username,password. Whatever is output from the Appresponder3.php script ends up in the responseContent block so if you have HTML formatting in that file, you will see the HTML tags in with the result data when it is later passed into lbl_Result.Text. I have avoided this to keep the code simple - no extra parsing of the PHP output to extract the bits we want. This is not difficult to do in App Inventor, but I like to keep things simple when explaining.
The Web.GotText event is triggered by the response to the Web.PostText call (or possibly a network time-out!). Whatever the result, it is passed into lbl_Result.Text, which is displayed on the app screen. It's a good idea not to assume everything will work perfectly every time, and to check for some common errors (what happens if there's no network, for example?). I'm only checking for a few of the possible codes here: 200 = success, 401 = not authorised to access the PHP file, and 1101 is returned when there's no network connection.
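If you want the server side to be a little more defensive too, a variant of Appresponder3.php along these lines (a sketch of my own, not the original file) returns an explicit error string when a field is missing, which the app can then test for in Web.GotText:
<?php
# Sketch of a more defensive responder - my own variant, not the original file
# Returns an explicit error string the app can test for in Web.GotText
if (!isset($_POST['username']) || !isset($_POST['password'])) {
    echo "ERROR:missing_fields";
    exit;
}
echo $_POST['username'] . "," . $_POST['password'];
?>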
You can check that the PHP is working by calling it from a plain HTML form. Just save this code on your web-server in the same directory as your Appresponder3.php file and call it Appresponder3.htm:
<html>
<head>
<title>Appresponder Test</title>
</head>
<body>
<form action="Appresponder3.php" name="form" method="post">
<table width="600" border="1">
<tr>
<th><div align="center">Username</div></th>
<th><div align="center">Password</div></th>
</tr>
<tr>
<td><div align="center"><input type="text" name="username"></div></td>
<td><div align="center"><input type="text" name="password"></div></td>
</tr>
</table>
<input type="submit" name="submit" value="submit">
</form>
</body>
</html>
Notice that the names of those form elements are exactly what we have in the "call make text" block. We are effectively defining names for our data, and these must match the names later plucked out by our PHP script. Now just browse to http://yourserver/Appresponder3.htm and submit some values. If everything is OK, the submit button should return a web-page which just displays the chosen username and password. If not, then you have a server configuration issue to resolve which is beyond the scope of this tutorial.
If you have the code working in your browser but all you get in your app is a 401 error, check that your web-server is configured to allow anonymous access to the PHP file. It should be possible to specify an account with access rights instead, but I haven't figured out how to do that yet, so for now anonymous access is the way to go.
Now I will point out the very important reasons not to use this code as-is.
At the moment the web-server must be configured to allow anonymous access for this to work, and those usernames and passwords are sent in clear text, so anyone monitoring the network could easily discover them and get into your system. The only reason I'm posting this is that there are so few examples of using HTTP POST online at the moment. It could be used as the basis for simple client-server requests, so I have published it to help get people started. Please don't use this for real logins without some form of data encryption.
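As a pointer for anyone building on this, a real login script would at least compare the posted password against a stored hash rather than echoing it back. Here is a minimal sketch of my own (assuming PHP 5.5 or later for password_verify, and a hash you have generated yourself with password_hash); note it does nothing about the clear-text transport problem, so you would still want HTTPS on top:
<?php
# Sketch only: check the posted credentials against a stored hash
# The username and hash below are placeholders - generate a real hash with password_hash()
$stored_user = 'bob';
$stored_hash = '$2y$10$placeholderplaceholderplaceholde';
$uname = isset($_POST['username']) ? $_POST['username'] : '';
$upass = isset($_POST['password']) ? $_POST['password'] : '';
if ($uname === $stored_user && password_verify($upass, $stored_hash)) {
    echo "OK";
} else {
    echo "DENIED";
}
?>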
I also believe that the values in the "make text" block should be properly encoded, as described in the Wikipedia article. Oddly enough this does seem to work without encoding, even with a username like "Bob Brown" in which the space should really be converted to %20 to be compliant. I have tested this on a Nexus 7 using Windows Server 2008 for the PHP processing and it seems to work OK, but as I said it's quick and dirty.
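If you do want to encode values before posting them, the transformation is easy to see from PHP (just an illustration of the encoding itself, not something the responder script needs):
<?php
# Illustration: URL-encoding replaces the space in "Bob Brown" with %20
echo rawurlencode("Bob Brown");   # prints Bob%20Brown
?>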
17 October 2013
Slackware, Puppies and the Debians
It's been a while since I've posted anything and this might sound like a rant-post, but I need to make a record of recent discoveries while they're still fresh in my mind.
As part of our annual update process I recently had to move four Slackware boxes out of a KVM environment, as a step towards setting them up as standalone Windows servers. The guy who previously looked after these was a devoted Slackware fan and probably has a dozen different Slackware machines at home now he's retired.
Now, being the more cautious person in our team, the thought struck me that it might be a good idea to have a backup of the htdocs directory from these four servers, just in case there is anything on them which needs to be preserved. It surprised me how much of an issue this can be. First of all, it's easy enough to mount a USB stick and copy the files over from the command line, but trying to then back these up to a Windows box caused all sorts of issues with unreadable files.
The next idea was to connect up a second drive for the data and boot from a Debian 7 live disk - the idea being that we can reverse this process quite easily to restore the files later if needed. The first problem here turned out to be that the data drive was already NTFS formatted, and Debian would only mount it read-only. I was further baffled by the new start-up menus, and then I discovered the second main problem - the apparent lack of the GFDisk partitioning tool. Even cfdisk seemed determined to only show the primary drive for some reason. The logical solution seemed to be to connect the data disk as the primary drive and install Debian to it (disconnecting the Slack drive in the process, of course), then create some folders in the root for SERVER1, SERVER2... etc.
While the installation process was aesthetically pleasing, with the blue install screens and a red progress bar, I was a little surprised at the end to reboot into X and discover that I could not log in as the root user (seriously?). Not a big problem, as I had also set up a student account and I used that to log in. Then I discovered I could not create my SERVER1... etc. folders in the root of the file-system. Well, that's understandable as a student I suppose. Then I tried sudo mkdir, only to be told that my student account was not allowed to use the sudo command and that the incident would be reported to the administrator. At this point I'm thinking... "What? You're going to pester me every time a student tries to sudo?". While this is obviously the operating system behaving as it should in a big multi-user enterprise, I began to question its usability for our technically-savvy-just-about-sane-but-non-historical-unix-users group.
It was then that a little lightbulb (or LED) turned on somewhere (probably the Pi lab) and I started searching for my most recent Puppy Linux disc. It booted up perfectly on all four servers, it let me see which drives were connected, mount them and create folders where I wanted them, and it even copied the files faster than I had managed from the Slackware command line earlier. I could only fault Puppy on one very minor thing: the properties option on directories does not actually state the number of files they contain. This would have been useful, as checking my folders showed some differences (1-2 MB) on some of the directories. I put this down to transferring from Slackware's ext2 file-system to Debian's ext4, but a file count would have added a little more confidence in the copying process.
So while I may need to brush up on my Debian for our Pi students, I think I will keep my Puppy Linux disc around for the important work; like restoring the servers after the new guy has installed Windows on them without checking whether the data is important first. If this ever appears in a BOFH episode I want credit or a complimentary insulation-tester <Kzzzeerttttt!!>
15 May 2013
Raspberry Pi - The Time-lapse Dolly Project and Windows-compatible SD partitions
About a week ago I sat down and thought about our impending VC visit to see our little Raspberry Pi lab, and I started thinking about what we could do as a little example project. I've deliberately kept this as my little secret because I wanted to do something that nobody else here has thought of - but what should I tackle, and could I do it in a week?
I started to think about what visual computer-jiggery (technical term) has most impressed me this year, and the answer came from Netflix. Yes, I was impressed by the opening credits for the US version of House of Cards. The technique they use is called motion time-lapse. Time-lapse is where you create an animation from single photos or stills taken every minute or so, then joined together into a film. It looks far more effective with the added motion though. This comes from something called a time-lapse dolly, which is a technical term for slow-moving-thing-that-the-camera-sits-on.
I've recently been playing with 'Motion' on the Pi, which is a great little web-cam application that can also do time-lapse. I did some time-lapse using Windows applications a few years back so I already knew how to do that part (as you can see here). I can say that if anything, Motion makes the process even easier as it will chain the stills together for you and output an MPEG clip.
So my challenges now are two-fold. I would like to output the clip to a Windows box so I can add some music, and I want that music to come from Schism Tracker (i.e. produced on the Pi). I doubt the Pi is quite powerful enough to do the video editing - at least not in the timeframe required (tomorrow). I'm not even sure if my old Amiga ProTracker days are enough to quickly get me up to speed with Schism Tracker, but that's for later. First I need a motion platform, and I'd like that to be controlled by the Pi; I also need to be able to get the footage off the Pi for editing.
Luckily I have an 8GB SD card with the 4GB Wheezy image installed, so that meant there would be space. I had to use 'sudo apt-get install gparted' from the terminal to install Gparted, which didn't seem to be there to start with. (If that command fails, remember to try 'sudo apt-get update' & 'sudo apt-get upgrade' and go for coffee while they do their thing.) Once I got Gparted installed, I could run it from the terminal but it wouldn't let me create a DOS-compatible partition. Odd, I thought, and then I went down the cfdisk route (cfdisk /dev/mmcblk0) which did the same thing. It turns out the Windows file-system support comes from another package which may not be installed. Try 'sudo apt-get install dosfstools'; at this point the FAT16 & FAT32 (i.e. MS Windows compatible) formats appeared in Gparted - this also fixed the command-line version 'mkfs.vfat /dev/mmcblk0p3', which was previously not working. Now I just need to mount this at boot time and get Motion to output its files to the Windows-compatible partition - I will find out later if this works.
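For the mount-at-boot part, a line in /etc/fstab should do it. Something like the sketch below, though the device name and mount point here are my assumptions based on the third partition created above (1000 is the UID/GID of the default pi user):
# create the mount point first with: sudo mkdir /mnt/fatshare
# then add this line to /etc/fstab (device and mount point are my assumptions)
/dev/mmcblk0p3  /mnt/fatshare  vfat  defaults,uid=1000,gid=1000  0  0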
== Edit ==
While the extra space was usable by the Pi, my Windows XP box insisted the data partition was not available because "the partition or volume is not enabled"? Restarting my PC as suggested obviously did nothing to fix this. It looks as though Windows is not inclined to play nicely with partitions on SD cards so in the end I used winscp to copy the files I needed over to my Windows PC.
== End Edit ==
It's typical that with this rush job on the go, I've had loads of support requests further depleting my available time. Luckily I started thinking about the dolly last week, and this is where my lack of electronics knowledge has let me down a little. The more I read about interfacing the Pi's GPIO to things, the more it sounds like a bad idea, and I don't want to risk my precious Pi (any of them). Anyway, back to the dolly. If you watched my timelapse example you might have noticed that my son was fascinated by recycling trucks when he was younger. I have now recycled one of these; it has a simple motor (forwards only) and makes a perfect dolly - I just need to control the motor.
Once I started reading into it, I found there were lots of ideas to try but they all seemed a bit risky. I read things which suggested the Pi should not be used to power any loads (i.e. the motor), and some even suggested it could not provide enough power to control a relay without risk. Luckily, someone out there mentioned opto-isolators and the idea took hold. I already have a project where the Pi drives some low-power LEDs and so far that Pi has not blown up. The recycling truck also has an LDR fitted, which I figured I could use in line with a battery source to drive the motor. And this is where I really need the electronics background, because it just didn't work. It seems even when you shine a torch at it, the resistance is still too high to sit in line with the motor.
== Update ==
I discovered some opto-isolator chips at Maplin which I figured would be worth a go for the price. Sadly I didn't manage to get the project working in time, so I need to do some further debugging on the motor circuit. It works fine when the opto is bypassed, and the opto itself works, as proven by an in-line LED; the issue seems to be when the two are connected in series. So I didn't get this working in time, but we still did a time-lapse demo. Sadly I also didn't have enough time to knock together a quick Schism module, as I lost a lot of time debugging sound (eventually fixed using 'sudo apt-get remove pulseaudio').
9 January 2013
Raspberry Pi, RISC OS and playing MP3 files from BASIC
I am currently really enjoying a quiet spell in which I can finally perform a few experiments with my Raspberry Pi. I'm sticking with RISC OS for now because it seems like a very useful hobbyist platform. Today I have figured out how to play MP3 files using the included version of BBC BASIC.
I should point out straight away that this uses a third-party application which, fortunately, is freeware. So to start, use the !PackMan icon on the desktop and download the application, which is called Madplay. I found it easily by entering Mp3 into the search filter. Once downloaded, just double-click it (it should be in your Apps/Audio directory). It appears this survives a restart, so you only need to do it once.
The next bit is to download an MP3 file (I'll leave that to you but suggest you keep it legal). I set up a folder called Mp3 in the root folder for this. Then you just need a single line of code from within BASIC to activate the sound. This is:
*madplay "/SDFS::RISCOSpi.$/Mp3/filename"
Notice the asterisk at the start is important. The player does not seem to multi-task at present but if you need to get control back before your mp3 ends, just press escape. I may well use this as a starting point for my ultimate alarm-clock project at some point. I quite like the idea of a random wake-up alarm instead of the standard electronic bleeps which cuts short my blissful rest every morning. Not sure how the other half will react to the sound of Robin Williams shouting "GOOOODDD MORNNNINNNG VIEEEEETNAMMMMM!!" but it's one of those projects you know will be memorable.
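For anyone who wants that call inside an actual program rather than typed at the prompt, this is roughly how I would wrap it - a sketch only, with the same placeholder path as above (OSCLI just hands the string to the command line, so the leading asterisk isn't needed):
10 REM Sketch: play an MP3 from a BBC BASIC program via madplay
20 REM The path below is a placeholder - point it at your own file
30 file$ = "/SDFS::RISCOSpi.$/Mp3/filename"
40 OSCLI "madplay """ + file$ + """"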
8 January 2013
RISC OS vs Trolls
It seems hard to believe how something so old can be recycled into something so wonderful. Twenty-five years ago the UK home computing market was dominated by two machines: the Sinclair ZX Spectrum and the Commodore 64. Both had vast games collections and everybody I knew back then belonged to one camp or the other.
There was of course another alternative, as schools favoured the BBC Model B, and the company that made them (Acorn Computers) also released a home computer known as the Acorn Electron. Nobody I knew back then had either of these machines, as none of us could afford the Model B or really wanted an Electron to do homework on when we could be playing Manic Miner or Jet Set Willy. Looking back, there wasn't much encouragement to learn to program, as our secondary school had one computer room full of Research Machines computers (Link 480Zs), which was something too specialised to have at home (and we wanted games). At some point I figured out the hexadecimal numbering system by myself (it was never taught in our maths classes) and learned to write a few simple programs in machine code.
Now some people will try to tell you that nobody programmed in machine code and that you used a higher-level language known as assembler, but some of us didn't have an assembler, so we picked the instructions' decimal values from the Z80 reference book (a small paperback) and wrote a BASIC program to poke those values into memory. Where we usually got stuck was trying to interact with the computer's ROM routines, which did all the interesting stuff (like printing a character on the screen), but we just didn't have access to how that worked. The internet didn't really take off for us average folks until 1996 - over a decade after we got our first home computers.
As we entered the 90s we finally ditched our old 8-bit machines for something new and more powerful. Sinclair sold out to Amstrad and the Spectrum died by not keeping up with changing technologies. Instead we witnessed two new challengers for the title of best home computer: the Commodore Amiga (500) and the Atari ST (STE/STFM). Now, everyone I knew was a member of the Commodore camp, as it was a graphically impressive machine and had 4-channel stereo sound. The Atari ST became the machine of choice for musical types, mainly because of its built-in MIDI interface. We weren't convinced though, as it had the AY-3-8910-family sound chip, the same chip family fitted to the Sinclair Spectrum 128K models a few years earlier. There was also a third contender in the group, the Acorn Archimedes, and I only remember knowing one person (a friend of a friend) who was excited about this. I could never understand why.
Fast forward to the present and it seems like the Archimedes was something of a Schwarzenegger. The CPU (the computer bit that does most of the work) of the Archimedes was an early version of the chip family which is now powering the Raspberry Pi. Why is this significant? Well, at the time these chips used a technology called RISC (Reduced Instruction Set Computer). The idea was that if you kept the number of instructions the processor knew to a minimum, it would run faster. Of course this would mean more work for programmers, because they would need to write several simple instructions to do things that CISC (Complex Instruction Set Computer) chips could do in a single instruction. But the Archimedes was a 32-bit machine with a graphical interface - an early competitor to the rising behemoth of Microsoft Windows, which would eventually do to the Amiga & ST what those machines had done to the Spectrum and C64.
We thought the Archimedes and its operating system (called RISC OS) would ultimately go the way of the Enterprise (a relatively unknown machine with a built-in joystick which was advertised on TV as having obsolescence built-out, but which failed to convince any of us of that and became... well... obsolete). Yet here we are in 2013 and old RISC OS is back in a big way (or should that be a small way). It's now available as a free download for the Raspberry Pi and I recommend all of you Pi owners get yourselves a 2GB SD card and have a look.
How much credibility should techie-types give to an OS which has practically died once already? Well, my answer is lots. Microsoft went down the road of "let's code it and the next round of hardware will be fast enough to make it run well". The trouble is that keeping up in this way means that to run their stuff well you pretty much need to buy a new system every year or two to get decent performance. This puts people off trying to keep up and keeps technology out of the hands of those hoodied groups who've actually got enough spare time to dedicate towards becoming the future of computing but can't afford the premium prices. People now seem to want tablets or phones with enough power to do basic computing. My own nephew got a tablet computer this Christmas - not the latest Nintendo game system or a desktop to run power-hungry games, but something portable. I find it interesting to see youngsters opting for less power but greater portability.
I can't help wondering if there will be a point where the computer's internals and form-factor become even more separated. Not so long ago you bought a case and then chose your motherboard, processor, memory, graphics card and hard-drive, and then Linux also brought us more choice of operating system too. Maybe soon we will see people choosing their processor board, memory board and solid-state storage card and then sliding those into a tablet case, a netbook case or even a portable media-player case. Maybe those modules will slide into bigger desktop cases to create multi-processor power systems. Who knows?
Initiatives like the $100 laptop have paved the way for small, inexpensive project boards like the Pi & BeagleBoard to find their way into the hands of hobbyists, and a push to get the Pi used more in education is having the effect of putting computing back into the grasp of youngsters. Throw RISC OS into the mix and you get something which is quite unique. RISC OS runs incredibly well on the Pi - rightly so given the heritage. It's quick to start up, seems to have an industrial level of stability and has the old BBC Model B BASIC programming language built in. Not only that, but it seems that from BASIC you can turn assembler into runnable code quite easily without a third-party application. Within a day I managed to pick up enough BBC BASIC and use the GPIO package to program a simple traffic-light system. The hobbyists are going to love this speed and simplicity, as it is what this platform does well.
Sadly, to reuse the Schwarzenegger analogy from before, he's back but it's obvious he's a robot now. RISC OS isn't perfect. At the moment it has the NetSurf browser, which provides a very fast and very good browsing experience (I use it almost daily), but it lacks certain features like JavaScript. I'm all in favour of being able to turn JavaScript off (reduces malware/viruses y'know) but it seems I couldn't write this blog-post without it. Also, not having Flash has made for a browsing experience that reminds me of the early days of the internet. It's a bit like being back in the mid-to-late nineties when we started to see advertising images appearing everywhere on web-pages, but before they started to launch pop-ups or use up your bandwidth by downloading streaming video adverts which you never really wanted anyway. It's a great browsing experience - less annoying and much faster. Will it appeal to today's generation who feel the need to upload everything they do to YouTube though?
Well, the internet trolls seem to think not. That's what gave me the title for this post, after spending some time over at the RISC OS forum on the Pi Foundation's website. The trolls wasted no time in posting links along the lines of "have a look at this" and then linking to sites which couldn't be viewed due to the lack of JavaScript. While some of us are impressed at a tiny, speedy OS which turns the Pi into something useful, there is sadly a minority who take pleasure in pointing out flaws. Some of the criticisms are quite justified, as these days I think we have an expectation to be able to do a certain amount with a computer without spending a fortune on third-party software to do basic things.
The essential features of an operating system have gone way beyond what they were when RISC OS was originally developed. Nowadays we expect, as a minimum, the ability to browse the internet, open a PDF or MS Office document, print something out, connect to a network drive (or cloud storage of some sort), transfer files from one device to another (between USB flash drives for example) or view a streaming video. All of these are valid requirements, and at the moment RISC OS can only do a few of these tasks well without buying extra software.
Sites like YouTube and VideoJug provide tutorials as well as videos of people's pranks. At the moment you can't just browse those sites and view content as you can on Windows (or Linux variants with a suitable Flash plug-in). Would such a plug-in reduce the speed of browsing in NetSurf to the point where you might as well go back to using a Windows/Linux system? If so, that would be too high a price in my opinion. Which raises another point: RISC OS has a desktop link to its very own Pling Store. At the moment many of the packages on there seem over-priced, and there don't seem to be demo or lite versions to find out if they work or do anything useful.
So it seems that while RISC OS will definitely appeal to us techs, to anyone who remembers using a BBC Model B and to all the hobbyists out there in the world, I can't see it being the saviour of hoodies everywhere just yet. For that to happen, it will need to address its short-comings. Let's also hope that the internet can help today's interested youth overcome the obstacle to programming which we had back then - a lack of good information about how to program what you wanted. Most importantly of all though, give the kids what we wanted back then, which is some decent games that don't cost an ARM and a leg (apologies for the pun!).