Tuesday, December 17, 2013

The kernel of an Arduino audio player

The VS10x3 DSP chips allow you to decode or encode various audio formats. This includes encoding and decoding mp3 and ogg vorbis, as well as decoding flac and many other formats. So naturally I took the well-trodden path and decided to use an Arduino to build a "small" audio player. The breadboard version is shown below. The show is driven by the Uno on the right. The vs1063 is on a Sparkfun breakout board at the top of the image. The black thing you see to the right of the vs1063 is an audio plug. The bottom IC in the line on the right is an attiny84. But wait, you say, don't you already have the Uno to be the Arduino in the room? Yes, I say, but the attiny84 is an SPI slave device which acts purely as a "display driver" for the 4-bit parallel OLED screen at the bottom. Without having to play tricks overriding the reset pin, the tiny84 has just enough pins for SPI (4), power (2), and the OLED (7), so it's perfect for this application.

The next chip up from the attiny84 is an MCP23017, which connects to the TWI bus and provides 16 digital pins to play with. There is also an SPI version of the muxer, the 23S17. The muxer works well for buttons and for chip selects which are not toggled frequently. It seems that either my library for the MCP is slow, or going over TWI for chip select simply can't keep up when selection/deselection happens in a tight loop around SPI operations.

Above the MCP is a 1Mbit SPI RAM chip. I'm using that as a little cache and to store the playlist for ultra-quick access; there is only so much you can do with the 2KB of SRAM on the Arduino. Above the SPI RAM are a 4050 and an SFE bidirectional level shifter. The three buttons in the bottom left allow you to control the player reasonably effectively, though I'm waiting for my next red box day for more controller goodies to arrive.
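For flavour, here is a minimal sketch of how a playlist might be laid out in that 1Mbit (128KB) SPI RAM: fixed-size records, so entry N is a single SPI read away. The 64-byte record size and the helper names are my own invention for illustration, not taken from the actual player code.

```python
# Hypothetical playlist layout in a 1 Mbit (128 KB) SPI RAM: fixed 64-byte,
# NUL-padded records, so track N lives at offset N * 64 and can be fetched
# with one SPI read. Sizes and names are illustrative assumptions.
RECORD_SIZE = 64
RAM_BYTES = 128 * 1024  # 1 Mbit

def record_offset(index):
    """Byte offset of playlist entry `index` in the SPI RAM."""
    off = index * RECORD_SIZE
    if off + RECORD_SIZE > RAM_BYTES:
        raise IndexError("playlist entry past end of SPI RAM")
    return off

def pack_entry(filename):
    """Pack a filename into a fixed 64-byte, NUL-padded record."""
    data = filename.encode("ascii")[: RECORD_SIZE - 1]
    return data + b"\x00" * (RECORD_SIZE - len(data))

def unpack_entry(record):
    """Recover the filename from a record: everything before the first NUL."""
    return record.split(b"\x00", 1)[0].decode("ascii")
```

With this layout the whole RAM holds 2048 entries, far more than the Uno's own SRAM could index.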

I've pushed some of the code to control the OLED screen from the attiny up to github, for example at:
https://github.com/monkeyiq/attiny_oled_server_spi2
I'll probably do a more complete write-up about those source files and the whole display driver subsystem at some stage soon.

Sunday, December 8, 2013

Asymmetric Multiprocessing at the MCU level

I recently got a 16x2 character OLED display from sparkfun. Unfortunately that display has a parallel interface, so it wanted to mop up at least 7 digital pins on the controlling MCU. Being a software engineer rather than an electrical engineer, the problem looked to me like one that needed another small microcontroller to solve it. The attiny84 is a 14-pin IC which has USI SPI and, once you trim off 4 pins for SPI and 3 for power, ground, and reset, leaves you 7 digital pins to do your bidding with. This just so happens to be the same 7 pins one needs to drive the OLED over its 4-bit parallel interface!
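The pin budget above can be spelled out with a little arithmetic. The split of the OLED's seven lines into 4 data plus 3 control lines (RS, R/W, E, as on HD44780-style character displays) is my assumption about this particular panel, not something stated in the post.

```python
# Pin budget for the ATtiny84 (14-pin DIP): after SPI and the power/ground/
# reset pins are spoken for, exactly 7 GPIOs remain, which happens to match
# what a 4-bit parallel character display needs. The 4+3 split of the OLED
# lines is an assumption about the panel.
TOTAL_PINS = 14
SPI_PINS = 4            # MOSI, MISO, SCK, chip select
POWER_RESET_PINS = 3    # VCC, GND, RESET
OLED_PINS = 4 + 3       # 4 data lines plus (assumed) RS, R/W, E control lines

free_pins = TOTAL_PINS - SPI_PINS - POWER_RESET_PINS
```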


Driving an OLED screen over SPI using an attiny84 as a display driver from Ben Martin on Vimeo.

Shown in the video above is the attiny84 being used as a display driver by an Arduino Uno. Sorry about having the ceiling fan on during capture. On the Uno side I have a C++ "shim" class that has the same interface as the class used to drive the OLED locally. The main difference is that you have to tell the shim class which pin to use as chip select when it wants to talk to the attiny84. On the attiny, commands come in over SPI and the real library that can drive the OLED screen is used to issue them to the screen.
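A rough sketch of the shim idea, in Python rather than the real C++: the local driver and the SPI shim expose one interface, and the shim simply serializes each call onto the bus. The opcode values and the `transfer` transport below are invented for illustration; only the shared-interface pattern is from the post.

```python
# The "shim" pattern: caller code can't tell whether the OLED is attached
# locally or driven by an attiny at the far end of an SPI bus.
# CMD_* opcodes and the bus API are illustrative assumptions.
CMD_CLEAR, CMD_WRITE = 0x01, 0x02

class OledLocal:
    """Stand-in for the real local OLED driver class."""
    def __init__(self):
        self.lines = []
    def clear(self):
        self.lines = []
    def write(self, text):
        self.lines.append(text)

class OledShim:
    """Same interface, but each call becomes a framed SPI transfer."""
    def __init__(self, bus, cs_pin):
        self.bus, self.cs_pin = bus, cs_pin
    def clear(self):
        self.bus.transfer(self.cs_pin, bytes([CMD_CLEAR]))
    def write(self, text):
        payload = text.encode("ascii")
        # opcode, length, then the text bytes
        self.bus.transfer(self.cs_pin, bytes([CMD_WRITE, len(payload)]) + payload)
```

Swapping `OledLocal` for `OledShim` is then invisible to the rest of the sketch, which is the whole point.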

The fuss about that green LED is that when it goes out, I've cut the power to the attiny84 and the OLED. Running both of them with a reasonable amount of text on screen uses about 20mA; with the screen completely dark and the tiny asleep that drops to 6mA, and it can go below 2mA if I turn off the internal power in the OLED. Unfortunately I haven't worked out how to turn that internal power back on again other than resetting the power to the OLED completely. But being able to drop power entirely means that the display is an optional extra and there is no power drain when there is nothing I want to see.

Another way to go with this is using something like an MCP23S17 chip as a pin muxer and directly controlling the screen from the Uno. The two downsides to that design are the need to modify the real OLED library to use the pin muxer over SPI, and that you don't gain the use of a dedicated MCU to drive the screen. An example of the latter is adding a command to scroll through 10 lines of text: the Uno could issue that command and then forget about the display completely while the attiny handles the details of scrolling and updating the OLED.

Some issues in doing this were working out how to tell the tiny to go into SPI slave mode, then getting non-garbage from the SPI bus when talking to the tiny, and then working out acceptable delays for key times. When you send a byte to the tiny, the ISR that accepts that byte takes "some time" to complete. Even if you are using a preallocated circular array to dispense with the new byte as quickly as possible, the increment and modulo operations take time; time that becomes noticeable when the tiny is clocked at 8MHz, the Uno at 16MHz, and you ramp up the SPI clock speed without mercy.

As part of this I also made a really trivial SPI calculator for the attiny84. By trivial I mean store, add, and print are the only operations, and there is only a single register for the current value. But it does show that the code interacting with the SPI bus on the client and server sides gets what one would expect from a sane adding machine. I'll most likely put this code up on github once I clean it up a little.
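The calculator's behaviour, as I read it from the description, fits in a few lines. The store/add/print operations against a single register are from the post; the opcode numbers are invented placeholders.

```python
# The trivial SPI calculator protocol: three operations against one register.
# Opcode values are illustrative assumptions; only the behaviour is from the post.
OP_STORE, OP_ADD, OP_PRINT = 1, 2, 3

class TinyCalc:
    """Models the attiny84 end: dispatch one (opcode, value) command at a time."""
    def __init__(self):
        self.register = 0
    def handle(self, op, value=0):
        if op == OP_STORE:
            self.register = value
        elif op == OP_ADD:
            self.register += value
        elif op == OP_PRINT:
            return self.register
```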

Friday, November 15, 2013

Sparkfun vs1063 DSP breakout

The Sparkfun vs1063 breakout gives you a vs1063 chip with a little bit of supporting circuit. You have to bring your own microcontroller, sdcard or data source, and level shifting.


One thing which, to my limited knowledge, seems unfortunate is that the VCC on the breakout has to be 5V. There are voltage regulators on the vs1063 breakout which produce 3.3V, 2.8V, and 1.8V. Since all of the vregs are connected to VCC and the board wants to make its own 3v3, I think you have to give it 5V as VCC.

With the microsd needing to run on 3.3V, I downshifted the outbound SPI lines, the sdcard chip select, and the few input pins to the vs1063 board. Those are the two little red boards on the breadboard. The sdcard is simply on a breakout which does no level shifting itself. The MISO pin is good to go without shifting because 3.3V registers as a logic high on a 5V input. Likewise the interrupt pin DREQ, which goes to pin 2 on the Uno, doesn't have any upshifting.

I had a few issues getting the XDCS, XCS, and DREQ to all play well from the microcontroller. A quick and nasty hack was to attach that green LED in the middle of the photo to the interrupt line so I could see when it was tripped. During playback it gives a PWM effect as 32-byte blocks of data are shuffled to the vs1063 while it keeps playing. DREQ fires when the vs1063 can take at least another 32 bytes of data from the SPI bus into its internal buffer. Handy for having the Arduino do other things and also service that interrupt as needed.
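The playback loop implied by that description can be sketched as: whenever DREQ is high, the decoder can accept at least another 32 bytes, so shovel one block and re-check. `dreq_high` and `spi_send` are stand-ins for the real pin read and SPI write; the 32-byte chunking is from the post.

```python
# Feeding the vs1063: DREQ high means "I can take at least 32 more bytes".
# dreq_high() and spi_send() are placeholders for the real hardware calls.
CHUNK = 32

def feed(data, dreq_high, spi_send):
    """Send `data` to the decoder in DREQ-gated 32-byte blocks; returns bytes sent."""
    sent = 0
    while sent < len(data):
        if not dreq_high():
            continue            # decoder busy; real code would do other work here
        chunk = data[sent:sent + CHUNK]
        spi_send(chunk)
        sent += len(chunk)
    return sent
```

In the real sketch this check hangs off the DREQ interrupt rather than a polling loop, which is exactly why having the LED on that line was so handy.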

I'm hoping to use a head-to-tail 3v3 design for the mobile version of the circuit. I would have loved to use 2xAA batteries, but might have to go another way for power. Unfortunately the OLED screen I have is 5V, but there are 3v3 ones of those floating around, so I can get a nice modern low power display on there.

The next step is likely to prototype UI control for this, probably using the 5V OLED in the meantime to get a feel for how things will work. I get the feeling that an attiny might sit between the main Arduino and the parallel OLED screen so the screen can be addressed on the SPI bus too. Hmm, an attiny going into major power save mode until chip selected back into life.

Saturday, November 9, 2013

RePaper 2.7 inch epaper goodness from the BeagleBone

A little while back I bought a rePaper 2.7 inch eInk display. While the smaller screens, down to 1.4 inch, have few enough pixels to be driven from an Arduino, the 264x176 screen needs about 5.7KB for a single frame buffer, and you need two buffers to "wax on, wax off" the image on the display in order to update it. The short story is that these displays work nicely from the BeagleBone Black. You have to have a fairly recent kernel in order to get the right sys files for the driver. Hint: if you have no "duty" file for your PWM then your kernel is too old.
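The buffer arithmetic behind "too big for an Arduino" is quick to check: at 1 bit per pixel, one frame of the 264x176 panel is 5808 bytes, and the two-buffer update dance doubles that, well past the 2KB of SRAM on an Uno-class ATmega328.

```python
# Frame buffer sizing for the 2.7 inch rePaper panel: 1 bit per pixel,
# two buffers needed for the old-image/new-image update sequence.
WIDTH, HEIGHT = 264, 176
UNO_SRAM = 2 * 1024          # ATmega328 SRAM

frame_bytes = WIDTH * HEIGHT // 8
both_buffers = 2 * frame_bytes
```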

So the first image I chose to display after epd_test was a capture of fontforge editing Cantarell Regular. Luckily I've made no changes to the splineset, so my design skills are not part of the image. The rendering of splines in fontforge's charview uses antialiasing, as it was switched over to cairo around a year ago. As the eInk display is monochrome, the image is dithered back down to 1 bit for display.
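That crush-to-1-bit step can be sketched with an ordered (Bayer) dither; this is a minimal illustration of the idea, and the rePaper tooling may well use a different dithering method.

```python
# Reducing an antialiased grayscale image to the 1-bit depth the eInk panel
# wants, using a 2x2 ordered-dither matrix. A sketch of the general technique,
# not necessarily the algorithm the rePaper demos use.
BAYER_2X2 = [[0, 2],
             [3, 1]]

def dither_1bit(gray, width, height):
    """gray: row-major list of 0-255 values; returns row-major 0/1 pixels."""
    out = []
    for y in range(height):
        for x in range(width):
            # per-pixel threshold varies across the 2x2 tile
            threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) * 255 / 4
            out.append(1 if gray[y * width + x] > threshold else 0)
    return out
```

Mid-gray input comes out as a checkerboard-ish mix of on and off pixels, which is exactly the effect visible in the dithered fontforge capture.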

With the real-time collaboration support in fontforge, this raises the new possibility of seeing a font rendered on eInk as you design it (or hint it). I'm not sure how many fonts are being designed with eInk as the specific target medium. If you are interested in font design, check out Crafting Type, which uses fontforge to create new type; you should also be able to see the collaboration and HTML preview modes in action.

Getting the actual eInk display to go from the BeagleBone had a few steps. Firstly, I managed to completely fill up the 2GB of eMMC where my Angstrom was installed, so now I'm running the whole show off a high speed 8GB sandisk card. I spent a little extra cash on a faster card; it's one of the extreme super panda + extra adjective sandisk ones. The older kernel I had didn't have a duty file for the PWM pin that the driver wanted to use. Now that I have a fully updated BeagleBone Black boot area I have that file. FWIW I'm on kernel version 3.8.13-r23a.49.

Trying out epd_test initially showed me some broken lines and, after a little while, what looked like a bit of the cat from the test image. After rechecking the wiring a few times I looked at the code and saw it was expecting a 2 inch screen. That happens in a few places in the code, so I changed those to reflect my hardware. Then the test loop ran as expected!

The next step was getting the FUSE driver installed (the change for screen size is needed there too). Then the python demos could run, and thus the photo above was made. My next step is to create a function to render cairo to /dev/epd/display in order to drive the display directly from a cairo app.

A huge thank you to rePaper for making this so simple to get going. The drivers for the Raspberry Pi and BeagleBone are up on their github page. I had been looking at the Arduino driver and its SPI code, thinking about porting that over to Linux, but now that's not necessary! I might design some cape love for this, perhaps with a 14 pin IDC connector on it for eInk attaching. Shouldn't look much worse than last night's SPI-only monster, though something etched would be nicer.



The 2.7 inch changes are below, the first one is just slightly more verbose error reporting. You'll also want to set EPD_SIZE=2.7 in /etc/init.d/epd-fuse.

diff --git a/PlatformWithOS/BeagleBone/gpio.c b/PlatformWithOS/BeagleBone/gpio.c
index b3ded6f..d1df3df 100644
--- a/PlatformWithOS/BeagleBone/gpio.c
+++ b/PlatformWithOS/BeagleBone/gpio.c
@@ -767,7 +767,7 @@ static bool PWM_enable(int channel, const char *pin_name) {
                                usleep(10000);
                        }
                        if (pwm[channel].fd < 0) {
-                               fprintf(stderr, "PWM failed to appear\n"); fflush(stderr);
+                               fprintf(stderr, "PWM failed to appear pin:%s file:%s\n", pin_name, pwm[channel]
                                free(pwm[channel].name);
                                pwm[channel].name = NULL;
                                break;  // failed
diff --git a/PlatformWithOS/demo/EPD.py b/PlatformWithOS/demo/EPD.py
index da1ef12..41cc6c1 100644
--- a/PlatformWithOS/demo/EPD.py
+++ b/PlatformWithOS/demo/EPD.py
@@ -48,8 +48,8 @@ to use:

     def __init__(self, *args, **kwargs):
         self._epd_path = '/dev/epd'
-        self._width = 200
-        self._height = 96
+        self._width = 264
+        self._height = 176
         self._panel = 'EPD 2.0'
         self._auto = False

diff --git a/PlatformWithOS/driver-common/epd_test.c b/PlatformWithOS/driver-common/epd_test.c
index e2f2b5a..afe3cb8 100644
--- a/PlatformWithOS/driver-common/epd_test.c
+++ b/PlatformWithOS/driver-common/epd_test.c
@@ -72,7 +72,7 @@ int main(int argc, char *argv[]) {
        GPIO_mode(reset_pin, GPIO_OUTPUT);
        GPIO_mode(busy_pin, GPIO_INPUT);

-       EPD_type *epd = EPD_create(EPD_2_0,
+       EPD_type *epd = EPD_create(EPD_2_7,
                                   panel_on_pin,
                                   border_pin,
                                   discharge_pin,

Wednesday, October 23, 2013

Open Logic Sniffing

Since I've started doing a little I2C/SPI work, I finally got hold of the gdb of the wire: a logic sniffer. The poor little 8 pin chip below is a small EEPROM to which I'm doing a read/write cycle every second from the BeagleBone Black. The sniffer is an Open Workbench Logic Sniffer, which is available for around $50. It was a hard choice between that and the more expensive sniffers, because a longer capture buffer is usually a handier capture buffer. Though if there are no artificial delays on the bus then I think the OWLS will probably capture the interesting stuff that is happening.



The first trick was working out how to use triggers in a simple way, which for me was: if I find a 'read' command (byte=3) on MOSI then that's a trigger. I've told the bone to clock the SPI back to 5kHz and the sniffer to sample at 10kHz, which gives me about 2.46 seconds of capture time. So I will surely see two read/write iterations one second apart.
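The capture-window sums are worth writing down. The post gives the 2.46 second figure; the 24,600-sample buffer depth below is back-calculated from that figure and the 10kHz sample rate, not something I've confirmed against the OWLS specs.

```python
# Capture window arithmetic: seconds of capture = buffer depth / sample rate.
# BUFFER_SAMPLES is inferred from the 2.46 s figure in the post, not verified
# against the sniffer's documentation.
SAMPLE_RATE = 10_000        # Hz, sniffer sample rate
BUFFER_SAMPLES = 24_600     # inferred depth

capture_seconds = BUFFER_SAMPLES / SAMPLE_RATE
```

Anything over two seconds is enough to guarantee catching two of the once-per-second EEPROM transactions.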

I haven't worked out how to tell the software to ignore lines 1, 2, 4, and 5 so they don't show up as noise, so for now those physical snouts are explicitly grounded. After running the SPI analyser in mode 0 I get the below. The only MISO use is on byte 4, where the old value of 0x1 is read from the EEPROM. The activity on the right is setting write enable (0x6), writing the new value (0x2), and then write disable (0x4). It's handy to see that chip select is held and released around each of those three write related tasks, as the enable/disable have to be single byte commands where CS is dropped before they take effect.
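The three chip-select transactions visible in the capture can be written out explicitly, using the opcodes seen on the wire (WREN=0x06, WRITE=0x02, WRDI=0x04). Each inner tuple below is one CS-low-to-CS-high frame; the 16-bit address in the WRITE frame is my assumption about this particular EEPROM, since the part number isn't given.

```python
# One-byte EEPROM write as three separate chip-select frames. WREN and WRDI
# must be frames of their own, with CS raised in between, or the part ignores
# them. The 16-bit address field is an assumption about this EEPROM.
WREN, WRITE, WRDI = 0x06, 0x02, 0x04

def write_byte_frames(addr, value):
    """Return the CS-framed byte sequences for writing `value` at `addr`."""
    return [
        (WREN,),                                          # write enable, own frame
        (WRITE, (addr >> 8) & 0xFF, addr & 0xFF, value),  # opcode + address + data
        (WRDI,),                                          # write disable, own frame
    ]
```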


The conversation is available in byte format in the analyse window shown below. Here you can see the read value go from 3 to 0 to 1. This is because I triggered on the first read, and the first read got back 3 because that is where I stopped the test program last time (ie, it left 3 at the nominated EEPROM address).


Now to get sigrok to have a sniff too.

Monday, July 29, 2013

GDrive mounting released!

So version libferris-1.5.18.tar.xz is hot off the make dist, including the much-discussed support for mounting Google Drive. The last additional feature I decided to add before rolling the tarball was support for viewing and adding to the sharing information of a file. It didn't do much for me to be able to "cp" a file to google://drive without being able to unlock it for the given people I want to have access to it. So now you can do that from the filesystem as well.

So, since the previous posts have been about the GDrive API and various snags I ran into along the way, this post is about how you can actually use this stuff.

Firstly, run up the ferris-capplet-auth app and select the GDrive tab. I know I should overhaul the UI for this auth tool, but since it's mostly used only once per web service I haven't found the personal desire to beautify it. Inside the GDrive tab, clicking on the "Authenticate with GDrive" button opens a dialog (it should become a wizard). The first thing to do, as it tells you, is visit the console page on google to enable the GDrive API. Then click or paste the auth link in the dialog to allow libferris to get its hands on your data. The auth link goes to google and tells you what libferris is asking for. When you OK that, you are given a "code" that you have to copy and paste back into the lower part of the dialog window. Then OKing the dialog will have libferris get a proper auth token from google and you are all set.

So to get started the below command will list the contents of your GDrive:

$ ferrisls google://drive


To put a file up on there you can do something like:

$ date >/tmp/sample.txt
$ ferriscp /tmp/sample.txt google://drive


And you can get it back with cat if you like. Or ferriscp it somewhere else etc.

$ fcat google://drive/sample.txt
Mon Jul 29 17:21:28 EST 2013


If you want to see your shares for this new sample file use the "shares" extended attribute.

$ fcat -a shares google://drive/sample.txt
write,monkeyiq

The shares attribute is a BINEBO (Bytes In Not Equal Bytes Out). Yay for me coining new terms! This means that what you write to it is not exactly what you will get when you read back from it. The handy part is that if you write an email address into the extended attribute, you are adding that person to the list of folks who can write to the file. Because I'm using libferris without FUSE, and bash doesn't understand libferris URLs, I have to use ferris-redirect in the below command. You can think of ferris-redirect like shell redirection (>), but you can also supply, with -a, the extended attribute to redirect data into. If I read back the shares extended attribute I'll see a new entry in there. Google will also have sent a notification email to my friend with a link to the file.

$ echo niceguy@example.com \
   | ferris-redirect -a shares google://drive/sample.txt
$ fcat -a shares google://drive/sample.txt
write,monkeyiq
write,Really Nice Guy

I could also hook this up to your "contacts", so your evolution addressbook nicknames or google contacts could be used to look up a person. In this case, with names changed to protect the innocent, google thinks the name for that email address is Really Nice Guy because he is in my contacts on gmail.

All of this extends to the other virtual filesystems that libferris supports. You can "cp" from your scanner or webcam, or a tuple of a database, directly to google drive if that floats your boat.

I've already had a bit of a sniff at the dropbox API and others, so you might be able to bounce data between clouds in a future release.

Saturday, July 27, 2013

The new google://drive/ URL!

The very short story: libferris can now mount Google Drive as a filesystem. I've placed that in google://drive and will likely make an alias from gdrive:// to that same location so either will work.

The new OAuth 2.0 standard is so much easier to use than the old 1.0 version. In short, after being identified and given the nod once by the user, in 2.0 you have to supply a single secret; in 1.x you have to use a per-message nonce, create hashes, send the key and token, and so on. The main drawback of 2.0 is that you have to use TLS/SSL for each request to protect that single auth token. A small price to pay, as you might well want to protect the entire conversation anyway when doing things that require authentication.

A few caveats of the current implementation: mime types on uploaded files are based on file name sniffing. That is because for an upload you might be using cp foo.jpg google://drive, and the filesystem just copies the bytes over, but GDrive needs to know the mimetype for that new File at creation time. The GDrive PATCH method doesn't seem to let you change the mimetype of a file after it has been sent. A better solution will involve the cp code prenotifying the target location so that some metadata (the mimetype) can be prefetched from the source file if desired. That would allow full byte sniffing to be used.

Speaking of PATCH: if you change metadata using it, you always get back a 200 response. No matter what. Luckily you also get back a JSON string with all the metadata for the file you have (tried to) update. So I've made my PATCH caller code ignore the HTTP response code and compare the returned file JSON to see if the changes actually stuck. If a value isn't set as expected, my PATCH raises an exception. This is in contrast to the docs for the PATCH method, which claim that the file JSON is only returned "if successful".
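That workaround reduces to a small verification step, sketched below: ignore the status code, diff the metadata you asked for against the File JSON that came back, and raise if anything was silently dropped. This is a toy model of the approach described, not the actual libferris C++ code.

```python
# Verifying a GDrive-style PATCH by comparison rather than status code:
# the server answers 200 regardless, so the returned file JSON is the only
# trustworthy signal. Sketch of the approach, not the libferris implementation.
def verify_patch(requested, returned_json):
    """requested: dict of metadata we tried to set; returned_json: parsed reply."""
    failed = {k: returned_json.get(k)
              for k, v in requested.items()
              if returned_json.get(k) != v}
    if failed:
        raise RuntimeError("PATCH silently dropped: %r" % failed)
    return True
```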

Oh yeah, one other tiny thing about PATCH: if you patch the description, it didn't show up in Firefox for me until I refreshed the page. Changing the title does update the Firefox UI automatically. I guess the side panel for the description hasn't got the funky web notification love yet.

There are two ways I found to read a directory: files/list and children/list. Unfortunately the latter, while returning only the direct children of a folder, also returns only a few pieces of information for those children, the most interesting being the child's id. On the other hand, files/list gives you almost all the metadata for each returned File. So on a slower link, one doesn't need thinking music to work out whether one round trip or two is the desired number. files/list also returns metadata for files that have been deleted, and files which others have shared with you. It is easy to set a query, "hidden = false and trashed = false", for files/list so it does not return those dead files. Filtering on the server exclusively for files that you own is harder: there is a query alias sharedWithMe, but no OwnedByMe to return the complementary set. I guess perhaps "not sharedWithMe" would == OwnedByMe.

Currently I more or less ignore the directory hierarchy that files/list returns, so all your drive files appear directly in google://drive/ instead of in subdirectories as appropriate. I might leave that restriction in the first release. It's not hard to remove, but I've been focusing on upload, download, and metadata change.

Creating files, updating metadata, and downloading files from GDrive all work and will be available in the next libferris release. I have one other issue to cleanup (rate limiting directory read) before I do the first libferris release with gdrive mounting.

Oh, and big trap #2 for the young players: to actually *use* libferris on gdrive after you have done the OAuth 2.0 "yep, libferris can have access" dance, you have to go to code.google.com/apis/console and enable the Drive API for your account, otherwise you get access denied errors for every call. And once you go to the console and do that, you'll have to OAuth again to get a valid token.

A huge thank you to the two people who contributed to the ferris fund raising after my last post proposing mounting Google Drive!

Monday, July 22, 2013

Mounting Google Drive?

So, on the heels of resurrecting and expanding the support for mounting vimeo as a filesystem using libferris, I started digging into mounting Google Drive. As is normally the case for these things, the plan is to start out with listing files, then uploading files, then downloading files, then updating the metadata for files, then rename, then delete, and then funky stuff like "tail -f" and append-instead-of-truncate on upload.

One plus of all this is that the index & search in libferris will then extend its claws to GDrive as well as desktop files, as I&S is built on top of the virtual filesystem and uses the virtual filesystem to return search results.

For those digging around, maybe looking to do the same thing, see the oauth page for desktop apps; the meat seems to be in the Files API section. Reading over some of the API, the docs are not too bad. The files.watch call is going to take some testing to work out what is actually going on there. I would like to use the watch call for implementing "tail -f" semantics on the client, which is in turn most useful with open(append) support. The latter I'm still tracking down in the API docs, if it is even possible. PUT seems to update the whole file, and PATCH seems very much oriented towards partial metadata updates.

The trick that libferris uses of exposing the file content through the metadata interface seems to be less used by other tools. With libferris, using fcat and the -a option to select an extended attribute, you can see the value of that extended attribute. The content extended attribute is just the file's content :)

$ date > df.txt
$ fcat -a name df.txt
df.txt
$ fcat -a mtime-display df.txt
13 Jul 23 16:33
$ fcat -a content df.txt
Tue Jul 23 16:33:51 EST 2013

Of course you can leave out the "-a content" part to get the same effect, but anything that wants to work on an extended attribute will also implicitly be able to work on the file's byte content via this mechanism.
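The content-as-attribute trick is easy to model in miniature: one attribute getter serves both real metadata and the byte content, so any tool written against attributes handles file bodies for free. This is a toy illustration of the idea, not libferris internals.

```python
# Toy model of "content is just another extended attribute": a single lookup
# path serves both metadata and the file's bytes. Not the libferris code.
def get_attribute(file, name):
    if name == "content":
        return file["content"]
    return file["attrs"][name]

f = {"content": "Tue Jul 23 16:33:51 EST 2013\n",
     "attrs": {"name": "df.txt"}}
```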

If anyone is interested in hacking on this stuff (: good ;) patches accepted. Conversely if you would like to be able to use a 'cp' like tool to put and get files to gdrive you might consider contributing to the ferris fund raising. It's amazing how much time these Web APIs mop up in order to be used. It can be a fun game trying to second guess what the server wants to see, but it can also be frustrating at times. One gets very used to being able to see the source code on the other side of the API call, and that is taken away with these Web thingies.

Libferris is available for Debian armhf (hard float) and Debian armel (soft float). I've just recently used the armhf build to install ferris on an OMAP5 board. I also have a build for the Nokia N9, and will update my Open Build Service project to roll fresh rpms for Fedora at some stage. The public OBS desktop targets have fallen a bit behind the ARM builds because I tend to develop on, and thus build from source on, the desktop.

Saturday, July 20, 2013

Like a Bird on a Wire(shark)...

Over recent years, libferris has been using Qt to mount some Web stuff as a filesystem. I have a subclass of QIODevice which acts as an intermediary, allowing one to write to a std::ostream and stream that data to the Web, over a POST for example. For those interested, that code is in Ferris/FerrisQt.cpp in the tarball. It's a bit of a shame that the Qt-heavy web code isn't in KIO, or that the two virtual filesystems are not more closely linked, but I digress.

I noticed a little while ago that cp to vimeo://upload didn't work anymore. I had earmarked that for fixing and recently got around to making it happen. It's always fun interacting with these Web APIs. Over time I've found that Flickr sets the bar for well documented APIs that you can start to use if you have any clue about making GET and POST calls. At one stage google had documented their API in a way that you could never use it. I guess they have fixed that by now, but it did sort out the pretenders from those who could at least sniff HTTP and were determined to win. The vimeo documentation IIRC wasn't too bad when I added upload support, but the docs have taken a turn for the worse it seems. Oh, one fun tip for the young players: when one API call says "great, thanks, well done, I've accepted your call" and then a subsequent one says "oh, a strange error has happened", you might like to assume that the previous call might not have been so great after all.

So I started tinkering around, adding oauth to the vimeo signup and getting the getTicket call to work. Having getTicket working meant that my oauth-signed call was accepted too. I was then faced with the upload of the core data (which is normally done with a rather complex streaming POST), and the final "I'm done, make it available" call. On vimeo that last call seems to be two calls now: first a VerifyChunks call and then a Complete call.

So, first things first. To upload, you call getTicket, which gives you an endpoint (an HTTP URL to send the actual video data to) as well as an upload ticket to identify the session. If you POST to that endpoint URL with the CGI parameters encoded as multipart/form-data, with boundaries turning each parameter into its own Content-Disposition: form-data element, you lose. You have to have the ticket_id in the URL after the POST text in order to upload. One little trap.
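That trap, restated as code: the ticket_id has to ride in the query string of the request URL itself, not as a multipart form field in the body. The endpoint below is a placeholder; ticket_id is the parameter name from the post, but I haven't verified it against current vimeo docs.

```python
# Building the upload URL with the ticket in the query string, rather than
# burying it in a multipart/form-data body. Endpoint is a made-up placeholder.
from urllib.parse import urlencode

def upload_url(endpoint, ticket_id):
    """Append ticket_id as a query parameter to the getTicket endpoint."""
    return endpoint + "?" + urlencode({"ticket_id": ticket_id})
```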

So then I found that verifyChunks was returning "Error 709 Access to the chunk list failed", and that was after the upload had been replied to with "OK. Thanks for the upload.". Oddly, I also noticed that the upload of video data would hang from time to time. So I let the shark out of the pen again, and found that vimeo would return its "yep we're done, all is well" response to the HTTP POST at about 38-42KB into the data. Not so great.

Mangling the vimeo.php test they supply to upload with my oauth and libferris credentials, I found that the POST had an Expect: 100-continue header. Right after the headers were sent, vimeo gave the nod to continue, and only then was the POST body sent. I assume that just ploughing through and sending the headers immediately followed by the body confused the server end, so it just said "yep, ok, thanks for the upload" and dropped the line. Then of course it forgot the ticket_id because there was no data for it, so verifyChunks had no chunk list and returned the strange error it did. mmm, hindsight!

So I ended up converting from POST to the newly available PUT method for upload. They call that their "streaming API", even though you can of course stream to a POST endpoint too; you just need to frame the parameters and add the MIME trailer to the POST if you want to stream a large file that way. Using PUT I was then able to verify my chunks (or the one single chunk, in fact) and the upload complete method worked again.

In the end I've added oauth to my vimeo mounting, many thanks to the creators of the QOAuth library!

Friday, June 7, 2013

BeagleBone Black: Walking the dog.

My software-guy-with-a-soldering-iron fun has recently extended to the BeagleBone Black. This is a wonderful little ARM machine with a 1GHz CPU, a whole bunch of GPIO pins, I2C, SPI, AIN... all the fun things, packed into a $45 board.



On an unrelated purchase, I got a small 1.8 inch TFT display that can do 128x160 with a bunch of colours using the st7735 chip. That's shown above running the qtdemo on the framebuffer; of course, an animation might serve to show that off better. The display was on sale for $10, and so it was then on its way to me :) My original plan was to drive it from an Arduino... Looking around, I noticed that Matt Porter had generously contributed a driver to run the st7735 over SPI from the Linux kernel. The video of him talking at ELC about this framebuffer driver was also very informative :) It seems the same TFT can be run from the Raspberry Pi or BeagleBone series of hardware.

The wiring for the panel I got was a bit different from the adafruit one that Matt used, but once you have the pinouts it's not so hard to figure out. I've currently left the 5V rail unconnected on my TFT. On the BeagleBone Black, the HDMI output captures a whole bunch of pins when it starts, and unfortunately some of those pins are needed for the little TFT. One might be able to reroute the SPI to the other bus, or mux the pins differently, to get around that and have HDMI and the TFT at once. But I wanted to get the TFT going to see if and how it worked before changing the pins.

I had found some info on putting a line in uEnv.txt to stop the HDMI cape from loading, but that didn't work for me. On my board I saw in /sys/devices/bone_capemgr.9/slots that the HDMI was the 5th cape. When I first echoed "-5" into the slots file to unload that cape, the kernel gave a backtrace. If I did the same on a freshly booted bone it would cleanly remove the HDMI cape, though. So something had been using the HDMI cape driver and didn't want to be removed.

With the HDMI cape unloaded the next step is to load a "firmware" file that reserves the pins that the st7735fb driver wants to use. Since I used the same pins on the bone as the adafruit display wants I could just use the below.

echo cape-bone-adafruit-lcd-00A0  >  /sys/devices/bone_capemgr.9/slots

A dmesg showed that a new framebuffer device fb0 had come into existence.

[   85.280471] bone-capemgr bone_capemgr.9: slot #6: Requesting firmware 'cape-bone-adafru-00A0.dtbo' for board-name 'Override Board Name', version '00A0'
[   85.284645] bone-capemgr bone_capemgr.9: slot #6: dtbo 'cape-bone-adafru-00A0.dtbo' loaded; converting to live tree
...
[   86.235178] fb0: ST7735 frame buffer device,
[   86.235178]  using 40960 KiB of video memory
[   86.236687] bone-capemgr bone_capemgr.9: slot #6: Applied #5 overlays.

After a bunch of searching around trying various things, I found that prescaling in mplayer can display to the framebuffer:

# mplayer -ao null  -vo fbdev2:/dev/fb0 -x 128 -y 160 -zoom  ArduSat_Open_Source_in_orbit.mp4

The qtdemo also runs "ok" by executing the below. I say ok because it obviously expects a higher resolution display than 128x160...

qtdemoE -qws

It is tempting to have two screens and add a touch sensitive film to them. With a QML/QtQuick/TodaysRebrand^TM interface the GUI should work well and be flickable to many screens.

A great hack I look forward to is running a 32x16 LED DMD using a deferred rendering framebuffer driver like the st7735fb does. I see the evil plan now: release the BeagleBone Black for $45 and draw more C/C++ programmers to being kernel hackers rather than userland ones :)

Wednesday, May 29, 2013

FontForge: Rounding out the platforms for binary distribution

Earlier this year I made it simple to install FontForge on OSX. The process boiled down to expanding a zip file into /Applications. The libraries that fontforge uses have all been tweaked to work from inside the package, and the configuration files, themes, and other dynamically opened resources are sought in the right place too.

Now after another stint I have FontForge running under 32bit Windows 7. So finally I had a use for that other OS sitting on my laptop for all this time ;) The first time I got it to run it looked like below. I created a silly glyph to make sure that bezier editing was responsive...


The plan is to have the theme in use so nice modern fonts are used in the menu, and other expected tweaks before making it a simple thing to install on Windows.

One, IMHO, very cool thing I did to get all this happening was to use the openSUSE Build Service (OBS) to make the binaries. There are some DLL and header file drops for X floating around, but I tend to like to know where the libraries being linked into the program have come from. Call me old fashioned. So in the process I cross compiled chunks of X Window for Windows on the OBS servers. My OBS win32 support repository contains these needed libraries, right through to cairo and pango using the Xft backends to render.

There is a major schism there: if you are porting a native GTK+2 application over to win32, then you will naturally want to use the win32 backends to cairo et al and have a more native win32 outcome. For FontForge however, the program wants to use the native X Window APIs and the pango xft backend. So you need to be sure that you can render text to an X Window using pango's xft backend to make your life simpler. That is what the pangotest project I created does: just put "hello world" on an X Window using pango-xft.

A big thanks to Keith Packard who provided encouragement at LCA earlier this year that my crazy cross compile on OBS plan should work. I had a great moment when I got xeyes to run, thinking that things might turn out well after the hours and hours trying to cross compile the right collection of X libraries.

I should also mention that I'm looking for a bit of freelance hacking again. So if you have an app you want to also run on OSX/Windows then I might be the guy to make that happen! :) Or if you have cool C/C++ work and are looking to expand your team then feel free to email me.

Sunday, May 19, 2013

Some amateur electronics: hand made 8x8 LED matrix

So I made an 8x8 matrix of LEDs in a common cathode arrangement. Only one column is ever on at any time, but they cycle from left to right so quickly that you and your camera can't see that little artefact. This does save on power though, so the whole layer can be run directly from the arduino LeoStick in the top right of the picture. Thanks again to Freetronics for giving those little gems away at LCA!


8x8 LED matrix, two 595 shifties and a ULN2003 current sink from Ben Martin on Vimeo.

The LEDs to light on a row are selected by a 595 shift register providing power for each row. The resistors are on the far right of the grid leading to that shift register. The cathodes for each individual column are connected together leading to the top of the grid (as seen in the video). Those head over to a uln2003 current sink IC. Since the uln2003 only has seven darlington channels, in the future I'll use either two 2003 chips or a single 2803 (which can do all 8 columns at once) to get the first column to light up too.

The uln2003 is itself controlled by supplying power on the opposite side to select which column's cathodes will be grounded at any given moment. Control of the uln2003 is also done by a 595 shift register, which is daisy chained to the row shifty. The joy of all this is you can pump in the new state and latch both shift registers at once to apply the new row pattern and select which column is lit.
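The way two daisy-chained 595s end up holding the column select and row pattern bytes can be modeled in a few lines. This is a toy simulation under my own names, not anything from the actual sketch: bits clocked into the near chip ripple out of its serial-out pin into the far chip, and both latch together.

```python
# Toy model of two daisy-chained 74HC595s: bits shifted into the near chip
# overflow into the far chip, and pulsing the latch updates both outputs
# at once. Class and method names are mine, for illustration only.

class ShiftChain:
    def __init__(self, n=2):
        self.shift = [0] * n   # internal shift registers, near chip first
        self.latch = [0] * n   # output pins seen by the LEDs / uln2003

    def shift_out(self, byte):
        """Clock in one byte MSB-first; displaced bits ripple onward."""
        for i in range(7, -1, -1):
            carry = (byte >> i) & 1
            for n in range(len(self.shift)):
                new_carry = (self.shift[n] >> 7) & 1   # bit falling off
                self.shift[n] = ((self.shift[n] << 1) | carry) & 0xFF
                carry = new_carry

    def pulse_latch(self):
        self.latch = list(self.shift)

chain = ShiftChain()
chain.shift_out(0b00000100)  # column select: ends up in the far chip
chain.shift_out(0b10110001)  # row pattern: stays in the near chip
chain.pulse_latch()
print(chain.latch)  # → [177, 4]: row byte, then column select byte
```

Because the outputs only change on the latch pulse, the display never shows a half-shifted state, which is exactly the "pump in the new state and latch at once" property described above.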

The joy of this design is that I can add 8x8 layers on top at the cost of 8 resistors and one 595 to perform row select.

There are also some still images of the array if your interest is piqued.

The 595 chips can be had for around 40c a pop and the uln2003 for about 30c. LEDs in quantity 500+ go at around 5-7c a pop.

The code is fairly unimaginative, mainly to see how well the column select works and how detectable it is. In the future I should set up a "framebuffer" to run the show and have a timer refresh the array automatically...

#define DATA   6
#define LATCH  8
#define CLOCK 10  // digital 10 to pin 11 on the 74HC595

void setup()
{
  pinMode(LATCH, OUTPUT);
  pinMode(CLOCK, OUTPUT);
  pinMode(DATA, OUTPUT);
}

void loop()
{
  int i;
  // 'i' is the row pattern; walk it through all 256 values.
  for ( i = 0; i < 256; i++ )
  {
    int col;
    // One pass lights each column in turn with the same row pattern.
    for ( col = 1; col < 256; col <<= 1 )
    {
      digitalWrite(LATCH, LOW);
      shiftOut(DATA, CLOCK, MSBFIRST, col ); // column select byte (far 595)
      shiftOut(DATA, CLOCK, MSBFIRST, i   ); // row pattern byte (near 595)
      digitalWrite(LATCH, HIGH);
      // Dwell inside the sweep so every column gets equal on time; with
      // the delay outside the loop only the last column stays visibly lit.
      delay(2);
    }
  }
}
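The "framebuffer" idea mentioned above can be sketched in a few lines. This is a pure-Python model with names of my own invention; on the Arduino the refresh step would live in a timer ISR doing two shiftOut() calls and a latch pulse:

```python
# Model of a timer-refreshed framebuffer for the 8x8 matrix: drawing code
# writes into an 8-byte buffer at leisure, while a periodic refresh walks
# one column per tick. Names are illustrative, not from the real sketch.

fb = [0] * 8          # fb[c] = row bits for column c
col = 0               # column the refresh has reached

def set_pixel(x, y, on=True):
    if on:
        fb[x] |= (1 << y)
    else:
        fb[x] &= ~(1 << y)

def refresh_tick():
    """One timer tick: return the (column select, row byte) pair that the
    ISR would shift out and latch."""
    global col
    out = (1 << col, fb[col])
    col = (col + 1) % 8
    return out

set_pixel(0, 0)
set_pixel(3, 5)
print(refresh_tick())  # → (1, 1): column 0 selected, row bit 0 lit
```

The drawing code and the refresh never need to coordinate beyond sharing the buffer, which is what makes the timer approach attractive.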




Wednesday, May 8, 2013

Save Ferris: Show some love for libferris...

Libferris has been gaining some KDE love in recent times. There is now a KIO slave to allow you to see libferris from KDE, and also the ability to get at libferris from plasma.

I've been meaning to update the mounting of some Web services like vimeo for quite some time. I'd also like to expand to allow mounting google+ as a filesystem and add other new Web services.

In order to manage time so that this can happen more quickly, I thought I'd test the waters with a pledgie. I've left this open ended rather than sticking an exact "bounty" on things. I had the idea of trying a pledgie during my recent investigation into the libferris indexing plugins on a small form factor ARM machine. I'd like to be able to spend more time on libferris, and also pay the rent while doing that, so I thought I'd throw the idea out into the public.

If you've enjoyed the old tricks of mounting XML, Berkeley DB, SQLite, PostgreSQL and other relational databases, flickr, google docs, identica, and others and want to see more then please support the pledgie to speed up continued development. Enjoy libferris!

Click here to lend your support to: Save Ferris: Show some love for libferris and help kick it

Thursday, May 2, 2013

Indexing on limited hardware... what to do

Libferris supports many indexing libraries and technologies through its plugin interface. Larger systems can use a PostgreSQL plugin which is tailored explicitly to get the most out of that RDBMS for larger file server indexes. For the smaller end, there are memory mapped files, clucene, soprano, or SQLite. I've been doing some tinkering lately, trying to milk extra performance out of the indexing plugins for ARM machines. Note that if you are using debian, the CLucene you'll want is the 2.x series, currently only packaged for experimental.

For testing purposes I built a fairly tiny index of only 130k files. An interesting test case is looking for specific files whose paths match a regular expression, returning a fairly small chunk of results; in this case, about 115 resulting files using a four character substring search as the regex. This is a common query when looking for files where you don't recall the exact ordering of the directory names or where a directory was: a small number of results, with a regex to pick them.

The memory mapped index implementation (boostmmap) uses boost IPC and multi indexed collections created in memory mapped files to maintain the index. It also keeps a digram index for each URL, allowing regular expressions to resolve through the index rather than needing evaluation against the full URLs.
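A toy version of that digram prefilter shows why it cuts the number of regex evaluations so dramatically. This is my own minimal sketch of the idea, not libferris code: index every two-character sequence of each URL, intersect the posting sets for the query's digrams, and only run the real regex on the survivors.

```python
# Toy digram prefilter: a substring query can only match URLs that contain
# every digram of the query, so intersecting posting sets leaves a small
# candidate list for the (relatively expensive) regex pass.
import re
from collections import defaultdict

urls = ["file:///tmp/report.pdf",
        "file:///home/ben/notes.txt",
        "file:///home/ben/photos/cat.jpg"]

digrams = defaultdict(set)
for i, u in enumerate(urls):
    for a, b in zip(u, u[1:]):
        digrams[a + b].add(i)

def query(substring):
    cands = set(range(len(urls)))
    for a, b in zip(substring, substring[1:]):
        cands &= digrams.get(a + b, set())
    # Only the candidates get a regex evaluation.
    rx = re.compile(".*" + re.escape(substring) + ".*")
    return [urls[i] for i in sorted(cands) if rx.match(urls[i])]

print(query("note"))  # → ['file:///home/ben/notes.txt']
```

On the real 130k-URL index this is the difference between 124 regex evaluations and 130611, at the cost of a bigger index.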

The SQLite index is fairly vanilla and doesn't include many customizations for sqlite. Whereas the PostgreSQL index implementation does use many of the features specific to that database. Neither the SQLite or boostmmap indexes in the public libferris repo attempt to do any compression on URL strings or the like.

A fairly basic index on 130k files is about 80mb using either memory mapped files or SQLite. Caches are cleared by echo 3 > drop_caches. Using an odroid-u2 with eMMC flash, on a cold cache the SQLite index comes out about 10% faster than the boostmmap for a query finding 115 files. Turning off the regex prefilter index in the boostmmap makes it 10% slower again. This is a trade off: a very fast CPU and a disk with great file locality and single extents will show less or no difference with the prefilter, as reading 80mb from disk will take less time and the CPU can run 130k regexes very quickly. The prefilter required only 124 regex evaluations; without the prefilter all 130611 URLs needed a regex evaluation.

The interesting part is with a warm cache the boostmmap is about twice as fast overall as the SQLite index. This is a big difference as the timing is for overall complete run time from the command line, and there is some overhead in starting up the index query itself. As usual, things vary depending on if you are expecting frequent queries (warm cache), have a very fast CPU (regex eval is relatively less costly), or need multiple updaters (SQLite allows it, my memory mapped doesn't).

To experiment a little further, I brought the ferris clucene plugin into the mix. I disabled the explicit prefilter index on the regex code for initial testing; the index became about 70mb and could resolve the query on a cold cache in about 65% of the time of the SQLite plugin. On a warm cache the clucene was slowest, which is mainly due to the prefilter being disabled and the fallback code making the URL query a WildcardQuery with no prefix or postfix to anchor the query on.

Next time around I'll see how speed effective the prefilter index is on clucene. I know it slows down adding documents (you are indexing more), and is larger (I haven't optimized for index size), but it will be interesting to see the performance on the eMMC device for the prefilter.



Filesystem Indexing: Taking the reins

To index data using a small ARM CPU without much RAM you might like to break the indexing run down into many parts, and get more explicit control over what is happening. The below will index all files on /DATA-PATH in batches of 5000 files at a time with libferris. This will use whatever index plugin you have setup for ~/.ferris/ea-index (the default metadata index). Be that PostgreSQL, SQLite, boost memory mapped files, clucene or whatever.

I'm currently racing the boost memory mapped index with the SQLite backed index on simple URL queries against the filesystem. This is being done on about 2ghz ARM machines with either 512 or 2048mb of RAM. The boostmmap plugin is of my own design and contains some smarts while executing regular expression matching against unanchored strings (.*foo.*). Unfortunately the boostmmap plugin is not as smart as it could be regarding scattered updates, transactions, and journaling, which slows it down a bit in the index creation phase relative to the SQLite plugin.

The below is a skeleton bash script to get started adding files. Another option is to ssh into the remote host and run find(1) there which can be much faster over network filesystems. The whitelist environment variable is to override which metadata libferris will index. If your index indicates it wants sha and md5 digests, the act of calculating those can dominate indexing time. An explicit whitelist keeps index adding times down with the obvious side effect of limiting what you can use in your queries. Such a limited list of metadata as in the below brings the index closer to what locate provides.

#!/bin/bash

rm -rf /tmp/fidxtmp
mkdir -p /tmp/fidxtmp
cd /tmp/fidxtmp

# break the file list into 5000-line chunks named xaa, xab, ...
find /DATA-PATH | split -l 5000

export LIBFERRIS_EAINDEX_EXPLICIT_WHITELIST="name,size,mtime"

for chunk in x*
do
   echo "adding $chunk..."
   feaindexadd -v -1 < "$chunk" >>/tmp/ferris-index-progress.txt
done


Then you can find all your PDF files for example using the following:

feaindexquery -Z '(url=~pdf)'

The -Z tells libferris not to try to lstat() or resolve URLs to see if they exist currently. Much faster results but at the cost of not weeding out things which might have moved since they were last indexed.

And all the files which have been written this year

feaindexquery -Z '(mtime>=begin this year)'

It's unfortunate about needing the quotes, as bash wants to do things with naked parentheses.
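A relative term like "begin this year" has to be resolved to a concrete cutoff before it can be compared against mtime values. A minimal sketch of that resolution, with my own function name and phrase handling rather than libferris's actual query parser:

```python
# Resolve a relative time phrase like "begin this year" to a concrete
# datetime, which is what an mtime>= query ultimately compares against.
# The function and the phrases it accepts are illustrative only.
import datetime

def begin_of(phrase, now):
    if phrase == "this year":
        return datetime.datetime(now.year, 1, 1)
    if phrase == "this month":
        return datetime.datetime(now.year, now.month, 1)
    raise ValueError("unknown phrase: " + phrase)

now = datetime.datetime(2013, 5, 2, 14, 30)
cutoff = begin_of("this year", now)
print(cutoff.isoformat())  # → 2013-01-01T00:00:00
```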

Save Ferris! Or just donate to an open source project or organization of your choice if you like the ferris posts.

Tuesday, April 30, 2013

libferris available for debian arm hard float

To complement my existing packages of the libferris virtual filesystem and index/search suite for soft float debian, I now offer, hot off the press, debian hard float! The distinction between how floating point is handled probably doesn't make a big difference to the operation of libferris, but if you are installing debian on an odroid-u2 then you are likely running hf, and as such having debs which are hf makes installing libferris a whole bunch simpler.

With the eMMC card on the u2, it is a really enjoyable little server machine to play around with. So far I've done the rudimentary test that XML is mountable as a filesystem and created one or two indexes with the Qt/SQLite index plugin for libferris. Note that in recent releases the sqlite backend is transaction backed, which gives a huge performance increase; on really IO constrained machines this is even more noticeable. This is a little tip for those using QtSQL: transactions are not just for making operations atomic, you may find that the whole show runs faster when it is transaction protected.
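The transaction tip can be illustrated with Python's sqlite3 instead of QtSQL; the underlying SQLite behaviour is the same. Wrapping a batch of inserts in one transaction means one journal commit instead of one per statement, which matters enormously on slow flash media:

```python
# Batch inserts inside a single transaction. With autocommit, each INSERT
# would be its own transaction (one sync to disk each); here the whole
# batch commits once.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (url TEXT, size INTEGER)")

rows = [("file:///tmp/%d" % i, i) for i in range(1000)]

# The connection context manager wraps the block in one transaction and
# commits on success (rolls back on an exception).
with conn:
    conn.executemany("INSERT INTO files VALUES (?, ?)", rows)

count = conn.execute("SELECT count(*) FROM files").fetchone()[0]
print(count)  # → 1000
```

On an in-memory database the difference is invisible; on eMMC or SD media the single-commit version can be orders of magnitude faster.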

If you haven't played with libferris: things are auto mounted where possible, and there are many coreutils-like tools to make interacting with ferris simple. ferris-redirect is like bash redirection but can write to any filesystem that libferris knows how to mount.

ben@odroidu2:/tmp/test$ cat example.xml
<top>
<node name="foo">bar
</node>
</top>
ben@odroidu2:/tmp/test$ fls example.xml
top
ben@odroidu2:/tmp/test$ fls example.xml/top
foo
ben@odroidu2:/tmp/test$ fcat example.xml/top/foo
bar
ben@odroidu2:/tmp/test$ date | ferris-redirect -T example.xml/top/foo
ben@odroidu2:/tmp/test$ cat example.xml
<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<top>
  <node name="foo">Tue Apr 30 14:57:26 PDT 2013
</node>
</top>

The above interaction would also work for mounted Berkeley DB and other filesystems of course.


I have noticed one of the binary scoped destructors doesn't like the hard float build for whatever reason. This can cause some of the command line tools to not exit gracefully, which is a shame. I can't get a good backtrace for the situation either, which makes tracking it down a nice day long adventure into trial and error.

So something productive has been generated during the last round of jet lag after all!

The goods http://fuuko.libferris.com/debian/debian-armhf/


Save Ferris!

Thursday, March 28, 2013

FontForge: Rolling Type Design with No Save Using Collab

FontForge has some support for collaborative type design. It's early days, but things are moving along in the right direction. In order to test things I've been looking at the python scripting support with an eye to moving control points around etc, or doing design and modification from a script and having the collab server keep up with things. This way I can "save" the type being designed at certain points and compare the saved font with what I would expect the result of multiple collaborators (the previous python scripts) to be. Automated testing for the win!

So, No Save is only for the "doing" scripts. I of course want to save the current type at my leisure :)

You might be wondering what a script that creates a glyph might look like. I had to do a bit of trial and error to figure out how to use the scripting API myself. With that in mind, I might roll some of my scripts into the mainline FontForge git repo so others can enjoy the little snippets to base their own scripts on.

Anyway, the following script will load a font and create a new capital "C" glyph. The core of this that wasn't that intuitive to me is that you have to assign the modified layer back into g.layers[g.activeLayer] (the same place you got it from earlier) or the new contour will not show up in the /tmp/out.sfd file. See the MUST comment for the needed line.

import fontforge

f=fontforge.open("test.sfd")       
fontforge.logWarning( "font name: " + f.fullname )

g = f.createChar(-1,'C')
l = g.layers[g.activeLayer]
c = fontforge.contour()
c.moveTo(100,100)   
c.lineTo(100,700)   
c.lineTo(800,700)   
c.lineTo(800,600)   
c.lineTo(200,600)   
c.lineTo(200,200)   
c.lineTo(800,200)   
c.lineTo(800,100)   
c.lineTo(100,100)   
l += c
g.layers[g.activeLayer] = l  #### MUST do this for changes to show up

f.save("/tmp/new.sfd")

At first blush I was expecting to do something like

c = layer.createContour()
c.moveTo()
...
c.lineTo()

and then not have to do anything special: at that stage calling f.save() should know about the new contour etc and save. But without setting g.layers[g.activeLayer] to the layer that contains the contour you will not see it.

Digging into the C code in python.c I see that point, contour etc are all basically abstractions for python use only. When you assign to a layer in the "g.layers[] = l" call, the C function PyFF_LayerArrayIndexAssign() calls PyFF_Glyph_set_a_layer() which uses something like SSFromLayer() to convert the python only data structure (contour or what have you) into a native "c" SplineSet object.

The good news is that with all this mining into python.c I now have some collab sprinkles in there. So when you do "g.layers[] = l", FontForge in script mode will send updates to the layer off to the server as a collab update message.

The test is quite easy to run, as three consecutive scripts: start the collab server process (the collab server remains running, the script ends); next attach to the collab server and update the C glyph; and finally attach to the collab server, grab all its data, and save an out.sfd file.

fontforge -script collab-sessionstart.py
fontforge -script collab-sessionjoin-and-change-c.py
fontforge -script collab-sessionjoin-and-save-to-out.sfd.py

The middle script connects to the collab server and makes its changes with the python API and then exits. No Save. To know if the changes made it to the collab server, the last script grabs all the updates etc and builds the "current" font to save into /tmp/out.sfd.

It took a bit of hacking in the python code, but now the little changes to the contour (path) of the C glyph are sent to the server as one would expect.

The python scripts are still out of repo. Since they are interesting in and of themselves I'll likely put them into my fontforge fork as a prestage to having them mainline.

Now to move on to the next thing that needs to be sent to the collab server and updated in all clients.

Monday, March 4, 2013

FontForge Design: Two is a party!

FontForge can now allow multiple people to collaborate on designing a font in real time. This is all still rather alpha level code, and in fact the pull request including this code was only pushed over the fence in the last hour ;) If real time collaboration sounds interesting to you, or something in this post seems of interest, then you might like to attend an upcoming Interactivos Workshop which is right after the Libre Graphics Meeting in Madrid (10-27 April).

At interactivos I will be expanding on the current collab code and discussing future directions and possible cross tool network specifications to allow different font tools to talk to each other. Even if you are not convinced that you want anyone else joining your collab session, you might like something like a Web sink watching your work and updating a page on your tablet as you modify the font, so you can see in near real time whether changes are going to work well on a target device and design or not. Why burden your workflow by having your local fontforge export an OTF and slow you down, when a server can do all that for you in the background? There is no reason that the processes in the collab session can't all be running on your behalf.


If you are running osx then I have a binary drop for you which bundles all the needed stuff for collab at:
http://fuuko.libferris.com/osx/packages/201303/05_1630/


To start a server, select the menu item Collaborate/Start Session... A dialog will show you the IP address the server will run on, so you can start fontforge on another computer (+other OS) and connect to that address using Collaborate/Connect to Session...

Things are in very early days and focus has been only on the glyph view. It seems that there is something keeping the undo system from working in the metrics view (and thus on kerning etc).

The design uses FontForge's undo/redo system to know what the changes are in the font, and since those changes can be serialized to an SFD format that code is reused to send and receive the undo information across a zeromq broadcast server.

The design is in line with the MCT (Merge Enabled Change Tracking) for ODF that is up on the OASIS lists: in FontForge's case, a baseline full SFD snapshot and sequential fragments thereafter describing updates.
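That baseline-plus-sequential-fragments scheme can be modeled in a few lines. This is a toy illustration with my own names and a dict standing in for font state, not FontForge's actual message format: a client joining a session fetches the snapshot plus every update fragment in order, and replays them to reach the current state.

```python
# Toy model of baseline-snapshot-plus-updates replication: the server keeps
# a full baseline plus an ordered list of update fragments, and any client
# can reconstruct the current state by replaying them in sequence.
# A dict of glyph name -> outline stands in for real SFD data.

def replay(baseline, fragments):
    """Apply sequential update fragments to a copy of the baseline."""
    font = dict(baseline)        # never mutate the stored baseline
    for frag in fragments:       # order matters: fragments are sequential
        font.update(frag)
    return font

baseline = {"C": "old outline", "A": "outline-a"}
fragments = [{"C": "edit 1"}, {"C": "edit 2"}, {"B": "new glyph"}]

print(replay(baseline, fragments))
```

Late joiners just replay more fragments; no client needs to have seen the session from the start, which is what makes the third test script above able to reconstruct and save the current font.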

If you are compiling from sources then you'll need the czmq and zeromq version 3 development packages installed. Then grab master from github and enjoy.
https://github.com/fontforge/fontforge

Monday, February 18, 2013

Redland && Fedora 18

It appears that redland might be busted in some respects in Fedora 18. After a fresh install I recompiled libferris as per usual after an upgrade, only to find that the RDF/Soprano engine for metadata was busted. Digging into this, looking at the soprano sources and trying many versions of soprano and sopranocmd, I thought maybe the issue was one layer closer to the metal. Switching to rdfproc from redland, I noticed that I couldn't create a new on-disk RDF/db store!

$ rdfproc myrdf add \
  'http://witme.sf.net/libferris-core/0.1/subj' \
  'http://witme.sf.net/libferris-core/0.1/pred' \
  'http://witme.sf.net/libferris-core/0.1/obj1'

rdfproc: Failed to open hashes storage 'myrdf'

After downloading redland's source rpm, recompiling it, and installing the resulting rpm files, I could then create RDF/db stores again. Same command, different result:

$ rdfproc myrdf add \
  'http://witme.sf.net/libferris-core/0.1/subj' \
  'http://witme.sf.net/libferris-core/0.1/pred' \
  'http://witme.sf.net/libferris-core/0.1/obj1'
rdfproc: Added triple to the graph

I haven't verified this on a second Fedora 18 machine yet, but it might well mean that anyone trying to use the Berkeley db backends of redland on F18 would need an update to redland or to do some tinkering.