Thursday, December 17, 2015

17 Segments are the new Red.

While 7-segment displays are quite common, their slightly more complex cousin, the 17-segment display, lets you show the full A-Z range of English and some additional symbols thanks to the extra segments.

The unfortunate part of the above kit, which I got from Akizukidenshi, is that the panel behind the 17-segger effectively treats the display as a 7-segger. So you get some large 7-segment digits but can never display an "A", for example. Although this suits the clock that the kit builds just fine, there is no way I could abide wasting such a nice display by not being able to show text. With the ESP8266 and other WiFi/Ethernet solutions around at a low price point, it is handy to be able to display the wind speed, the target "feels like" temperature, and so on, as well as just the time.

With that in mind I have 3 of these 17-seggers breadboarded, with a two-transistor high side and a custom low side using an MCP23017 pin muxer driving two ULN2803 current sinks, which are attached through 330 ohm resistors in 8-way DIP resistor arrays. This low side is very useful because, with a little care, it can be set up on a compact piece of stripboard. All the controlling MCU needs is I2C and it can switch all the cathodes just fine.
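
To make the low side concrete, here is a minimal sketch of the register writes involved, assuming the MCP23017 sits at address 0x20 on I2C bus 1 and that the segments are mapped in order across its two ports; the real wiring, segment map, and controlling MCU differ.

# Minimal sketch of driving the low-side (cathode) board: an MCP23017 on I2C
# with each 8-bit port feeding a ULN2803 through the 330 ohm resistor arrays.
# Assumptions: bus 1, device address 0x20, segments mapped in order across
# GPIOA then GPIOB -- the real wiring will differ.
from smbus2 import SMBus

MCP23017_ADDR = 0x20
IODIRA, IODIRB = 0x00, 0x01   # direction registers (0 = output)
GPIOA,  GPIOB  = 0x12, 0x13   # output latches

# 16-bit segment patterns, one bit per segment; only 16 segments fit on the
# expander, so a 17th segment would need one spare pin elsewhere in this layout.
FONT = {
    'A': 0b0000000011110111,   # made-up pattern for illustration only
}

def show(bus, ch):
    bits = FONT.get(ch, 0)
    bus.write_byte_data(MCP23017_ADDR, GPIOA, bits & 0xFF)
    bus.write_byte_data(MCP23017_ADDR, GPIOB, (bits >> 8) & 0xFF)

with SMBus(1) as bus:
    bus.write_byte_data(MCP23017_ADDR, IODIRA, 0x00)  # all pins as outputs
    bus.write_byte_data(MCP23017_ADDR, IODIRB, 0x00)
    show(bus, 'A')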

While experimenting I found some nice high-side driver ICs. I now have custom PCBs on their way, each of which can drive 2 displays and can chain left and right to allow an arbitrary number of displays to be connected into a single line display. More info and photos to follow; I just noticed that I hadn't blogged in a while so thought I'd drop this terse post up knowing that more pictures and video are to come. I also have the digits changing through a fade effect. It is much less distracting to go from time to temp and back if you don't jump the digits in one go.

Monday, October 26, 2015

ESP8266 and a few pins

The new Arduino 1.6.x IDE makes it fairly simple to use the ESP8266 modules. I have been meaning to play around with some open-window detectors for a while now. I notice two dedicated GPIO pins on the ESP8266, which is one more than I really need. So I threw in an LED which turns on when the window is open. Nothing like local, direct feedback that the device has detected the state of affairs. The reed switch is attached to an interrupt, so as soon as the magnet gets too far away the light shines.


I will probably fold and make the interrupt set a flag so that the main loop can perform an HTTP GET to tell the server as soon as the state changes.
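
The pattern would look something like the following. The project itself uses the Arduino IDE; this is just a sketch of the same interrupt-sets-a-flag idea in MicroPython, with the pin numbers and the server URL being made-up placeholders.

# MicroPython sketch of the same idea (the post itself uses the Arduino IDE).
# Assumptions: reed switch on GPIO0 with pull-up, LED on GPIO2, and a
# hypothetical http://server.local/window endpoint.
import machine, urequests

led = machine.Pin(2, machine.Pin.OUT)
reed = machine.Pin(0, machine.Pin.IN, machine.Pin.PULL_UP)
changed = False

def on_reed(pin):
    # Keep the ISR tiny: update the LED and set a flag for the main loop.
    global changed
    led.value(pin.value())          # magnet away -> switch open -> LED on
    changed = True

reed.irq(trigger=machine.Pin.IRQ_RISING | machine.Pin.IRQ_FALLING, handler=on_reed)

while True:
    if changed:
        changed = False
        try:
            urequests.get("http://server.local/window?open=%d" % reed.value()).close()
        except OSError:
            pass                    # wifi hiccup; try again on the next change
    machine.idle()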

The main annoying thing I've still got is that during boot the state of both GPIO pins seems to matter. So if the reed switch is closed when you first supply power then the ESP goes into some stalled state.

It will be interesting to see how easy OTA firmware updates are for the device.

Thursday, October 15, 2015

Terry & the start of a video project.

I did a test video showing various parts of Terry the Robot while it was all switched off, talking about each bit as I moved around. Below are some videos of the robot with batteries a-humming and a little movement. First up is a fairly dark room and a display of what things look like just using the lighting from the robot itself: all the blinking Arduino LEDs, the panel, and the various EL and other lights.



The next video has a room light on and demonstrates some of the control of the robot and screen feedback.



I got some USB speakers too, but they turned out to be a tad too large to mount onto Terry. So I'll get some smaller ones and then Terry can talk to me letting me know what is on its, err, "mind". I guess as autonomy is ramped up it will be useful to know if Terry is planning to navigate around or has noticed that it has been marooned by a chair that a pesky human has moved.

The talk-over video is below. I missed talking about the TP-Link WiFi APs and why there are two, and might be only one in the future. The short answer is that Terry might become a two-part robot; with a base station, only one WiFi AP is needed on the robot itself.


Saturday, September 19, 2015

Terry Motor Upgrade -- no stopping it!

I have now updated the code and PID control for the new RoboClaw and HD Planetary motor configuration. As part of the upgrade I had to move to using a LiPo battery because these motors stall at 20 amps. While it is a bad idea to leave them stalled, it's a worse idea to have the battery have issues due to drawing too much current. It's always best to choose where the system will fail rather than letting the cards fall where they may. In this case, leaving it stalled will result in drive-train damage in the motors, not a controller board failure or a LiPo issue.


One of the more telling images is below, which compares not only the size of the motors but also the size of the wires supplying power to them. I used 14AWG silicone-coated wire for the new motors so that a 20A draw will not cause any issues in the wiring. Printing out new holders for the high precision quadrature encoders took a while. Each print was about an hour long, and there was always a millimetre or two that could be changed in the design, which then spurred another print job.


Below is the old controller board (the 5A RoboClaw) with the new controller sitting on the bench in front of Terry (45A controller). I know I only really needed the 30A controller for this job, but when I decided to grab the items the 30A was sold out so I bumped up to the next model.


The RoboClaw is isolated from the channel by being attached via nylon bolts to a 3D printed crossover panel.

One of the downsides to the 45A model, which I imagine will fix itself in time, was that the manual didn't seem to be available. The commands are largely the same as for the other models in the series, but I had to work out the connections for the quad encoders and have currently powered them off the BEC, because the screw terminal version of the RoboClaw doesn't have +/- terminals for the quads.

One little surprise was that these motors are quite magnetic without power. Nuts and the like want to move in and the motors will attract each other too. Granted it's not like they will attract themselves from any great distance, but it's interesting compared to the lower torque motors I've been using in the past.


I also had a go at wiring 4mm connectors to 10AWG cable. I almost got it right after a few attempts, but the lugs are not 100% fixed into their HXT plastic chassis because of some solder or flux debris I accidentally left on the job. I guess some time soon I'll be wiring my 100A monster automotive switch inline in the 10AWG cable for solid battery isolation when Terry is idle. ServoCity has some nice bundles of 14AWG wire (the yellow and blue ones I used for the motors) and I got a bunch of other wire from HobbyKing.

Wednesday, September 16, 2015

10 Foot Pound Boots for Terry

A sad day when your robot outgrows its baby motors. On carpet this happened when the robot started to tip the scales at over 10kg. So now I have some lovely new motors that can generate almost 10 foot-pounds of torque.



This has caused me to move to a more rigid motor attachment and a subsequent modification and reprint of the rotary encoder holders (not shown above). The previous motors were spur motors, so I could rotate the motor itself within its mounting bracket to mate the large gear to the encoders. Not so anymore. Apart from looking super cool, the larger alloy gear gives me an 8 to 1 reduction to the encoders; nothing like the feeling of picking up 3 bits of extra precision.

This has also meant using some more sizable cables. The yellow and purple cables are 14 AWG silicone wires. For the uplink I have an almost store-bought 12AWG and some hand-made 10 AWG monsters. Each motor stalls at 20A, so there is the potential for a noticeable amount of current to flow around the base of Terry now.

Monday, August 31, 2015

Inspecting ODF round trips for attribute retention

Given an office application, one might like to know which attributes are preserved properly across a load and save cycle. For example, is the background color or margin size mutated just by loading and saving an ODF file with OfficeAppFoo version 0.1?

The odfautotests project includes many tests on simple ODF documents to see how well each office application preserves the information in the document. Testing ODF attribute preservation might not be as simple as one might first imagine, though. Consider the below document with a single paragraph using a custom style:

<office:text>
  <text:p text:style-name="style">hello world</text:p>
</office:text>

In the styles.xml file one might see something like the following:

<style:style 
  style:display-name="TestStyle" 
  style:family="paragraph" 
  style:name="style" 
  style:parent-style-name="standard">
     <style:text-properties fo:background-color="transparent" />

</style:style>

This input is obviously designed to see how well the fo:background-color style information is preserved by office applications. One thing to notice is that the style:family attribute in the above is paragraph.

If one loads and saves a document with the above fragments in it using LibreOffice 4.3.x then they might see something like the following in the output ODF file. In content.xml:

<text:p text:style-name="TestStyle">hello world</text:p>

And in the styles.xml file the background-color attribute is preserved:

<style:style style:name="TestStyle"
     style:family="paragraph"
     style:parent-style-name="standard">
      <style:text-properties fo:background-color="transparent"/>
</style:style>

One can test if the attribute has been preserved using XPath, selecting on the @style-name of the text:p and then making sure that the matching style:style has the desired fo:background-color attribute below it.

The XPath might look something like the below, which has been formatted for display:

//s:style[
  @s:display-name='TestStyle' 
  or (not(@s:display-name) and @s:name='TestStyle')]
/s:text-properties/@fo:background-color
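
As a rough illustration of evaluating such a selector, the lxml snippet below binds the s and fo prefixes to the ODF namespaces and runs the query against a styles.xml; the file name and surrounding scaffolding are illustrative only, not the odfautotests code itself.

# Rough illustration: running the selector above against styles.xml with lxml.
# The 's' and 'fo' prefixes are bound to the ODF style and fo namespaces.
from lxml import etree

NS = {
    's':  'urn:oasis:names:tc:opendocument:xmlns:style:1.0',
    'fo': 'urn:oasis:names:tc:opendocument:xmlns:xsl-fo-compatible:1.0',
}

doc = etree.parse('styles.xml')
hits = doc.xpath(
    "//s:style[@s:display-name='TestStyle' "
    "or (not(@s:display-name) and @s:name='TestStyle')]"
    "/s:text-properties/@fo:background-color", namespaces=NS)
print(hits)   # e.g. ['transparent'] if the attribute survived the round trip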

Performing the load and save using Word 2016 is quite interesting. The resulting content.xml file might have:

<style:style style:name="P1"
   style:parent-style-name="TestStyle"
   style:master-page-name="MP0"
   style:family="paragraph">
     <style:paragraph-properties fo:break-before="page"/>
</style:style>
...
<office:text text:use-soft-page-breaks="true">
  <text:p text:style-name="P1">hello world</text:p>
</office:text>

and in styles.xml the background-color setting is pushed up to the paragraph style level.

<style:style style:name="TestStyle"
   style:display-name="TestStyle"
   style:family="paragraph">
      <style:text-properties fo:hyphenate="false"/>
</style:style>

<style:default-style style:family="paragraph">
...
<style:text-properties ... fo:background-color="transparent"

So to see if the output ODF has the fo:background-color setting one has to consider not just the directly used style "P1" but also parent style elements which might contain the attribute instead. In this case it was pushed right up to the paragraph style.

For the Word output the above XPath doesn't necessarily work. If the attribute we are looking for has been pushed up to the paragraph level then we should look for it there instead. Also, if we are looking at the paragraph level then we need to be sure that there is no attribute directly at the lower, TestStyle, level. It also helps to ensure in the selection that the paragraph style really is a parent of TestStyle (or of P1 in the above).

After a bit of pondering I found an interesting solution that can be evaluated using plain XPath 1.0. To test the value I pick off the fo:background-color from both the TestStyle and also the paragraph level. If those values are passed to concat() then, provided the attribute is only at the TestStyle or the paragraph level, we get something that can be used to test the value. If the attribute appears at both levels we are in trouble.

For example:

<style:style style:name="TestStyle">
  <style:text-properties ... fo:background-color="transparent"/>
</style:style>
<style:default-style style:family="paragraph">
  <style:text-properties ... fo:background-color="#FF0000"/>
</style:default-style>

Considering the semantic XPath query of concat( TestStyle/@fo:background-color, paragraph/@fo:background-color ) the result would be  transparent#FF0000 which would not match a string comparison with 'transparent'.

The trick is to use a predicate on the second item in the concat() call. If we only return the paragraph/@fo:background-color value when there is no value associated with the TestStyle, then the concat will effectively only return one or the other: either the attribute directly on TestStyle, or nothing from TestStyle and the attribute from the paragraph level.

With this the query can allow the office application to move the information to a parent of the style and still match for a test.
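A sketch of what the combined selector might look like is below; the exact element names and the parent-style check used by the real tests may differ, but the predicate on the second concat() argument is the key part, only yielding the paragraph-level value when TestStyle carries no value of its own:

concat(
  //s:style[@s:name='TestStyle']/s:text-properties/@fo:background-color,
  //s:default-style[@s:family='paragraph']/s:text-properties/@fo:background-color[
    not(//s:style[@s:name='TestStyle']/s:text-properties/@fo:background-color)]
)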

Friday, July 17, 2015

OSX Bundling Soprano and other joys

Libferris has been moving to use more Qt/KDE technologies over the years. Ferris is also a fairly substantial software project in its own right, with many plugins and support for multiple libraries. Years back I moved from using raw Redland to using Soprano for RDF handling in libferris.

Over recent months, from time to time, I've been working on an OSX bundle for libferris. The idea is to make installation as simple as copying Ferris.app to /Applications. I've done some OSX packaging before, so I've been exposed to the whole library-paths-inside-dylibs business, and also to the freedesktop specs expecting things in /etc or wherever when you really want them to look into /Applications/YourApp/Contents/Resources/.../etc/whatever.

The silver test for packaging is to rename the area that was used to build the source to something unexpected and see if you can still run the tools. The gold test is obviously to install from the app.dmg onto a fresh machine and see that it runs.

I discovered a few gotchas during silver testing and soprano usage. If you get things half right then you can get to a state that allows the application to run but that does not allow a redland RDF model to ever be created. If your application assumes that it can always create an in memory RDF store, a fairly secure bet really, then bad things will befall the app bundle on osx.

Plugins are found by searching for the desktop files first and then loading the shared library plugin as needed. The desktop files can be found with the first line below, while the second line allows the plugin shared libraries to be found and loaded.

export SOPRANO_DIRS=/Applications/Ferris.app/Contents/Resources/usr/share
export LD_LIBRARY_PATH=/Applications/Ferris.app/Contents/Resources/usr/local/lib/soprano/

You have to jump through a few more hoops. You'll find that the plugin ./lib/soprano/libsoprano_redlandbackend.so links to lib/librdf.0.dylib, and librdf will link to other redland libraries which themselves link to things like libxml2, which you might not have bundled yet.

There are also many cases of things linking to QtCore and other Qt libraries. These links are normally to nested paths like Library/Frameworks/QtCore.framework/Versions/4/QtCore, which will not pass the silver test. Actually, links inside dylibs like that tend to cause the show to segv, and you are left to work out where and why that happened. My roll-by-hand solution is to create softlinks to these libraries, like QtCore, in the .../lib directory and then resolve the dylib links to those softlinks.
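
For reference, the kind of inspection and rewriting involved looks roughly like the below; the paths are illustrative rather than the exact ones in the bundle.

otool -L Ferris.app/Contents/Resources/usr/local/lib/soprano/libsoprano_redlandbackend.so

install_name_tool -change \
    Library/Frameworks/QtCore.framework/Versions/4/QtCore \
    @executable_path/../Resources/usr/local/lib/QtCore \
    libsoprano_redlandbackend.so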

In the end I'd also like to make an app bundle for specific KDE apps. Just being able to install okular by drag and drop would be very handy. It is my preferred reader for PDF files and having a binary that doesn't depend on a build environment (homebrew or macports) makes it simpler to ensure I can always have okular even when using an osx machine.


Thursday, July 16, 2015

Terry && EL

After getting headlights, Terry now has a lighted arm. This is using the 3 meter EL wire and a 2xAA battery inverter to drive it. The roughly $20 entry point to bling is fairly hard to resist. The EL tape looks better IMHO, but seems to be a little harder to work with from what I've read about cutting the tape and resoldering / reconnecting.

I have a 1 meter red EL tape which I think I'll try to wrap around the pan/tilt assembly. From an initial test it can make it around the actobotics channel length I'm using around twice. I'll probably print some mounts for it so that the tape doesn't have to try to make right angle turns at the ends of the channel.

Wednesday, July 15, 2015

Terry - Lights, EL and solid Panner

Terry the robot now has headlights! While the Kinect should be happy in low light, I found some nice 3 watt LEDs on sale and so headlights had to happen. The lights want a constant current source of 700mA, so I grabbed an all-in-one chip solution to do that and mounted the lights in series. Yes, there are a load of tutorials on building a constant current driver for a few bucks around the net, but sometimes I don't really want to dive in and build every part. I think it will be interesting at some stage to test some of the constant current setups and see the ripple and various metrics of the different designs. That part of the analysis is harder to find around the place.


And just how does this all look when the juice is flowing, I hear you ask. I have tilted the lights ever so slightly downwards to save the eyes from the full blast. Needless to say, you will be able to see Terry coming now, and it will surely see you in full colour 1080 glory as you come into its sights. I thought about mounting the lights on the pan and tilt head unit, but I really don't want these to ever get to angles that are looking right into a person's eyes, as they are rather bright.


On another note, I now have some EL wire and EL tape for Terry itself. So the robot will be glowing in a subtle way itself. The EL tape is much cooler looking than the wire IMHO, but the tape is harder to cut (read: I probably won't be doing that). I think the 1m of tape will end up wrapped around the platform on the pan and tilt board.

Behind the LEDs is quite a heatsink, so they shouldn't pop for quite some time. In the top right you can just see the heatshrink-covered direct connected wires on the LED driver chip and the white wire mounts above it. I have also trimmed down the quad encoder wires and generally cleaned up that area of the robot.


A little while ago I moved the pan mechanism off axle. The new axle is hollow and set up to accommodate a slip ring at the base. I now have said slip ring and am printing a crossover plate for it to mount to the channel. Probably by the next post Terry will be able to continuously rotate the panner without tangling anything up. The torque multiplier of the brass to alloy wheels, together with the 6 rpm gearmotor having very high torque, means that the panner will tend to stay where it is. Without powering the motor the panner is nearly impossible to move; the grub screws will fail before the motor gives way.


Although the EL tape is tempting, the wise move is to fit the slip ring first.

Tuesday, June 16, 2015

Abide the Slide

The holonomic drive robot takes its first rolls! This is what you get when you contort a 3D printer into a cross format and attach funky wheels. Quite literally, as the control board is an Arduino Mega board with an ATmega2560 MCU and a RAMPS 1.4 stepper controller board plugged into it. The show is controlled over an rf24 link from a hand-made controller. Yes folks, a regression to teleoperation for now. I'll have to throw the thing onto scales later; the steppers themselves add considerable weight to the project, but there doesn't seem to be much problem moving the thing around under its own power.



The battery is a little under-specced; it will surely supply enough current, and doesn't get hot after operation, but the overall battery capacity is low so the show is over fairly quickly. That is a problem easily solved by throwing more dollars at the battery. The next phase is to get better mechanical stability by tweaking things and changing the software to account for the fact that one wheel axis is longer than the other. From there some sensor feedback (IMU) and a fly-by-wire mode will be on the cards.



This might end up going into ROS land too, encapsulating the whole current setup into being a "robot base controller" and using other hardware above to run sensors, navigation, and decision logic.

Monday, April 27, 2015

Unbrick the NUC

It seems there are many folks with the suspend of death on the NUC. When you suspend to RAM you can't get back. When you disconnect power for a while you can't turn it on again. Welcome to brickland, population: you. I found that, following the advice on the forums, if I disconnect the CMOS battery for a bit then I can turn on the NUC again.

The downside is that the CMOS battery is installed under the motherboard, so you have to remove the motherboard, which is no easy task the first time. Then each subsequent time the NUC bricks you have to take it apart to the same great extent.

Luckily I found these extension leads which let me bring out the battery from the case. So hopefully now a debrick isn't going to involve a system teardown anymore.

Friday, April 10, 2015

Tiny Tim improves and gets Smaller

I finally switched Tiny Tim over to a LiPo battery. Almost everything worked when I tested the new battery; the only things that failed in a major way were the two 2812 LEDs, which either didn't come on or came on for a very brief moment and went dark. So Tim is now smaller again without the "huge" AA battery pack at its tail.


The 2812 story was interesting. They weren't going to be happy jumping to the 7.6v of the 2S LiPo. So I tried various voltage divider setups, which didn't work either. I ended up using a common 5v regulator and the lights work fine again. I think I was maybe using too-high resistor values in the divider and the 2812s didn't like it. At any rate, they apparently want a good regulated power source, and I wasn't giving them one before I switched over to using the regulator.

On the whole, going from the 5-6v of the AA pack to 7.6v has made it a snappier mover. I tried it initially with the battery on the bench and found it would lift the back off the desk under hard braking.

Next up is probably attaching a claw or drop mechanism and ultrasound sensor and then take on the Sparkfun autonomous ping-pong ball into cup challenge. I'll probably control it via wireless from a second on board micro-controller. The drop, ultrasound, and autonomous navigation micro (and additional battery) can all be put into a single "module" that I can then bolt to Tim. All the navigation micro needs to do is control the differential drive like a remote control would. This way, the existing micro etc on Tim doesn't change at all in order for the challenge to be accepted.


Monday, March 16, 2015

Google Breakpad and the post crash experience

Google Breakpad has many components to it, but at the basic level it lets you capture information at the time a crash occurs and upload that to the net. A really cute part of Breakpad is that the binary doesn't need to have the debug symbols in it, you don't even need to have them on the client machine at any location. When you build version $githash then you use a breakpad tool to copy out the debug symbols into separate files. When the user discovers a crash they upload a minidump file to a server of your selecting. Then you can combine the extracted symbols from build time and the minidump file to generate a backtrace with line number information. So software users don't have to know about gdb or lldb or whatnot and how to make a backtrace and where to paste it.



I recently updated FontForge's use of Breakpad to use a small server on localhost to report the bug. The application dmg file for FontForge will soon also include the extracted symbols for the build. By telling Breakpad to use a local server, that server can look up the symbols that are shipped and generate a human readable backtrace with line number information. Because it's also a web interface and running locally, it can spawn a browser on itself. So instead of getting the Mac dialog supplied by the OSX crash reporter app telling you that there was a crash, you get a web page telling you the same thing. But the web page can use jQuery/Bootstrap (or $ui tool of choice), ask what the user was doing, and offer many ways to proceed from there depending on how the user wants to report things. The https://gist.github.com/ site can be used to report without any login or user accounts. It's also rather handy as a place to check in larger backtraces that might be, maybe, 50-100kb.

But once you can upload to gist, you can get an HTTP (and other) URL link to the new gist. So it makes sense from there to offer to make a new GitHub issue for the user too, and in that new issue include the link to the gist page so that developers can get at the full backtrace. It turns out that you can do this last part, which requires user login to GitHub, by redirecting to github/.../issues/new and passing title and body GET parameters. While there is a GitHub API, to report a new issue using it you would need to do OAuth first. But in the libre world it's not so simple to have a location to store the OAuth secure token for next time around. So the GET redirect trick nicely gets around that situation.
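
The redirect itself is just URL building. A small sketch of the idea, with the issue title, repository path, and gist URL as examples only:

# A sketch of the redirect trick: prefill a new GitHub issue with a link to
# the gist holding the full backtrace (title, repo path and gist URL are examples).
from urllib.parse import urlencode

gist_url = "https://gist.github.com/anonymous/abc123"   # returned by the gist upload
params = urlencode({
    "title": "Crash report: fontforge segfault",
    "body":  "Backtrace and notes: " + gist_url,
})
issue_url = "https://github.com/fontforge/fontforge/issues/new?" + params
# The local report page then just redirects the browser to issue_url;
# GitHub asks the user to log in and shows the prefilled issue form.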


For those interested in this, the gist upload and the callback to subsequently make a GitHub issue are both available. Google Breakpad hands over the minidump to a POST method, which then massages the minidump into the backtrace and spawns a browser on itself. The GET serves up all the HTML, CSS, JS, and other assets to the browser, and that served HTML/JS is what I linked to at the start of the paragraph; it is where the actual upload/reporting of the backtrace takes place.

The only thing left to do is to respond to the backtraces that come in and everybody gets a more stable FontForge out of the deal. It might be interesting to send off reports to a Socorro server too so that statistics month on month can be easily available.

Thursday, January 29, 2015

libferris on osx

So libferris is now compiled and installed, thanks to some of my handy work on Portfiles and MacPorts doing the heavy lifting. I've put the Portfile into the distribution for many of the repositories: ferrisloki, ferrisstreams, stldb4, ferris, fampp2. And I've moved the source control over to GitHub -- https://github.com/monkeyiq/ferris

It's still a bit of a bumpy compile for ferris itself. Using clang instead of gcc, using the different stdc++ lib, the lack of some API calls on OSX relative to Linux, and the assumptions I'd made that IPC, advising the kernel on IO patterns, memory mapping and again advising on page patterns, would all be available APIs and constants. I have a patch from the compile which I need to feed back into the main libferris repo, making sure it still works fine on Linux too.

So now I can dig into XML files from the command line on OSX too. I have to test out the more advanced stuff and the web services. The latter use some of the 'Q' magic dust, qjson, qoauth, qtnetwork et al, so they should be fairly robust after the port.

I should also update the primary file:// handler in libferris to use some of the OSX APIs for file monitoring etc to be a friendlier citizen on that platform. But going from no ferris on OSX to some ferris is a great first move. A bundle would be the ultimate goal, /Applications/Ferris installed in a single drag and drop.

Friday, January 9, 2015

Terry 2.0: The ROS armada begins!

It all started with wanting to use a Kinect or other RGBD (depth sensing) camera to do navigation... Things ended up, slowly but surely, moving from a BeagleBone Black and a custom nodejs script that I created as the heart, to a quad core Atom running ROS and many ROS nodes that I created ;)


The main gain of ROS is the nodes that other people have written. If you want to convert RGBD to a simulated laser scan in order to do 2D navigation then that's already available. If you want to make a map and then use it, that code is already there for you. And so is the visualization for these things. I'm not sure I'd have the time to write from scratch a 3D robot viewer and visualize my cut down 'fake' 2D laser scan data from the Kinect in OpenGL. But with ROS I got the joy of seeing the scan change in real time as Terry sensed me move in front of it.

I now have 3d control of the robot arm happening, including optional sinusoidal encoding of movements. The fun part is that the use of sinusoidal can be enabled or disabled without any code changes. I wrote that part as a JointTrajectory shim. For something to use smoother movement all it has to do is publish to that shim instead of directly to the servo controller itself. The publish and subscribe parts of the IPC that ROS has are very easy to get used to and allow breaking up the functionality into rather small pieces if desired.
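
A rough rospy sketch of that shim idea is below; the topic names and the fixed resample count are assumptions for illustration, not Terry's actual configuration.

#!/usr/bin/env python
# Rough sketch of the JointTrajectory shim: nodes that want eased motion
# publish here instead of straight to the servo controller. Topic names and
# the 20-step resample are placeholders for illustration.
import math
import rospy
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

STEPS = 20

def ease(a, b, t):
    # Raised-cosine (sinusoidal) blend from a to b for t in [0, 1].
    s = (1 - math.cos(math.pi * t)) / 2.0
    return a + (b - a) * s

def on_traj(msg):
    out = JointTrajectory(joint_names=msg.joint_names)
    pts = msg.points
    if pts:
        out.points.append(pts[0])
    for p0, p1 in zip(pts, pts[1:]):
        dt = (p1.time_from_start - p0.time_from_start).to_sec()
        for i in range(1, STEPS + 1):
            t = i / float(STEPS)
            q = [ease(a, b, t) for a, b in zip(p0.positions, p1.positions)]
            out.points.append(JointTrajectoryPoint(
                positions=q,
                time_from_start=p0.time_from_start + rospy.Duration(dt * t)))
    pub.publish(out)

rospy.init_node('traj_sine_shim')
pub = rospy.Publisher('/arm/trajectory', JointTrajectory, queue_size=1)
rospy.Subscriber('/arm/trajectory_smooth', JointTrajectory, on_traj)
rospy.spin()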


The arm is one area that is ROS controlled, but not quite the way I want. It seems that MoveIt is the indicated choice for arm control, but I didn't manage to get that to work as yet. The wizard only produced an arm that would articulate on one joint, so more tinkering is needed in that area. Instead I wrote my own ROS node to control the arm. It's all fairly basic trig to get the gripper at an x,y,z relative to the base of the arm, and an easy carry-over to fix the gripper horizontal to the base no matter what position the arm is moved to. But in the future the MoveIt option will be considered; it can't hurt to have two codepaths to choose from for arm control.
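
The trig is roughly the following; the link lengths and the function shape are placeholders rather than the real node's code.

# Rough cut of the arm trig: put the gripper at (x, y, z) relative to the arm
# base and keep the wrist horizontal. Link lengths are placeholders.
import math

L1, L2 = 0.15, 0.15   # shoulder->elbow and elbow->wrist lengths in metres

def arm_angles(x, y, z):
    yaw = math.atan2(y, x)                 # base rotation toward the target
    r = math.hypot(x, y)                   # reach in the horizontal plane
    d2 = r * r + z * z
    # Law of cosines for the elbow, then the shoulder.
    cos_elbow = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    shoulder = math.atan2(z, r) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    # Whatever the arm does, cancel it at the wrist so the gripper stays level.
    wrist = -(shoulder + elbow)
    return yaw, shoulder, elbow, wrist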

As part of the refresh I updated the pan unit for the camera platform. Previously I used a solid 1/4 inch shaft with the load taken by a bearing and the gearmotor turning the shaft directly from below it. Unfortunately that setup has many drawbacks: no ability to use a slip ring, no torque multiplication, and difficulty using an axle-end rotary encoder IC to gain real world position feedback. The updated setup uses a 6 rpm gearmotor offset with a variable motor mount to drive a 24 tooth brass gear. That mates with an 80 tooth gear which is affixed to a hollow 1/2 inch alloy tube. As you can see at the top of the image, I've fed the tilt servo cable directly into the inside of that tube. No slip ring right now, but it is all set to allow the USB cable to slip through to the base and enable continuous rotation of the pan subsystem, so the Kinect can sweep radar style. One interesting aside is that you can no longer manually rotate the pan system because the gearmotor, even unpowered, will stop you. The grub screw will slip before the axle turns.


As shown below, the gearmotor is driven by an Arduino which is itself connected to a SparkFun breakout of the TB6612FNG H-bridge IC. This combo is attached using double sided 3M tape to a flat bit of channel. Then the flat bit of channel is bolted to Terry. I've used this style a few times now and quite like it. A single unit and all its wires can be attached and moved fairly quickly.



At first I thought the Arduino gearmotor control and the web interface would be a bit outside the bounds of ROS. But there is an API for Arduino which gives the nice publish and subscribe with messages that one would expect on the main ROS platform. A little bit of Python glue takes the ttyUSB right out of your view and you are left with a little extension from the main ROS right into the MCU. I feel that my 328 screen multiplexer will be updated to use this ROS message API. Reimplementing packet framing and synchronization at the serial port level becomes a little less exciting after a while, and not having to even think about that with ROS is certainly welcome.

Below is the motherboard setup for all this. Unfortunately many of the things I wanted to attach used TTL serial, so I needed a handful of USB to TTL bridges. The IMU uses I2C, so it's another matter of shoving a 328 into the mix to publish the ROS messages with the useful information for the rest of the ROS stack on the main machine to use at its will.


The web interface has been resurrected and extended from the old BBB driven Terry. This is the same Bootstrap/jQuery style interface, but now using roslibjs to communicate from the browser to Terry. I'm using WebSockets to talk back, which is what I was doing manually from the BBB, but with ROS that is an implementation choice that gets hidden away and you again get a nice API to do ROS-like things such as publishing and subscribing to standard and custom messages.


The below JavaScript code sends an array of 4 floats back to Terry to tell it where you want the arm (x,y,z,claw) to be located. The 4th number allows you to open and close the claw in the same command. The wrist is held horizontal to the ground for you. Notice that this message is declared to be a Float32MultiArray, which is a standard message type. The msg and topic can be reused, so an update is just a prod to an array and a publish call. You can fairly easily publish these messages from the command line too for brute force testing, as shown after the code.

var topic_arm_xyz = new ROSLIB.Topic({
   ros  : ros,
   name : '/arm/xyzc',
   messageType : 'std_msgs/Float32MultiArray'
});

var msg = new ROSLIB.Message({
  data : [ x,y,z, claw ]
});
topic_arm_xyz.publish( msg );
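
For example, a one-off test from the shell might look like this (the target coordinates are made up):

rostopic pub -1 /arm/xyzc std_msgs/Float32MultiArray "{data: [0.20, 0.0, 0.15, 1.0]}"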


The learning curve is a bit sharp for some parts of ROS. Navigation requires many subsystems to be brought up, and at first I had a case where the robot model was visualized 90 degrees out of phase with reality. Most of the stuff is already there, but you need to have a robot base controller that is compatible. It is also a trap for new players not to have a simple robot model urdf file. Without a model some parts of the system didn't work for me. I'd have liked to have won with the MoveIt control, and will get back to trying to do just that in the future. I think I'll dig around for shoestring examples; something like building a very basic three servo arm with ice cream sticks and $5 servos would make for an excellent example of MoveIt for hobby ROS folk. Who knows, maybe that example will appear here in a future post.


Sunday, January 4, 2015

One Small Step for Terry, one giant leap for robotkind.

This evening the BeagleBone Black was finally unbolted from Terry. All the sensors, servos, gearmotors, and PWM partial-update muxed-down RGB DMD screens are controlled from ROS now.

In the process I replaced the physical pan subsystem with a torque multiplied 6 rpm gearmotor, its 80:24 reduction giving 1.8 effective rpm at the pan. Unlike the previous setup, the torque is now so high that you cannot manually turn the pan system. It occurred to me that this is also a minor hazard: if you get a finger in the gear mesh it would not lead to a "nice place". This is also now using a hollow 1/2 inch metal tube, so I can add a slip ring to allow the pan and tilt to rotate freely at will. I can also much more easily include an IC based pot to close the pan feedback loop with high precision.

It has been interesting going from a board with so many headers for SPI, I2C, UART and GPIO to a general purpose small desktop motherboard running the show. The upside is that things like SLAM and localization, which would be tedious to reimplement myself, are now available. It might be fun to play with Monte Carlo localization on an mbed based MCU (target point at around 100MHz / 100kb SRAM). But that's for another day.

The next trick is bringing together the launch files for each subsystem into a Terry-Main.launch that brings it all up.
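
Something along these lines, with the package and file names made up for illustration:

<launch>
  <!-- Hypothetical top-level launch file pulling in each subsystem. -->
  <include file="$(find terry_base)/launch/base_controller.launch"/>
  <include file="$(find terry_arm)/launch/arm.launch"/>
  <include file="$(find terry_pan_tilt)/launch/pan_tilt.launch"/>
  <include file="$(find terry_web)/launch/rosbridge_web.launch"/>
</launch>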

Pictures and video are extremely likely to follow.