I’ve been very quiet here for some time, basically because I’ve been too busy building things to write, although I have started covering non-technical startup engineering topics over on Medium (so check those out if you’re interested).

Time to start adding at least some short bits here, though. Recently I’ve been getting addicted to coding in Go, which reminds me of my old C programming days but with most of the ugly stuff sanded off. It’s simple, fast, clean and easy to read, “object oriented” in the ways that matter (interfaces), and the packaging and dependency management so far seems well-thought-out.

Alongside a growing belief that microservices (or whatever you want to call it since that term is so trendy now) are the way to go once a system reaches a certain size, and a desire to make things API-driven, I threw together an extremely simple web service skeleton in Go. It doesn’t do much — it’s there to serve as a quick starting point, including a Dockerfile and process for building the Docker image.

The web service uses Martini, which is a nice clean layer on top of the standard HTTP library. This example service really just shows its routing and ability to serve static assets. The repository is here: https://github.com/masonoise/go-service-example

You can clone the repository, run “godep save -r” since the import in the server.go file expects the Martini dependency to be vendored, and then “go run server.go” will bring it up locally. The README gives the simple steps to build and run a Docker image in a way that results in an extremely small image. All credit for this goes to Travis Reeder / Iron.io thanks to this very handy blog post.

All of the code here is in the server.go file, which is (I think) self-explanatory. Note that getting this running assumes you have Go installed, your GOPATH set up, and so forth. Getting the Docker image running of course assumes you have the Docker Toolbox installed and Docker running.

Try it out, and enjoy!

I needed to put in a nice Date and Time selector into a form that’s using AngularJS, and wondered what might work. Searches came up with the Bootstrap Datetimepicker (on Github here) and a couple of related Stackoverflow posts, but nothing that laid it out simply. I ended up doing it mostly-but-not-quite by-the-book AngularJS, so I thought I’d write up a quick post about it in case it helps someone else.

Grab the download of the Datetimepicker from the site, and do the usual if you’re working in Rails like I am: copy the JS file into vendor/assets/javascripts, and the datetimepicker.css into vendor/assets/stylesheets, and add them to your application JS and SCSS files. From there, I made an AngularJS directive, mostly based on the one in this Stackoverflow post, though I had to make some tweaks to it. First, the HTML, which is simple enough; just put the directive into your page wherever you want the input field:

<date-time-picker recipient="recipient"></date-time-picker>

In my case, I have an ng-model called “recipient”, representing the person for whom the user is scheduling an event, so I passed in the recipient to the directive. The directive is as follows:

.directive('dateTimePicker', function() {
  return {
    restrict: 'E',
    replace: true,
    scope: {
      recipient: '='
    },
    template:
      '<div>' +
      '<input type="text" readonly data-date-format="yyyy-mm-dd hh:ii" name="recipientDateTime" data-date-time required>' +
      '</div>',
    link: function(scope, element, attrs) {
      var input = element.find('input');

      // Initialize the Bootstrap datetimepicker widget on the input
      input.datetimepicker({
        format: "mm/dd/yyyy hh:ii",
        showMeridian: true,
        autoclose: true,
        todayBtn: true,
        todayHighlight: true
      });

      // Copy the picked value onto the recipient as the user leaves the field
      element.bind('blur keyup change', function(){
        scope.recipient.datetime = input.val();
      });
    }
  };
});

The setup specifies the recipient object that’s passed in, so that the directive can set the datetime property of it. The template is straightforward, of course, being a normal text input with the datetimepicker options — I realized while writing this that there’s not really any need to specify the format there since it’s also specified down below in the JavaScript initializer. The initializer will take precedence in this case.

The link function is the first important piece here. It finds the input field within the directive’s element, and then initializes the datetimepicker widget on it. You can specify any of the datetimepicker options here, of course. The next piece is the one that really makes things work: the bind() call. This is called when the input field is changed/blurred, and it sets the datetime property of the recipient object to the value of the input field. Essentially this works because the user will select the date and the time using the widget, then move out of the input field to submit the form. As they leave the field, this sets the recipient.datetime property, so when the form is submitted, the AngularJS object has the desired value.

A nice side-effect of the way this works is that it supports multiple datetime fields for multiple recipients. I have my recipient list in an ng-repeat block with an add button, and as additional recipients are added with different dates/times, everything works as it should. Each recipient has their own datetime field and value.

I’ve recently started using Mailgun, which I like quite a bit, but I stumbled on an issue dealing with attachments, because the files I needed to attach are stored in S3. Using RestClient to send the emails, the expectation is that attachments are files. As it turns out, using the aws-sdk gem, S3 objects don’t quite behave like files, so it doesn’t work to simply toss the S3Object instance into the call.

The standard setup for sending an email is like the following:

 data = Multimap.new
 data[:from] = "My Self <no-reply@example.com>"
 data[:subject] = "Subject"
 data[:to] = recipients.join(', ')
 data[:text] = text_version_of_your_email
 data[:html] = html_version_of_your_email
 data[:attachment] = File.new(File.join("files", "attachment.txt"))
 RestClient.post "https://api:#{API_KEY}"\
   "@api.mailgun.net/v2/#{MAILGUN_DOMAIN}/messages", data

As this shows, the attachment is expected to be a File that can be read in. So the challenge was to make an S3 object readable. One option, of course, is to do this in two steps: read in the S3 object and use Tempfile to write a file which can then be used as the attachment. This seemed pretty unfortunate. For one thing, I’m running stuff on Heroku, and try to avoid using the file system even for temp files. But primarily, it’s really wasteful to have to write, and then read, a transitory file. The better option, of course, was to see if there was a way to trick the client into reading from S3.

Thanks to some very nice help from Mailgun support (thanks Sasha!), the idea of writing a wrapper seemed feasible, and in fact it wasn’t too bad aside from a couple of tricky issues. A side-effect advantage was that it solved another problem: the naming of the attachment. By default, the name of the attachment is the name of the file, which is pretty ugly if you use a temp file. Not user-friendly.

I’ve put the wrapper file in a Github Gist at https://gist.github.com/masonoise/5624266. It’s pretty short, and there were only a couple of gotchas, which I describe below. The key for this wrapper is to provide the methods that RestClient needs: #read, #path, #original_filename, and #content_type. It’s pretty obvious what #read and #path are for. The attachment naming problem is solved by #original_filename: whatever it returns will be the name of the attachment in the email. It should be clear what #content_type does, but see below for why it’s important.

Using the wrapper is described in the header comment, but it’s mainly a change to give RestClient the wrapper instead of a File object:

data[:attachment] = MailgunS3Attachment.new(file_name, file_key)

The first gotcha was that RestClient calls #read repeatedly for 8124 bytes, and doesn’t pass a block. This forced me to write a crude sort of buffering — the wrapper reads the whole S3 object in, then hands out chunks of it when asked. This isn’t a problem for me because the files I’m dealing with aren’t very large, but it’s something I warn about in the comments. If you have large files, this may be a problem for you, so beware.

The second gotcha that threw me off for a little while is that the value returned by #content_type is important. I haven’t researched exactly why this is, but I found that if I tried to send a Word document but #content_type returns ‘text/plain’, the attachment comes through corrupted. It was easy enough for me to check the filename suffix and set the content type accordingly, but I can imagine cases where this might not work, so this is something else to beware of.

Anyway, this solved the issue for me, and hopefully it’ll be useful for others. There are ways to make this a bit more elegant, but it’s a short piece of code that works. Enjoy.

Quick little tip to hopefully save others the hassle of tracking this down… I just added some pjax to an app in order to quick-load some content into a DIV in the page. However, the content includes a small AngularJS app, and when the content got loaded, the app wasn’t getting initialized, so instead of nice content I had mustache tags {{all over the place}}. Not so nice. After some searching and testing, I found that the following works:

    $(document).pjax('a[data-pjax]', '#content-panel');
    $(document).on('pjax:complete', function() {
        angular.bootstrap($("#content-panel")[0], ["my-app"]);
    });

That waits for the pjax request to complete, then boots the AngularJS app, and everything is good again.

Last week I started in on learning AngularJS and putting it to work on the app at my new company, FinderLabs. I have a long post coming up detailing what I’ve learned about setting up an AngularJS page backed by a simple Rails API, but for today I just wanted to jot down some notes about creating an AngularJS directive, because I found it pretty painful figuring it out — some of the AngularJS docs are quite good, and some of them are lacking. Thankfully there are a lot of examples out there, but I had to look over too many of them to get this working. Note that I’m only moderately comfortable with JavaScript coding, so your mileage may vary.

So, what I wanted to do was really easy with plain old jQuery, but it turned out to be more complicated when AngularJS entered the picture. Inside a list of items, for each item I had a DIV into which I was rendering a line graph, using Flotr2, and I had to pass it the JSON data needed for the charting. Before converting my page to AngularJS, I simply iterated my list in Ruby and called item.json_chart_data for each one, and called Flotr2. Not any more; now I’m using ng-repeat on a collection of items in my controller scope. What to do? I could in theory stuff the chart data into my items when my API returns them to my controller, but that was overloading the items themselves. So instead, I created a directive that loads the chart data for each item via an AJAX call.

Here’s the relevant markup from the page:

    <div class="item-graph" id="item-graph-{{item.id}}" style="width: 100px; height: 50px; padding-right: 20px">
      <item-graph item="{{item.id}}"></item-graph>
    </div>

This defines the DIV that Flotr2 is going to draw into, with the width and height, and then inside it is my directive, “item-graph”. I pass the item id into the directive, so that it can make the AJAX call to the server to get the graph data. Now let’s look at the directive:

var app = angular.module('my-app', ['restangular', 'ui.bootstrap']).
  config(function(RestangularProvider) {
    RestangularProvider.setBaseUrl('/api/v1');
  })
.controller('MyListCtrl', function($scope, Items) {
  $scope.items = Items;
})
.directive('itemGraph', function($http) {
  return {
    restrict: 'E',
    scope: {
      itemId: '@item'
    },
    link: function(scope, element, attrs) {
      scope.$watch('itemId', function(value) {
        if (value) {
          $http.jsonp("/items/" + value + "/item_graph_data.json?callback=JSON_CALLBACK").then(function(response){
            draw_line_graph(document.getElementById('item-graph-' + value), response.data);
          });
        }
      });
    }
  };
});
This is all of the code for the page; I’ll gloss over the top part since that’s pretty standard AngularJS stuff. It defines the app, injecting the dependencies. I use Restangular for most of the server interaction; it does a really nice job of encapsulating things in a clear, RESTful way. The code configures Restangular with the base URL, since my server-side routes are in the ‘/api/v1’ space. Then we define the controller, and use the Items service (see below) to fetch the items. Then we get into the more interesting part: the directive.

First, note the name of the directive: “itemGraph”. But in the page markup, the tag is “item-graph” — it’s an irritating inconsistency, but just remember that directive names are “camel cased” while the name in your markup is hyphenated (“dash cased”). Thus “item-graph” in the page matches up with “itemGraph” in the code. Whatever. So then in the declaration we inject the $http service so it can be used later.

Directives basically return an object with a couple of important sections. The first is random configuration, of which the restrict is very important, and this caused me more than a few minutes of debugging. I seemed to have everything wired up, but nothing was happening. That’s because restrict defaults to ‘A’, which specifies that the directive is restricted to attributes. I needed ‘E’ for element. This is described in the AngularJS page on directives (here), but it’s a long, long way down the page buried in details about compiling and the “directive definition object”. Major documentation fail, yes. In any case, don’t make my mistake; as soon as I added this, it started invoking my code.

The second piece needed is the scope, which you can think of as what ties the attributes in your page to your directive. In this case, remember that in my page I specified item="{{item.id}}" in order to pass in the item id. In this scope block we specify itemId:'@item', which says that we want a binding between the local property ‘itemId’ and the attribute ‘item’. The ‘@’ indicates a one-way data binding. I highly recommend reading this great blog post that describes the use of attributes, bindings, and functions. You’ll be glad you did.

Whew, okay, so now we have the directive attached to our element, and we have the attribute bound to a local property so we can use it. How do we use it? Well, it’s a little complicated, but not too bad. First: think of the link() function as what’s invoked when the directive is “activated” for an element. So in here, we want to take the item id that was passed in, and do something. However, it turns out that you don’t simply use itemId directly, or (as a normal person might assume) go with attrs.itemId; instead, we have to use our scope, and “watch” the property. In fairness, this is because often what you’re doing is binding your directive to something in the page that can change, such as a form field. Then by watching it, your directive can respond to changes in that attribute. In my case that wasn’t needed, but I still have to play by the rules. So okay, call scope.$watch() on ‘itemId’, and then write a function to deal with it. The watch function also fires once when the directive is first linked, before the interpolated attribute value has been filled in, so we make sure there is actually a value. If there is, let’s finally do something.

What I wanted to do in my case is make a quick call back to the server, specifying the item id — which, remember, is now represented by “value” since that’s the parameter name on the watch function. When it returns, it calls draw_line_graph(), another function that sets up Flotr2 and does the drawing. It draws the graph into the DOM element passed in, with the data also passed in.

And that’s it. Seems like a lot of code to do what could be done in a couple of lines before, to be honest, but it’s packaged and reusable, and easily copied to do more complex things. One last thing; as promised I wanted to include the Items service which is used to get the initial list of items for the page, just in case someone finds it useful. It’s in another small JS file:

var app = angular.module('my-app');

app.factory('Items', function($http, Restangular) {
  var baseItems = Restangular.all('items');
  return baseItems.getList();
});

That’s all there is to it, using Restangular to get the list. This automagically ends up invoking the server URL ‘/api/v1/items’ and returns the JSON response. Restangular is even nicer when you start getting into nested relationships and other more complex needs.

I’ve made some of this generic (“my-app” and “Items”), since I can’t show full details of the app I’m working on, but hopefully it will also make it easier for this to be used by others. I hope this saves others who are new to AngularJS some of the pain I had in figuring all of this out.

I’ve recently had the chance and excuse to play with Elasticsearch, after reading good things about it. We’ve been using Solr with decent success, but it feels like whenever we try to do anything outside the normal index-and-search it’s more complicated than it should be. The basics are easy thanks to the terrific Sunspot gem, though. So when I had a small project to prototype that involved indexing PDFs as well as database records, I figured it was a good opportunity to try out Elasticsearch.

I quickly reached for the Tire gem, which is very similar to Sunspot if you’re using ActiveRecord. Where Sunspot has you include a “searchable” block, Tire adds a “mapping” block, but the idea is the same — that’s where you tell it what fields to index, and how to do it. For each field you can adjust the data type, boost, and more. You can also tack on a “settings” block to adjust things like the analyzers.

The documentation for Tire is pretty good, but I found that I made a number of mistakes trying to adapt the instructions on the Elasticsearch site to the Tire way of doing things, so I thought I’d write up some of the things I learned in hopes that it can help save time for others. Many thanks to the folks on StackOverflow who answered my questions and pointed me in the right direction.

One starter suggestion is to configure Tire’s debugger, which is really convenient because it will output the request being sent to the ES server as a curl command that you can copy and paste into a terminal for testing. Very handy. I added this to my config/environments/development.rb file:

  Tire.configure do
    logger STDERR, :level => 'debug'
  end

Now on to the model. I’ll call mine Publication, so inside app/models/publication.rb:

class Publication < ActiveRecord::Base
  include Tire::Model::Search
  include Tire::Model::Callbacks

  attr_accessible :title, :isbn, :authors, :abstract, :pub_date

  settings :analysis => {
    :filter  => {
      :ngram_filter => {
        :type => "nGram",
        :min_gram => 2,
        :max_gram => 12
      }
    },
    :analyzer => {
      :index_ngram_analyzer => {
        :type  => "custom",
        :tokenizer  => "standard",
        :filter  => ["lowercase", "ngram_filter"]
      },
      :search_ngram_analyzer => {
        :type  => "custom",
        :tokenizer  => "standard",
        :filter  => ["standard", "lowercase", "ngram_filter"]
      }
    }
  } do
    mapping :_source => { :excludes => ['attachment'] } do
      indexes :id, :type => 'integer'
      indexes :isbn
      [:title, :abstract].each do |attribute|
        indexes attribute, :type => 'string', :index_analyzer => 'index_ngram_analyzer', :search_analyzer => 'search_ngram_analyzer'
      end
      indexes :authors
      indexes :pub_date, :type => 'date'
      indexes :attachment, :type => 'attachment'
    end
  end

  def to_indexed_json
    to_json(:methods => [:attachment])
  end

  def attachment
    if isbn.present?
      path_to_pdf = "/Users/foobar/Documents/docs/#{isbn}.pdf"
      Base64.encode64(open(path_to_pdf) { |pdf| pdf.read })
    end
  end
end
Okay, that’s a lot of code, so let’s look things over bit by bit. Naturally the includes at the top are needed to mix-in the Tire methods. There are two includes so that you can include the calls needed for searching without the callbacks if you don’t need them. The callbacks, though, are what make things work auto-magically when you persist ActiveRecord objects. With those, whenever you call save() on an AR model, the object will be indexed into ES for you.

Next up are two blocks of code: settings, and mapping. The settings block defines a filter, and two analyzers, one for indexing and one for searching. I can’t claim to be enough of an expert yet to fully explain the ramifications of the filter/analyzer options, so rather than risk confusion I’ll just note that this code is there to set up the nGram filter and connect it with two analyzers, index and search, which differ slightly in order to ensure that the standard filter is included for searching. You may want to play with the nGram’s min and max settings to get the matching behavior you want. Note that if you don’t need the nGram filter, you can remove the settings block and let the mapping block stand on its own, in which case the default settings will be used (but you’ll have to change the mapping entry for the :title and :abstract fields, as described below).
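To get a feel for what min_gram and max_gram control, here is a plain-Ruby illustration (not Elasticsearch code) of the substrings an nGram filter emits for a term:

```ruby
# A sketch of what the nGram filter produces: every substring of the term
# whose length falls between min_gram and max_gram. These substrings are
# what actually get indexed, which is why partial matches work.
def ngrams(term, min_gram, max_gram)
  (min_gram..max_gram).flat_map do |n|
    (0..term.length - n).map { |i| term[i, n] }
  end
end

puts ngrams("book", 2, 12).inspect  # ["bo", "oo", "ok", "boo", "ook", "book"]
```

So with the settings above, a search for “boo” can match a document whose title contains “book”, without needing a full-word match.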

The mapping block is the more interesting one, as it defines the fields and indexing behavior. The first line took me some searching and StackOverflow questioning to figure out. The issue is that, by default, Elasticsearch will put all of the fields you index into its _source storage. Because I’m indexing large PDF documents, the result was that a huge Base64-encoded field was being stored. If I wanted to serve the PDFs out of Elasticsearch that might be okay, but that’s not the plan. The :excludes instruction prevents the attachment field from being stored.

Next are the fields themselves, and I won’t spend much time on these because the Tire documentation does a fine job of explaining these. The only interesting items are the :attachment field and the entry for :title and :abstract — that one specifies that for those fields the custom analyzers defined in the settings block should be used. For :attachment it gets a little bit tricky.

When the indexing is performed, the fields themselves are gathered up by calling the method to_indexed_json(). Normally that will just do a to_json() on your model and then collect the fields. But you can also override it, which we do here. You can see that we add in the method attachment(), which is defined below. So the other fields will be JSONized as normal, as well as the output of the attachment() method. The attachment() method itself uses the ISBN number to open the PDF file, which is read and Base64-encoded. The results of that encoding will be included with the other fields and sent to ES for indexing.
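The shape of what gets sent to Elasticsearch can be illustrated in plain Ruby. The field values here are made up, and this is only a sketch of the merged document, not Tire internals:

```ruby
require 'base64'
require 'json'

# Illustrative only: roughly what the indexed document looks like once
# to_indexed_json merges the model fields with the encoded attachment.
pdf_bytes = "%PDF-1.4 fake file contents"  # stands in for the real PDF bytes
doc = {
  :title      => "Example Title",
  :isbn       => "9780000000000",
  :attachment => Base64.encode64(pdf_bytes)
}
puts doc.to_json
```

Thanks to the :excludes instruction in the mapping, that large attachment field is indexed but not kept in _source.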

Performing the searching is almost too easy, but there was one bit that threw me off initially, which was getting highlighting to work. The search block in my controller looks like this:

results = Publication.search do
  query { string query_string }
  sort { by :pub_date, 'desc' }
  highlight :title, :options => { :tag => "<strong class='highlight'>" }
end

I was trying to test the highlighting and was thrown off by the field names being case-sensitive (see my question on StackOverflow), but this now works. The other key is that the highlighted fields are returned separately from the plain fields, which was odd to see. This means that to display the highlighting I have to check for the field:

results.each do |r|
  r_title = (r.highlight.nil? ? r.title : r.highlight.title[0])
  puts "Title: #{r_title}"
end

If the highlighting is present then it’s used; if not (because the term isn’t present in that field) then the regular field is used. The other handy thing to note is that you can specify the tag with which to wrap the term. The default is “<em>” but I wanted to specify the “highlight” CSS class, as is shown here. This is a really convenient feature.

That covers the basics, though it’s also probably worth sharing just how nice it is to be able to test using curl. For example, I wanted to check how easy it is to have the search call return just a single field (to speed up certain requests), so I tried it first in curl:

curl -XPOST http://localhost:9200/publications/_search\?pretty\=true -d '{
"query": {"query_string": {"query": "Foobar"}},
"fields": ["title"]
}'

That’s of course assuming that ES is running on your local system on port 9200; if not, adjust accordingly.

There you go. I hope this writeup is helpful to folks getting started and it saves you some time.

Recently we wanted to add a nice-looking timeline view to some data in a web app, and looked around for good Javascript libraries. Perhaps the coolest one is Timeline JS from Verite, but while gorgeous it’s also super heavyweight, and pretty much demands a page all to itself. We wanted something more economical that could share the page with some other views, and decided on Timeline-Setter, which creates pretty little timelines with just enough view area to provide important information about each event.

However, as-provided, Timeline-Setter wants to exist as a command-line utility. You get a data file ready, run the utility, and it generates a set of HTML (and associated) files that you can drop into an app. That’s dandy if you have a set of data that doesn’t change often, or you want to perhaps run a cron to pre-generate a bunch of timelines. We needed something that could generate a timeline for a dynamic set of data, so we weren’t sure Timeline-Setter would work for us. However, looking it over I thought it seemed potentially usable in a dynamic way. I generated a static example using our data, read through what it had created, and deconstructed what it wanted in order to display the timeline, then wrote some code to dynamically generate the JSON data necessary. It wasn’t too difficult, and fairly shortly we had dynamic timelines going. I wanted to share the info here since it’s a pretty nice library that others should get a lot out of.

We’re using Jammit to handle our static assets, so we simply put “public/javascripts/timeline-setter.js” and “public/stylesheets/timeline-setter.css” into our assets.yml file, but you can use whatever standard approach you take to including JS and CSS into your pages. Once that’s done, you’re ready to go.

Timeline-Setter takes a pretty standard approach to placing itself in the page: it takes the ID of a DIV, and that’s the container which will hold the timeline. One note: we needed to include multiple timelines in a single page, so we had to do a little creative naming of the DIVs that hold the timelines, as you’ll see below.

<div id="timeline-<%= author.id %>" class="timeline_holder"></div>
<script type="text/javascript">// <![CDATA[
    $(function () {
      var currentTimeline = TimelineSetter.Timeline.boot([<%= author.make_timeline_json %>], {"container":"#timeline-<%= author.id %>", "interval":"", "colorReset":true});
    });

// ]]></script>

The code here creates the DIV, gives it an ID, and then places a Javascript call to the Timeline-Setter boot() function, which tells it to generate the timeline. The first parameter is the JSON holding the data for the timeline; the second is a set of options, passed in as JSON. “container” of course is the ID of the DIV which will contain the timeline. Other options include “interval”, “formatter”, and “colorReset” among others. See the library’s page as listed at the beginning for details of the API, and in particular see the section headed “Configuring the Timeline JavaScript Embed” for the basics of calling the boot() function.

Now of course we need the make_timeline_json() method, which will take our object’s data and create the JSON needed by Timeline-Setter. As an example here, let’s pretend that we’re showing a timeline of books written by an author over the years.

class Author < ActiveRecord::Base
  def make_timeline_json
    timeline_list = []
    if (!birth_date.nil?)
      timeline_list << "{'html':'','date':'#{birth_date}','description':'Birth Date:','link':'','display_date':'','series':'Events','timestamp':#{(birth_date.to_time.to_i * 1000)}}"
    end
    books.each do |book|
      author_list = book.authors.map { |auth| "#{auth['name']}" }.join('; ')
      timeline_list << "{'html':'Authors: #{author_list}<br/>Publisher: #{book.publisher}','date':'#{book.pub_date}','description':'#{book.title}','link':'','display_date':'','series':'Publications','timestamp':#{(book.pub_date.to_time.to_i * 1000)}}"
    end
    if (!death_date.nil?)
      timeline_list << "{'html':'','date':'#{death_date}','description':'Death Date:','link':'','display_date':'','series':'Events','timestamp':#{(death_date.to_time.to_i * 1000)}}"
    end
    timeline_list.join(',')
  end
end

Essentially, this method creates a JSON string containing a set of entries, each representing an event to place on the timeline. Each entry has several fields: html, date, description, link, display_date, series, and timestamp. Not all of these are used here, but with the basics you can experiment further. The important fields are:

  • html: This will be displayed in the event’s pop-up when clicked on.
  • description: Just what it says.
  • link: an optional URL which will be associated with a link in the event pop-up.
  • series: which “series” this event belongs to; see below for details on this.
  • timestamp: This is the timestamp associated with the event, used to construct the timeline in order.

A note about the “series” parameter: one very nice feature of Timeline-Setter is that you can display more than one set of events in a single timeline. Each set of events is called a “series”. In our example we’re creating two series: “Events” and “Publications”. Each will be shown in a different color, with a title (so the names of the series need to look nice, as they’ll be displayed) and a checkbox so that a viewer can hide and show each series individually. It’s extremely useful.

In the code above, you’ll see that we create the “Birth Date” and “Death Date” events individually, but in the middle we iterate over the books associated with this author. For each book we build a string of authors, semicolon-delimited, just to demonstrate one way to include another list of information in an event’s HTML. The timestamps are multiplied by 1000 because JavaScript measures time in milliseconds since the epoch, while Ruby’s Time#to_i returns seconds.
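A quick sanity check on that conversion, using an arbitrary date:

```ruby
# Ruby's Time#to_i gives seconds since the epoch; JavaScript Dates
# expect milliseconds, hence the * 1000 in make_timeline_json.
pub_date = Time.utc(2000, 1, 1)
seconds = pub_date.to_i        # 946684800
js_timestamp = seconds * 1000  # 946684800000, what Timeline-Setter wants
puts js_timestamp
```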

And there you are. Hopefully anyone needing an economical, nice-looking timeline with dynamically-generated data can take advantage of this. But certainly, if you can work with static (or infrequently-updated) data, you may be able to use Timeline-Setter out of the box via a cron job or rake task — you could generate the CSV file for its command-line interface, run it, then copy the resulting files into your application. If you need dynamic timelines, though, I hope this post is helpful.

