I’m still figuring things out with React-Native, but yesterday I was working on getting a form to automatically scroll in order to prevent the keyboard from covering the fields being entered. You’d think this would be built-in behavior for TextInput, but not yet. Maybe eventually. In the meantime, I had to figure out how to make ScrollView work with TextInput, and then make TextInput auto-scroll. The auto-scroll was fairly easy once I found a good StackOverflow post (see the answer from Sherlock) about it, though there were some unanswered bits involved. Getting ScrollView to work with TextInput is sort of straightforward except I stumbled over weird behavior regarding contentContainerStyle.

I’ve made a working example at rnplay.org, so you can check it out at https://rnplay.org/apps/P774EQ. I had to post to StackOverflow with a question at one point because for some reason if you forget the contentContainerStyle, the Text elements inside the ScrollView will still render, but the TextInputs don’t. I’m still not sure why that is, but it made it tricky to figure out the problem. In any case, once everything is rendering the way you want it, then the next thing is to get the auto-scroll working.

As you’ll see in the example, there are a few important steps:

  1. Make sure your ScrollView has ref='scrollView' on it so the inputFocused() function can find it.
  2. Make sure either your TextInput or a wrapper around it has a ref as well; this gets passed to the function so the scroll can find the field and position it appropriately. In my case I put ref='firstname' on the wrapping View.
  3. Next, put the onFocus on the TextInput, and make sure it passes the name of the ref. If you have a form with multiple fields, you'll have this on every TextInput, each with its own ref. See line 44 in the example.
  4. Then add the inputFocused() function to your code, which comes straight from the StackOverflow post referenced above (thanks again to Sherlock). As mentioned in that post, you can tweak the additionalOffset on line 71 if the positioning isn't quite right for your layout. A sketch of how the pieces fit together follows this list.
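
To make the steps concrete, here's a minimal sketch of the pattern (pared down from the rnplay example; styles.container is a stand-in for your own styles, and the inputFocused() body is adapted from Sherlock's answer):

// In render(): note contentContainerStyle -- without it my TextInputs didn't render
<ScrollView ref='scrollView' contentContainerStyle={styles.container}>
  <View ref='firstname'>
    <TextInput onFocus={() => this.inputFocused('firstname')} />
  </View>
</ScrollView>

// The scroll handler, adapted from the StackOverflow answer
inputFocused(refName) {
  // Wait a tick for the keyboard, then scroll the focused field into view
  setTimeout(() => {
    let scrollResponder = this.refs.scrollView.getScrollResponder();
    scrollResponder.scrollResponderScrollNativeHandleToKeyboard(
      React.findNodeHandle(this.refs[refName]),
      110, // additionalOffset -- tweak this if positioning is off for your layout
      true
    );
  }, 50);
}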

That's really everything you need to do. I have this working for a view with three form fields, and it scrolls up nicely for each one. Hopefully this behavior will get built into TextInput eventually, but in the meantime this should help anyone trying to do the same.

I’ve been very quiet here for some time, basically because I’ve been too busy building things to write, although I have started covering non-technical startup engineering topics over on Medium (so check those out if you’re interested).

Time to start adding at least some short bits here, though. Recently I've been getting addicted to coding in Go, which reminds me of my old C programming days but with most of the ugly stuff sanded off. It's simple, fast, clean and easy to read, "object oriented" in the ways that matter (interfaces), and the packaging and dependency management so far seem well thought out.

Alongside a growing belief that microservices (or whatever you want to call it since that term is so trendy now) are the way to go once a system reaches a certain size, and a desire to make things API-driven, I threw together an extremely simple web service skeleton in Go. It doesn’t do much — it’s there to serve as a quick starting point, including a Dockerfile and process for building the Docker image.

The web service uses Martini, which is a nice clean layer on top of the standard HTTP library. This example service really just shows its routing and ability to serve static assets. The repository is here: https://github.com/masonoise/go-service-example
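
For flavor, the whole service amounts to little more than this kind of thing (a minimal sketch, not the repo's exact server.go; the route and handler here are illustrative):

package main

import "github.com/go-martini/martini"

func main() {
	// Classic() bundles logging, panic recovery, and static file
	// serving from the "public" directory
	m := martini.Classic()
	m.Get("/hello/:name", func(params martini.Params) string {
		return "Hello " + params["name"]
	})
	m.Run() // listens on :3000 by default
}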

You can clone the repository, run “godep save -r” since the import in the server.go file expects the Martini dependency to be vendored, and then “go run server.go” will bring it up locally. The README gives the simple steps to build and run a Docker image in a way that results in an extremely small image. All credit for this goes to Travis Reeder / Iron.io thanks to this very handy blog post.
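
The short version of that trick: compile a fully static binary, then build the image FROM scratch so it contains nothing but your service. A sketch of the idea (binary name and port are illustrative; the README has the actual steps):

# build a static binary so the image needs no libc
CGO_ENABLED=0 go build -o go-service-example

# Dockerfile
FROM scratch
ADD go-service-example /
EXPOSE 3000
ENTRYPOINT ["/go-service-example"]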

All of the code here is in the server.go file, which is (I think) self-explanatory. Note that getting this running assumes you have Go installed, your GOPATH set up, and so forth. Getting the Docker image running of course assumes you have the Docker Toolbox installed and Docker running.

Try it out, and enjoy!

I needed to put in a nice Date and Time selector into a form that’s using AngularJS, and wondered what might work. Searches came up with the Bootstrap Datetimepicker (on Github here) and a couple of related Stackoverflow posts, but nothing that laid it out simply. I ended up doing it mostly-but-not-quite by-the-book AngularJS, so I thought I’d write up a quick post about it in case it helps someone else.

Grab the download of the Datetimepicker from the site, and do the usual if you’re working in Rails like I am: copy the JS file into vendor/assets/javascripts, and the datetimepicker.css into vendor/assets/stylesheets, and add them to your application JS and SCSS files. From there, I made an AngularJS directive, mostly based on the one in this Stackoverflow post, though I had to make some tweaks to it. First, the HTML, which is simple enough; just put the directive into your page wherever you want the input field:

<date-time-picker recipient="recipient"></date-time-picker>

In my case, I have an ng-model called “recipient”, representing the person for whom the user is scheduling an event, so I passed in the recipient to the directive. The directive is as follows:

.directive('dateTimePicker', function() {
  return {
    restrict: 'E',
    replace: true,
    scope: {
      recipient: '='
    },
    template:
      '<div>' +
      '<input type="text" readonly data-date-format="yyyy-mm-dd hh:ii" name="recipientDateTime" data-date-time required>' +
      '</div>',
    link: function(scope, element, attrs) {
      var input = element.find('input');
      // Initialize the Bootstrap datetimepicker widget on the input
      input.datetimepicker({
        format: "mm/dd/yyyy hh:ii",
        showMeridian: true,
        autoclose: true,
        todayBtn: true,
        todayHighlight: true
      });
      // Push the selected value into the bound recipient object
      element.bind('blur keyup change', function() {
        scope.recipient.datetime = input.val();
      });
    }
  };
});
The scope setup binds the recipient object that's passed in, so the directive can set its datetime property. The template is straightforward: a normal text input with the datetimepicker options. I realized while writing this that there's no real need to specify the format there, since it's also specified in the JavaScript initializer below, and the initializer takes precedence.

The link function is the first important piece here. It finds the input field within the directive’s element, and then initializes the datetimepicker widget on it. You can specify any of the datetimepicker options here, of course. The next piece is the one that really makes things work: the bind() call. This is called when the input field is changed/blurred, and it sets the datetime property of the recipient object to the value of the input field. Essentially this works because the user will select the date and the time using the widget, then move out of the input field to submit the form. As they leave the field, this sets the recipient.datetime property, so when the form is submitted, the AngularJS object has the desired value.

A nice side-effect of the way this works is that it supports multiple datetime fields for multiple recipients. I have my recipient list in an ng-repeat block with an add button, and as additional recipients are added with different dates/times, everything works as it should. Each recipient has their own datetime field and value.
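
For illustration, that part of my page looks roughly like this (simplified; recipients and the inline push() are stand-ins for my actual add logic):

<div ng-repeat="recipient in recipients">
  <date-time-picker recipient="recipient"></date-time-picker>
</div>
<button ng-click="recipients.push({})">Add recipient</button>

Since the directive uses an isolate scope with recipient: '=' (a two-way binding), each repeated instance gets its own recipient object to write the datetime into.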

I’ve recently started using Mailgun, which I like quite a bit, but I stumbled on an issue dealing with attachments, because the files I needed to attach are stored in S3. Using RestClient to send the emails, the expectation is that attachments are files. As it turns out, using the aws-sdk gem, S3 objects don’t quite behave like files, so it doesn’t work to simply toss the S3Object instance into the call.

The standard setup for sending an email is like the following:

 data = Multimap.new
 data[:from] = "My Self <no-reply@example.com>"
 data[:subject] = "Subject"
 data[:to] = "#{recipients.join(', ')}"
 data[:text] = text_version_of_your_email
 data[:html] = html_version_of_your_email
 data[:attachment] = File.new(File.join("files", "attachment.txt"))
 # MAILGUN_DOMAIN is your Mailgun sending domain
 RestClient.post "https://api:#{API_KEY}"\
   "@api.mailgun.net/v2/#{MAILGUN_DOMAIN}/messages", data

As this shows, the attachment is expected to be a File that can be read in. So the challenge was to make an S3 object readable. One option, of course, is to do this in two steps: read in the S3 object and use Tempfile to write a file which can then be used as the attachment. This seemed pretty unfortunate. For one thing, I'm running stuff on Heroku, and try to avoid using the file system even for temp files. But primarily, it's really wasteful to write, and then re-read, a transitory file. The better option, of course, was to see if there was a way to trick the client into reading from S3 directly.

Thanks to some very nice help from Mailgun support (thanks Sasha!), the idea of writing a wrapper seemed feasible, and in fact it wasn’t too bad aside from a couple of tricky issues. A side-effect advantage was that it solved another problem: the naming of the attachment. By default, the name of the attachment is the name of the file, which is pretty ugly if you use a temp file. Not user-friendly.

I’ve put the wrapper file in a Github Gist at https://gist.github.com/masonoise/5624266. It’s pretty short, and there were only a couple of gotchas, which I describe below. The key for this wrapper is to provide the methods that RestClient needs: #read, #path, #original_filename, and #content_type. It’s pretty obvious what #read and #path are for. The attachment naming problem is solved by #original_filename: whatever it returns will be the name of the attachment in the email. It should be clear what #content_type does, but see below for why it’s important.

Using the wrapper is described in the header comment, but it’s mainly a change to give RestClient the wrapper instead of a File object:

data[:attachment] = MailgunS3Attachment.new(file_name, file_key)

The first gotcha was that RestClient calls #read repeatedly for 8124 bytes, and doesn’t pass a block. This forced me to write a crude sort of buffering — the wrapper reads the whole S3 object in, then hands out chunks of it when asked. This isn’t a problem for me because the files I’m dealing with aren’t very large, but it’s something I warn about in the comments. If you have large files, this may be a problem for you, so beware.

The second gotcha that threw me off for a little while is that the value returned by #content_type is important. I haven’t researched exactly why this is, but I found that if I tried to send a Word document but #content_type returns ‘text/plain’, the attachment comes through corrupted. It was easy enough for me to check the filename suffix and set the content type accordingly, but I can imagine cases where this might not work, so this is something else to beware of.
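
Putting those pieces together, the wrapper's shape is roughly this (a sketch, not the full Gist; the bucket lookup and the content-type mapping here are simplified stand-ins):

class MailgunS3Attachment
  def initialize(file_name, file_key)
    @file_name = file_name
    @file_key = file_key
    # Read the whole S3 object up front -- crude buffering, fine for small files
    @data = AWS::S3.new.buckets['my-bucket'].objects[file_key].read
    @pos = 0
  end

  # RestClient calls this repeatedly (8124 bytes at a time) with no block,
  # so hand back successive chunks until the data runs out
  def read(length = nil)
    return nil if @pos >= @data.length
    chunk = @data[@pos, length || @data.length]
    @pos += chunk.length
    chunk
  end

  def path
    @file_key
  end

  # Whatever this returns becomes the attachment's name in the email
  def original_filename
    @file_name
  end

  # Must reflect the real file type, or attachments like Word docs arrive corrupted
  def content_type
    @file_name.end_with?('.pdf') ? 'application/pdf' : 'text/plain'
  end
end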

Anyway, this solved the issue for me, and hopefully it’ll be useful for others. There are ways to make this a bit more elegant, but it’s a short piece of code that works. Enjoy.

Quick little tip to hopefully save others the hassle of tracking this down… I just added some pjax to an app in order to quick-load some content into a DIV in the page. However, the content includes a small AngularJS app, and when the content got loaded, the app wasn’t getting initialized, so instead of nice content I had mustache tags {{all over the place}}. Not so nice. After some searching and testing, I found that the following works:

    $(document).pjax('a[data-pjax]', '#content-panel');
    $(document).on('pjax:complete', function() {
        angular.bootstrap($("#content-panel")[0], ["my-app"]);
    });

That waits for the pjax request to complete, then boots the AngularJS app, and everything is good again.

Last week I started in on learning AngularJS and putting it to work on the app at my new company, FinderLabs. I have a long post coming up detailing what I’ve learned about setting up an AngularJS page backed by a simple Rails API, but for today I just wanted to jot down some notes about creating an AngularJS directive, because I found it pretty painful figuring it out — some of the AngularJS docs are quite good, and some of them are lacking. Thankfully there are a lot of examples out there, but I had to look over too many of them to get this working. Note that I’m only moderately comfortable with JavaScript coding, so your mileage may vary.

So, what I wanted to do was really easy with plain old jQuery, but it turned out to be more complicated when AngularJS entered the picture. Inside a list of items, for each item I had a DIV into which I was rendering a line graph, using Flotr2, and I had to pass it the JSON data needed for the charting. Before converting my page to AngularJS, I simply iterated my list in Ruby and called item.json_chart_data for each one, and called Flotr2. Not any more; now I'm using ng-repeat on a collection of items in my controller scope. What to do? I could in theory stuff the chart data into my items when my API returns them to my controller, but that was overloading the items themselves. So instead, I created a directive that loads the chart data for each item via an AJAX call.

Here’s the relevant markup from the page:

    <div class="item-graph" id="item-graph-{{item.id}}" style="width: 100px; height: 50px; padding-right: 20px">

        <item-graph item="{{item.id}}"></item-graph>


This defines the DIV that Flotr2 is going to draw into, with the width and height, and then inside it is my directive, “item-graph”. I pass the item id into the directive, so that it can make the AJAX call to the server to get the graph data. Now let’s look at the directive:

var app = angular.module('my-app', ['restangular', 'ui.bootstrap'])
.config(function(RestangularProvider) {
  // My server-side routes live under /api/v1
  RestangularProvider.setBaseUrl('/api/v1');
})
.controller('MyListCtrl', function($scope, Items) {
  $scope.items = Items;
})
.directive('itemGraph', function($http) {
  return {
    restrict: 'E',
    scope: {
      itemId: '@item'
    },
    link: function(scope, element, attrs) {
      scope.$watch('itemId', function(value) {
        if (value) {
          $http.jsonp("/items/" + value + "/item_graph_data.json?callback=JSON_CALLBACK").then(function(response) {
            // draw_line_graph() sets up Flotr2 and renders into the item's DIV
            draw_line_graph(document.getElementById('item-graph-' + value), response.data);
          });
        }
      });
    }
  };
});
This is all of the code for the page; I’ll gloss over the top part since that’s pretty standard AngularJS stuff. It defines the app, injecting the dependencies. I use Restangular for most of the server interaction; it does a really nice job of encapsulating things in a clear, RESTful way. The code configures Restangular with the base URL, since my server-side routes are in the ‘/api/v1’ space. Then we define the controller, and use the Items service (see below) to fetch the items. Then we get into the more interesting part: the directive.

First, note the name of the directive: "itemGraph". But in the page markup, the tag is "item-graph". It's an irritating inconsistency, but just remember that directive names are camel-cased in the code while the matching name in your markup is dash-delimited. Thus "item-graph" in the page matches up with "itemGraph" in the code. Whatever. So then in the declaration we inject the $http service so it can be used later.

Directives basically return an object with a couple of important sections. The first is general configuration, of which restrict is very important, and this caused me more than a few minutes of debugging. I seemed to have everything wired up, but nothing was happening. That's because restrict defaults to 'A', which restricts the directive to attributes; I needed 'E' for element. This is described in the AngularJS page on directives (here), but it's a long, long way down the page, buried in details about compiling and the "directive definition object". Major documentation fail, yes. In any case, don't make my mistake; as soon as I added this, it started invoking my code.

The second piece needed is the scope, which you can think of as what ties the attributes in your page to your directive. In this case, remember that in my page I specified item="{{item.id}}" in order to pass in the item id. In this scope block we specify itemId:'@item', which says that we want a binding between the local property ‘itemId’ and the attribute ‘item’. The ‘@’ indicates a one-way data binding. I highly recommend reading this great blog post that describes the use of attributes, bindings, and functions. You’ll be glad you did.

Whew, okay, so now we have the directive attached to our element, and we have the attribute bound to a local property so we can use it. How do we use it? Well, it's a little complicated, but not too bad. First: think of the link() function as what's invoked when the directive is "activated" for an element. So in here, we want to take the item id that was passed in, and do something. However, it turns out that you don't simply use itemId directly, or (as a normal person might assume) go with attrs.itemId; instead, we have to use our scope and "watch" the property. In fairness, this is because often what you're doing is binding your directive to something in the page that can change, such as a form field. Then by watching it, your directive can respond to changes in that attribute. In my case that wasn't needed, but I still have to play by the rules. So okay, call scope.$watch() on 'itemId', and then write a function to deal with it. The watch function fires once right away, before the interpolated {{item.id}} value has been filled in, so we make sure there is actually a value before proceeding. If there is, let's finally do something.

What I wanted to do in my case is make a quick call back to the server, specifying the item id — which, remember, is now represented by “value” since that’s the parameter name on the watch function. When it returns, it calls draw_line_graph(), another function that sets up Flotr2 and does the drawing. It draws the graph into the DOM element passed in, with the data also passed in.
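
For reference, draw_line_graph() is just a thin wrapper around Flotr2, something along these lines (the options here are illustrative, not my exact styling):

function draw_line_graph(container, data) {
  // data is the series of points returned by the item_graph_data endpoint
  Flotr.draw(container, [ data ], {
    lines: { show: true },
    xaxis: { showLabels: false },
    yaxis: { showLabels: false }
  });
}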

And that's it. Seems like a lot of code to do what could be done in a couple of lines before, to be honest, but it's packaged and reusable, and easily copied to do more complex things. One last thing: as promised, I wanted to include the Items service which is used to get the initial list of items for the page, just in case someone finds it useful. It's in another small JS file:

var app = angular.module('my-app');

app.factory('Items', function($http, Restangular) {
  var baseItems = Restangular.all('items');
  return baseItems.getList();
});

That’s all there is to it, using Restangular to get the list. This automagically ends up invoking the server URL ‘/api/v1/items’ and returns the JSON response. Restangular is even nicer when you start getting into nested relationships and other more complex needs.

I've made some of this generic ("my-app" and "Items"), since I can't show full details of the app I'm working on, but hopefully that will also make it easier for others to reuse. I hope this saves others who are new to AngularJS some of the pain I had in figuring all of this out.

I’ve recently had the chance and excuse to play with Elasticsearch, after reading good things about it. We’ve been using Solr with decent success, but it feels like whenever we try to do anything outside the normal index-and-search it’s more complicated than it should be. The basics are easy thanks to the terrific Sunspot gem, though. So when I had a small project to prototype that involved indexing PDFs as well as database records, I figured it was a good opportunity to try out Elasticsearch.

I quickly reached for the Tire gem, which is very similar to Sunspot if you’re using ActiveRecord. Where Sunspot has you include a “searchable” block, Tire adds a “mapping” block, but the idea is the same — that’s where you tell it what fields to index, and how to do it. For each field you can adjust the data type, boost, and more. You can also tack on a “settings” block to adjust things like the analyzers.

The documentation for Tire is pretty good, but I found that I made a number of mistakes trying to adapt the instructions on the Elasticsearch site to the Tire way of doing things, so I thought I’d write up some of the things I learned in hopes that it can help save time for others. Many thanks to the folks on StackOverflow who answered my questions and pointed me in the right direction.

One starter suggestion is to configure Tire’s debugger, which is really convenient because it will output the request being sent to the ES server as a curl command that you can copy and paste into a terminal for testing. Very handy. I added this to my config/environments/development.rb file:

  Tire.configure do
    logger STDERR, :level => 'debug'
  end

Now on to the model. I’ll call mine Publication, so inside app/models/publication.rb:

class Publication < ActiveRecord::Base
  include Tire::Model::Search
  include Tire::Model::Callbacks

  attr_accessible :title, :isbn, :authors, :abstract, :pub_date

  settings :analysis => {
    :filter => {
      :ngram_filter => {
        :type => "nGram",
        :min_gram => 2,
        :max_gram => 12
      }
    },
    :analyzer => {
      :index_ngram_analyzer => {
        :type => "custom",
        :tokenizer => "standard",
        :filter => ["lowercase", "ngram_filter"]
      },
      :search_ngram_analyzer => {
        :type => "custom",
        :tokenizer => "standard",
        :filter => ["standard", "lowercase", "ngram_filter"]
      }
    }
  } do
    mapping :_source => { :excludes => ['attachment'] } do
      indexes :id, :type => 'integer'
      indexes :isbn
      [:title, :abstract].each do |attribute|
        indexes attribute, :type => 'string', :index_analyzer => 'index_ngram_analyzer', :search_analyzer => 'search_ngram_analyzer'
      end
      indexes :authors
      indexes :pub_date, :type => 'date'
      indexes :attachment, :type => 'attachment'
    end
  end

  def to_indexed_json
    to_json(:methods => [:attachment])
  end

  def attachment
    if isbn.present?
      path_to_pdf = "/Users/foobar/Documents/docs/#{isbn}.pdf"
      Base64.encode64(open(path_to_pdf) { |pdf| pdf.read })
    end
  end
end

Okay, that's a lot of code, so let's look things over bit by bit. Naturally the includes at the top are needed to mix in the Tire methods. There are two includes so that you can pull in the calls needed for searching without the callbacks if you don't need them. The callbacks, though, are what make things work auto-magically when you persist ActiveRecord objects: with those, whenever you call save() on an AR model, the object will be indexed into ES for you.

Next up are two blocks of code: settings, and mapping. The settings block defines a filter, and two analyzers, one for indexing and one for searching. I can’t claim to be enough of an expert yet to fully explain the ramifications of the filter/analyzer options, so rather than risk confusion I’ll just note that this code is there to set up the nGram filter and connect it with two analyzers, index and search, which differ slightly in order to ensure that the standard filter is included for searching. You may want to play with the nGram’s min and max settings to get the matching behavior you want. Note that if you don’t need the nGram filter, you can remove the settings block and let the mapping block stand on its own, in which case the default settings will be used (but you’ll have to change the mapping entry for the :title and :abstract fields, as described below).

The mapping block is the more interesting one, as it defines the fields and indexing behavior. The first line took me some searching and StackOverflow questioning to figure out. The issue is that by default, Elasticsearch will put all of the fields you index into its _source storage. Because I'm indexing large PDF documents, the result was that a huge Base64-encoded field was being stored. If I wanted to serve the PDFs out of Elasticsearch that might be okay, but that's not the plan. The :excludes instruction prevents the attachment field from being stored.

Next are the fields themselves, and I won’t spend much time on these because the Tire documentation does a fine job of explaining these. The only interesting items are the :attachment field and the entry for :title and :abstract — that one specifies that for those fields the custom analyzers defined in the settings block should be used. For :attachment it gets a little bit tricky.

When the indexing is performed, the fields themselves are gathered up by calling the method to_indexed_json(). Normally that will just do a to_json() on your model and then collect the fields. But you can also override it, which we do here. You can see that we add in the method attachment(), which is defined below. So the other fields will be JSONized as normal, as well as the output of the attachment() method. The attachment() method itself uses the ISBN number to open the PDF file, which is read and Base64-encoded. The results of that encoding will be included with the other fields and sent to ES for indexing.

Performing the searching is almost too easy, but there was one bit that threw me off initially, which was getting highlighting to work. The search block in my controller looks like this:

results = Publication.search do
  query { string query_string }
  sort { by :pub_date, 'desc' }
  highlight :title, :options => { :tag => "<strong class='highlight'>" }
end

I was trying to test the highlighting and was thrown off by the field names being case-sensitive (see my question on StackOverflow), but this now works. The other key is that the highlighted fields are returned separately from the plain fields, which was odd to see. This means that to display the highlighting I have to check for the field:

results.each do |r|
  r_title = (r.highlight.nil? ? r.title : r.highlight.title[0])
  puts "Title: #{r_title}"
end

If the highlighting is present then it’s used; if not (because the term isn’t present in that field) then the regular field is used. The other handy thing to note is that you can specify the tag with which to wrap the term. The default is “<em>” but I wanted to specify the “highlight” CSS class, as is shown here. This is a really convenient feature.

That covers the basics, though it’s also probably worth sharing just how nice it is to be able to test using curl. For example, I wanted to check how easy it is to have the search call return just a single field (to speed up certain requests), so I tried it first in curl:

curl -XPOST http://localhost:9200/publications/_search\?pretty\=true -d '{
  "query": {"query_string": {"query": "Foobar"}},
  "fields": ["title"]
}'

That’s of course assuming that ES is running on your local system on port 9200; if not, adjust accordingly.

There you go. I hope this writeup is helpful to folks getting started and it saves you some time.

