I’ve been using Firebase with Ember.js quite a lot recently and have just
released Fireplace, a library to integrate the two more easily. It’s been
extracted from a rather large application, so it's driven by real-world usage.
There’s EmberFire but, aside from it not existing when I started to write Fireplace,
it doesn't support relationships or many of the other basic and advanced features I'd want in an Ember persistence library.
Anyone who’s used Ember Model or Ember Data
should feel at home with Fireplace as I’ve taken inspiration (and code) from many parts of them when developing it.
Ok, not so fast. That would have been a rather short blog post…
It turns out that the cache buster doesn't work if it runs after a concat, and this is impossible to fix with the way rake-pipeline works.
Rake pipeline filters generate the filenames they output when they are initialized, which means you can't have a filter whose filename is based on the output of a previous filter, as you'd want for cache busting. When the cache buster tries to generate the filename, the concat hasn't yet happened so it has no contents to work with.
If we can't dynamically change the filename in a filter, how do we write a cache buster for rake-pipeline which actually works? We need to check all the files before rake-pipeline runs and use that knowledge to set the output name up front.
We want to generate some kind of key based on the contents of the files in the javascripts directory which will only change if
there's a change to the files. We could iterate through each of the files and get the MD5 hash of each, then take an MD5 of all the hashes to generate one master hash, or maybe we loop through and find the most recent mtime and generate a key from that? It all sounds a bit messy and resource intensive for such a simple task.
If only there was a way we could find out the last change of a file in a directory, some kind of system which tracked all the versions of our files that we could ask…
Turns out git is quite good at tracking changes to files. It's also rather easy to get a log of what's changed in a directory, and we can ask for the hash of the most recent change with git log -n1 --pretty=format:"%H" app/javascripts.
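To make that concrete, here's a rough sketch of how it could look in an Assetfile (the paths and output name are assumptions, not the build file from the original app): shell out to git before the pipeline is defined and bake the resulting hash into the concat filter's output name.

# Assetfile (sketch): ask git for the last commit touching app/javascripts
# and use it as the cache-busting key for the concatenated output
git_key = `git log -n1 --pretty=format:"%H" app/javascripts`.strip

output "public"

input "app/javascripts" do
  match "**/*.js" do
    # the key only changes when something under app/javascripts changes,
    # so browsers can cache the file indefinitely
    concat "app-#{git_key}.js"
  end
end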
Let's say you let your users edit comments they've posted for up to 5 minutes; we want to display an edit
button on all comments posted by the current person until those 5 minutes are over.
Our comment template might look something like this:
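Something along these lines; this is a sketch rather than the original code, and the names body and editComment are assumptions. The template shows the button when an isEditable property on the controller is true:

comment.handlebars

<p>{{body}}</p>

{{#if isEditable}}
  <button {{action editComment}}>Edit</button>
{{/if}}

comment-controller.js

App.CommentController = Ember.ObjectController.extend({
  isEditable: function() {
    // assumes postedAt is a Date and postedBy / currentPerson are the same person record
    var fiveMinutes = 1000 * 60 * 5;
    var isOwn       = this.get("postedBy") === this.get("currentPerson");
    var isRecent    = (new Date() - this.get("postedAt")) < fiveMinutes;
    return isOwn && isRecent;
  }.property("postedBy", "currentPerson", "postedAt")
});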
currentPerson could be bound to another controller or injected into all controllers depending on how
your app works.
That covers only showing the edit button if the comment was posted by the current logged in person and
is less than 5 minutes old.
That’s all good, but we want to automatically hide the edit button once 5 minutes has elapsed so we need to
track the passage of time too. We could add a timer to the controller and have that tick every minute or so:
comment-controller.js
App.CommentController = Ember.ObjectController.extend({
  init: function() {
    this.tick();
    this._super();
  },

  tick: function() {
    // forces isEditable to be recalculated as it's bound to `postedAt`
    this.notifyPropertyChange("postedAt");

    var oneMinute = 1000 * 60;
    var self = this;
    setTimeout(function() { self.tick(); }, oneMinute);
  }
});
That’ll work, but then every single comment which is displayed will have its own timer set. It’s also something we’ll end up
repeating in every bit of the app which does something based on the time.
How about we move it into the view?
comment-view.js
App.CommentView = Ember.View.extend({
  didInsertElement: function() {
    this.tick();
  },

  willDestroyElement: function() {
    clearTimeout(this._timer);
  },

  tick: function() {
    // forces isEditable to be recalculated as it's bound to `postedAt`
    this.get("content").notifyPropertyChange("postedAt");

    var oneMinute = 1000 * 60;
    var self = this;
    this._timer = setTimeout(function() { self.tick(); }, oneMinute);
  },

  isEditable: function() {
    // as before
  }.property("content.postedBy", "controller.currentPerson", "content.postedAt")
});
Hmm, that’s better in that we know when the timer is kicked off and we can tear it down when the comment is
removed from the view, but we’d have to update the template to point to view.isEditable and isEditable
is getting a bit unwieldy having to bind to content and controller. If it's ugly it probably isn't right,
so let's scrap this train of thought and rethink things.
We know that every comment needs to know the current time and be updated when it changes, so let's introduce
a domain object to model that:
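Here's a minimal sketch of such a clock; the second, minute and hour properties are what we bind to below, the rest of the implementation is illustrative:

App.Clock = Ember.Object.extend({
  second: null,
  minute: null,
  hour: null,

  init: function() {
    this._super();
    this.tick();
  },

  tick: function() {
    var now = new Date();

    this.setProperties({
      second: now.getSeconds(),
      minute: now.getMinutes(),
      hour:   now.getHours()
    });

    var self = this;
    setTimeout(function() { self.tick(); }, 1000);
  }
});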
That’s a simple clock that we can instantiate and it’ll tick every second that our app’s running.
We can use injections to give every controller access to the same clock instance:
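A sketch of what that injection could look like, using the same container APIs as the Pusher initializer further down this page:

Ember.Application.initializer({
  name: "clock",

  initialize: function(container, application) {
    // use the same clock instance everywhere in the app
    container.optionsForType('clock', { singleton: true });
    container.register('clock', 'main', application.Clock);

    // inject the clock into all controllers
    container.typeInjection('controller', 'clock', 'clock:main');
  }
});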
Now every controller has access to the same clock, so let's update our comment controller to use it:
comment-controller.js
App.CommentController = Ember.ObjectController.extend({
  isEditable: function() {
    // as before
  }.property("postedBy", "currentPerson", "postedAt", "clock.minute")
});
All we've done is add clock.minute to the property bindings, which causes isEditable to be recalculated automatically
once a minute.
We can now reuse that logic anywhere in our application, just add clock.second, clock.minute or clock.hour
to property bindings and they’ll be automatically re-calculated at the appropriate points in time.
The Ember Router takes events from user actions and hands them off to the appropriate
Route depending on where the user is within the app.
Pusher receives events from your server which your app then handles, but you might
want to do different things depending on where your user is within your app at the time the message is received.
Wouldn’t it be great if we could hook these two things up together?
Here’s what we’re going to end up with in a route:
my_route.js
App.MyRoute = Ember.Route.extend({
  // subscribe/unsubscribe to a pusher channel
  // when we enter/exit this part of the app
  activate: function() {
    this.get("pusher").subscribe("a-channel");
  },

  deactivate: function() {
    this.get("pusher").unsubscribe("a-channel");
  },

  // handle events from pusher just like normal actions
  events: {
    aMessageFromPusher: function(data) {
      // do something here
    }
  }
});
First of all, let's define a Pusher object which will handle subscribing and unsubscribing to channels and dispatch
any messages we receive from Pusher to the router:
App.Pusher = Ember.Object.extend({
  key: null,

  init: function() {
    var _this = this;

    this.service = new Pusher(this.get("key"));

    this.service.connection.bind('connected', function() {
      _this.connected();
    });

    this.service.bind_all(function(eventName, data) {
      _this.handleEvent(eventName, data);
    });
  },

  connected: function() {
    this.socketId = this.service.connection.socket_id;
    this.addSocketIdToXHR();
  },

  // add X-Pusher-Socket header so we can exclude the sender from their own actions
  // http://pusher.com/docs/server_api_guide/server_excluding_recipients
  addSocketIdToXHR: function() {
    var _this = this;
    Ember.$.ajaxPrefilter(function(options, originalOptions, xhr) {
      return xhr.setRequestHeader('X-Pusher-Socket', _this.socketId);
    });
  },

  subscribe: function(channel) {
    return this.service.subscribe(channel);
  },

  unsubscribe: function(channel) {
    return this.service.unsubscribe(channel);
  },

  handleEvent: function(eventName, data) {
    var router, unhandled;

    // ignore pusher internal events
    if (eventName.match(/^pusher:/)) { return; }

    router = this.get("container").lookup("router:main");

    try {
      router.send(eventName, data);
    } catch (e) {
      unhandled = e.message.match(/Nothing handled the event/);
      if (!unhandled) { throw e; }
    }
  }
});
Most of that is pretty straightforward: we're just wrapping some basic Pusher functionality and listening for
any message which we get sent. Let’s take a closer look at the meat of the handleEvent method:
router = this.get("container").lookup("router:main");

try {
  router.send(eventName, data);
} catch (e) {
  unhandled = e.message.match(/Nothing handled the event/);
  if (!unhandled) { throw e; }
}
There's no longer a global App.router we can access in Ember, so we need to get the router from the container,
then we simply send the event and data we got from Pusher. This will then trigger
the event on the current route, or the first of its parents which handles the event.
If the event goes unhandled Ember will raise an error. Normally we want this, to make sure we're not
exposing functionality the current route can't handle, but in this case we have no control over where
the user is within our app when a message arrives from Pusher.
How does our Pusher object get the container, and how do our controllers and routes get
access to Pusher? We do this with injections in an initializer:
1234567891011121314
Ember.Application.initializer({name:"pusher",initialize:function(container,application){// use the same instance of Pusher everywhere in the appcontainer.optionsForType('pusher',{singleton:true});// register 'pusher:main' as our Pusher objectcontainer.register('pusher','main',application.Pusher);// inject the Pusher object into all controllers and routescontainer.typeInjection('controller','pusher','pusher:main');container.typeInjection('route','pusher','pusher:main');}});
Now any controller or route which is instantiated will automatically have an instance of our
Pusher object injected into it.
This causes a bit of a problem with controllers which extend from ObjectController, as Ember will try
to set pusher on them before they have any content assigned and raise the following error:
Cannot delegate set('pusher', pusher) to the 'content' property
of object proxy <Ember.ObjectProxy:ember420>: its 'content' is undefined
To address this, we can reopen ControllerMixin to assign a default null value for pusher. As
ObjectController mixes in ControllerMixin it now has its own pusher property and the error is
avoided:
Ember.ControllerMixin.reopen({
  pusher: null
});
Now in your app.js, or wherever you kick off your app, we can re-open App.Pusher to set the API key:
app.js
App.Pusher.reopen({
  key: "your-pusher-key"
});
Job done: now any messages received from Pusher will trigger events on your routes and you can handle them just
like normal user actions.
Let's say we're writing a blog which allows users to log in, but only certain users can write and edit articles.
We want to display add/edit buttons based on permissions, so how do we do that?
For simple permissions, this is quite trivial. For example, to check if the current logged in user is an
administrator we can just do something like:
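For example, assuming the current user exposes something like an isAdmin flag (that flag is my assumption):

{{#if App.currentUser.isAdmin}}
  <button {{action newPost}}>New Post</button>
{{/if}}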
This only works if we have a single property and we can’t pass any arguments, which means the following won’t work:
blog/index.handlebars
{{#if App.currentUser.canEditPost post}}
  <button {{action editPost post}}>edit</button>
{{/if}}
Research
What we want is a version of if which knows about permissions and will let us pass in arguments so that we can end up with something like this:
blog/index.handlebars
{{#can createPost}}
  <button {{action newBlogPost}}>New Post</button>
{{else}}
  You don't have permission to post
{{/can}}

{{#each post in controller}}
  <a {{action viewPost post href=true}}>{{post.title}}</a>

  {{#can editPost post}}
    <button {{action editPost post}}>Edit</button>
  {{/can}}
{{/each}}
To work out how to build this, let's look at how Ember implements its own if helper:

Ember.Handlebars.registerHelper('if', function(context, options) {
  Ember.assert("You must pass exactly one argument to the if helper", arguments.length === 2);
  Ember.assert("You must pass a block to the if helper", options.fn && options.fn !== Handlebars.VM.noop);

  return helpers.boundIf.call(options.contexts[0], context, options);
});
This just does some sanity checking and hands off to boundIf, which in turn calls bind to set up all the observers and re-rendering when properties change. The result of the function boundIf builds
determines whether to display the content or not.
It looks like if we create a helper which calls boundIf with some property to observe on an object, it will take care of the rest for us.
can-helper.js
Handlebars.registerHelper('can', function(permissionName, property, options) {
  // do magic here
  Ember.Handlebars.helpers.boundIf.call(someObject, "someProperty", options);
});
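As a first stab at the magic, we could hand boundIf a stand-in permission object with a can property; this is a sketch of that intermediate step:

Handlebars.registerHelper('can', function(permissionName, property, options) {
  // a dummy permission which always allows the action
  var permission = Ember.Object.create({
    can: function() {
      return true;
    }.property()
  });

  Ember.Handlebars.helpers.boundIf.call(permission, "can", options);
});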
Hmm, that leaves the content hidden. It seems that it's not calling can on our permission.
If we look back at boundIf then we can see that it’s looking up the context on the options and only falls back to this if
there’s not one set:
ember-handlebars/lib/helpers/binding.js
var context = (fn.contexts && fn.contexts[0]) || this;
We can get around this by nuking the contexts on the options we pass through to boundIf.
(I’m not sure if this will cause issues, but it worked for me… YMMV and all that).
can-helper.js
Handlebars.registerHelper('can', function(permissionName, property, options) {
  var permission = Ember.Object.create({
    can: function() {
      return true;
    }.property()
  });

  // wipe out contexts so boundIf uses `this` (the permission) as the context
  options.contexts = null;

  Ember.Handlebars.helpers.boundIf.call(permission, "can", options);
});
If you toggle the result of can between true and false, the content disappears and re-appears. Success!
Implementation
Let's define a class to represent our actual permission:
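For example, a couple of permissions might look like this; the actual rules aren't important here, so the logic is purely illustrative:

App.CanCreatePost = Ember.Object.extend({
  can: function() {
    var user = App.get("currentUser");
    return !!(user && user.get("isAdmin"));
  }.property("App.currentUser.isAdmin")
});

App.CanEditPost = Ember.Object.extend({
  // the post being checked is set as `content` by the helper
  content: null,

  can: function() {
    var user = App.get("currentUser");
    var post = this.get("content");
    return !!(user && post && post.get("postedBy") === user);
  }.property("content.postedBy", "App.currentUser")
});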
We want to refer to this with a more friendly name in our templates. We could figure out that createPost maps to App.CanCreatePost by
capitalizing and prepending 'Can', but instead let's make a simple registry:
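The registry can be as simple as a map from friendly name to permission class, plus a get which instantiates one with the supplied attributes (again a sketch):

App.Permissions = {
  registry: {},

  register: function(name, permission) {
    this.registry[name] = permission;
  },

  // look up a permission by its friendly name and instantiate it
  // with the supplied attributes
  get: function(name, attrs) {
    return this.registry[name].create(attrs || {});
  }
};

App.Permissions.register("createPost", App.CanCreatePost);
App.Permissions.register("editPost",   App.CanEditPost);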
We now have a couple of permissions which have a can property we can bind to and friendly names to lookup from the templates.
All our helper needs to do is take the passed in name, create an appropriate permission with any attributes and pass that off
to the boundIf helper.
After a bit of trial and error, I ended up with the following:
var get = Ember.get,
    isGlobalPath = Ember.isGlobalPath,
    normalizePath = Ember.Handlebars.normalizePath;

var getProp = function(context, property, options) {
  if (isGlobalPath(property)) {
    return get(property);
  } else {
    var path = normalizePath(context, property, options.data);
    return get(path.root, path.path);
  }
};

Handlebars.registerHelper('can', function(permissionName, property, options) {
  var attrs, context, key, path, permission;

  // property is optional, if we've only got 2 arguments then the property contains our options
  if (!options) {
    options = property;
    property = null;
  }

  context = (options.contexts && options.contexts[0]) || this;
  attrs = {};

  // if we've got a property name, get its value and set it to the permission's content
  // this will set the passed in `post` to the content eg:
  // {{#can editPost post}} ... {{/can}}
  if (property) {
    attrs.content = getProp(context, property, options);
  }

  // if we've got any options, find their values eg:
  // {{#can createPost project:Project user:App.currentUser}} ... {{/can}}
  for (key in options.hash) {
    path = options.hash[key];
    attrs[key] = getProp(context, path, options);
  }

  // find & create the permission with the supplied attributes
  permission = App.Permissions.get(permissionName, attrs);

  // ensure boundIf uses permission as context and not the view/controller
  // otherwise it looks for 'can' in the wrong place
  options.contexts = null;

  // bind it all together and kickoff the observers
  return Ember.Handlebars.helpers.boundIf.call(permission, "can", options);
});
That’s it, now we can show/hide content based on user permissions and have them automatically update when a user
logs in or their permissions change.
The router is the core of any Ember.js application and it can get big, fast.
Keeping your entire application’s router in one file is going to lead to madness. Thankfully it’s quite a simple problem to resolve.
Let's imagine an application with a number of discrete sections - a blog, a list of members and an area to browse uploaded files.
We have an init.js which sets up the application:
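It would look something like this; a cut-down sketch of the pre-1.0 router API this post is written against, with made-up route names for the three sections:

init.js

App = Ember.Application.create();

App.Router = Ember.Router.extend({
  root: Ember.Route.extend({
    index: Ember.Route.extend({ route: "/" }),

    blog: Ember.Route.extend({
      route: "/blog",
      index: Ember.Route.extend({ route: "/" }),
      post:  Ember.Route.extend({ route: "/:post_id" })
    }),

    members: Ember.Route.extend({
      route:  "/members",
      index:  Ember.Route.extend({ route: "/" }),
      member: Ember.Route.extend({ route: "/:member_id" })
    }),

    files: Ember.Route.extend({
      route: "/files",
      index: Ember.Route.extend({ route: "/" })
    })
  })
});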
Looks pretty straightforward, but that’s without any outlet management, serializing/deserializing, action handlers etc…
Breaking this up is pretty simple.
Anywhere we say Ember.Route.extend we’re defining an anonymous class,
so in order to split up the router we can just give the class a name and move it to a file of its own.
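For example, the blog section becomes a named class in its own file, and init.js just assembles the pieces (a sketch, with hypothetical file names):

routers/blog.js

App.BlogRoute = Ember.Route.extend({
  route: "/blog",
  index: Ember.Route.extend({ route: "/" }),
  post:  Ember.Route.extend({ route: "/:post_id" })
});

init.js

App.Router = Ember.Router.extend({
  root: Ember.Route.extend({
    index:   Ember.Route.extend({ route: "/" }),
    blog:    App.BlogRoute,
    members: App.MembersRoute,
    files:   App.FilesRoute
  })
});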
Update: As Jo Liss points out,
you can specify the base route when you assemble the router as opposed to hard-coding it in each section. I really like this; it feels very similar to
how engines are mounted in Rails.
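In other words, something like this, where each section's class no longer hard-codes its own base route (again a sketch):

App.Router = Ember.Router.extend({
  root: Ember.Route.extend({
    index:   Ember.Route.extend({ route: "/" }),
    blog:    App.BlogRoute.extend({ route: "/blog" }),
    members: App.MembersRoute.extend({ route: "/members" }),
    files:   App.FilesRoute.extend({ route: "/files" })
  })
});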
SessionsController#create tries to find a User with the supplied username and password and then redirects or re-displays the login form.
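The original code isn't shown here, but the starting point would be something along these lines (the method and helper names are assumptions):

class SessionsController < ApplicationController
  def create
    user = User.authenticate(params[:username], params[:password])

    if user
      session[:user_id] = user.id
      redirect_to root_path
    else
      flash.now[:alert] = "Invalid username or password"
      render :new
    end
  end
end

class User < ActiveRecord::Base
  def self.authenticate(username, password)
    user = find_by_username(username)
    user if user && user.authenticate_password(password)
  end
end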
A few months go by and we decide to let users log in against a number of third-party services, using something like OmniAuth
to do the actual heavy lifting of communicating with the providers. Our controller and model have bloated a fair bit:
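Roughly the shape of the problem (a sketch, not the original code): the controller decides which of the model's authentication methods to call, and the model grows a method per mechanism.

class SessionsController < ApplicationController
  def create
    user = if request.env["omniauth.auth"]
             User.authenticate_with_omniauth(request.env["omniauth.auth"])
           else
             User.authenticate_with_password(params[:username], params[:password])
           end

    if user
      session[:user_id] = user.id
      redirect_to root_path
    else
      flash.now[:alert] = "Unable to log you in"
      render :new
    end
  end
end

class User < ActiveRecord::Base
  def self.authenticate_with_password(username, password)
    user = find_by_username(username)
    user if user && user.authenticate_password(password)
  end

  def self.authenticate_with_omniauth(auth)
    find_by_provider_and_uid(auth["provider"], auth["uid"])
  end
end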
There’s not much good to say about that code, but I’ve seen similar things in plenty of apps over the years. At least we’re not doing the db calls directly in the controller.
Single Responsibility Principle
The Single Responsibility Principle basically says that an object should be responsible for one thing only.
Thinking about that another way, we can also say that a single change should only touch one part of the system.
With the code as it stands, if we add or remove an authentication method we’ve got to change code in both the controller and the model.
Fixing that is pretty simple: let's just move the logic for deciding which authentication method we're using into the model:
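Something like this, where the controller hands everything over and the model decides (still a sketch; auth_params is a helper I've made up to bundle the parameters together):

class SessionsController < ApplicationController
  def create
    user = User.authenticate(auth_params)
    # redirect or re-render as before
  end

  private

  # bundle up everything the model might need to decide how to authenticate
  def auth_params
    params.merge(omniauth: request.env["omniauth.auth"])
  end
end

class User < ActiveRecord::Base
  def self.authenticate(params)
    if params[:omniauth]
      authenticate_with_omniauth(params[:omniauth])
    else
      authenticate_with_password(params[:username], params[:password])
    end
  end
end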
All tidy: now if we change how authentication works we only have to make changes in one part of the codebase.
Skinny controllers, fat models
This is basically the skinny controllers, fat models principle which encourages moving your business logic out of your controllers and into your models.
This lets us keep nice clean controllers and views, but we can end up with massively bloated models.
What we should aim for is not just skinny controllers, but skinny models too. In fact, we really want everything to be skinny.
Let's do some refactoring!
When all you have is a hammer…
As Ruby developers, our first port of call is simply to move the authentication code out into a module,
using the standard include/extend pattern to bring both class and instance methods along:
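Which might look something like this (a sketch; the BCrypt-based password check is an assumption about how the password is stored):

module UserAccess
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    def authenticate(params)
      if params[:omniauth]
        authenticate_with_omniauth(params[:omniauth])
      else
        authenticate_with_password(params[:username], params[:password])
      end
    end

    def authenticate_with_password(username, password)
      user = find_by_username(username)
      user if user && user.authenticate_password(password)
    end

    def authenticate_with_omniauth(auth)
      find_by_provider_and_uid(auth["provider"], auth["uid"])
    end
  end

  # the instance method comes along for the ride
  def authenticate_password(password)
    BCrypt::Password.new(password_digest) == password
  end
end

class User < ActiveRecord::Base
  include UserAccess
end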
ActiveSupport::Concern can clean this up a little bit and brings a few other tricks to the table.
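For example (sketch):

module UserAccess
  extend ActiveSupport::Concern

  module ClassMethods
    # .authenticate, .authenticate_with_password and
    # .authenticate_with_omniauth as before
  end

  def authenticate_password(password)
    # as before
  end
end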
Have we actually done anything?
We’ve got better organised code and given this concept of “user access” a name by wrapping it up in a module.
We’ve also made it easier to test as we can test this module outside of Rails making our tests faster, which is nice.
In effect all we’re doing is cleaning up the source so it’s easier to find things, we’re not modelling the problem any better.
We’re still treating the User as a bucket of methods without giving any real thought as to where these things belong.
Over time we include more and more functionality into the one model, hardly a “single responsibility”:
All the User class should really care about is persistence - storing and retrieving the attributes from the database.
Anything other than that is really outside of its scope; I'd argue that even observers, validation and callbacks don't belong in the model most of the time.
If we look back at the UserAccess module, it’s pretty self contained and would quite happily exist outside of the User model.
Other than the #authenticate_password method it’s all just class methods which go off and try and find a User.
With very few changes we can make this a stand-alone module:
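Something along these lines (sketch):

module UserAccess
  def self.authenticate(params)
    if params[:omniauth]
      authenticate_with_omniauth(params[:omniauth])
    else
      authenticate_with_password(params[:username], params[:password])
    end
  end

  def self.authenticate_with_password(username, password)
    user = User.find_by_username(username)
    # the password check is inlined here rather than living on the model
    user if user && BCrypt::Password.new(user.password_digest) == password
  end

  def self.authenticate_with_omniauth(auth)
    User.find_by_provider_and_uid(auth["provider"], auth["uid"])
  end
end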
I've inlined the instance method; it could just as easily be another class method which takes the user and password. It doesn't really matter at this point.
The important thing is that it has allowed us to remove the mixin from the User model, leaving it doing just one thing - handling persistence.
The controller needs changing to point to our new stand-alone module, but that’s a trivial change:
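Something like this (auth_params being whatever bundle of parameters your app passes along):

class SessionsController < ApplicationController
  def create
    user = UserAccess.authenticate(auth_params)
    # redirect or re-render as before
  end
end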
So we’ve got a nice clean model and a nice clean controller, but what about the UserAccess module?
It’s a bit of a mess, but at least it’s swept into one self contained part of the system so we can refactor this without affecting anything else.
Back to the Single Responsibility Principle: let's split the module up into a sub-module per authentication type,
that way each is nicely self-contained and responsible purely for the one authentication scheme:
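A sketch of that split:

module UserAccess
  module Password
    def self.authenticate(params)
      user = User.find_by_username(params[:username])
      user if user && BCrypt::Password.new(user.password_digest) == params[:password]
    end
  end

  module Omniauth
    def self.authenticate(params)
      auth = params[:omniauth]
      User.find_by_provider_and_uid(auth["provider"], auth["uid"])
    end
  end

  # still deciding which sub-module to use up front
  def self.authenticate(params)
    if params[:omniauth]
      Omniauth.authenticate(params)
    else
      Password.authenticate(params)
    end
  end
end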
That's a bit cleaner, but we've still got that nasty .authenticate method, and we're back to the problem that if we add or remove an authentication method we're going to have
to change code in more than one place. Should the top-level UserAccess module really know about the logic which determines which sub-module to use?
Chain of Responsibility
What we really want to do is move the logic from the .authenticate method down into the sub-modules.
This is where something like the Chain of Responsibility pattern comes in handy.
Instead of choosing which authentication type to use up front, we ask each module one at a time whether it can handle the submitted parameters:
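A sketch of the chain:

module UserAccess
  module Password
    def self.can_handle?(params)
      params[:username].present? && params[:password].present?
    end

    def self.authenticate(params)
      user = User.find_by_username(params[:username])
      user if user && BCrypt::Password.new(user.password_digest) == params[:password]
    end
  end

  module Omniauth
    def self.can_handle?(params)
      params[:omniauth].present?
    end

    def self.authenticate(params)
      auth = params[:omniauth]
      User.find_by_provider_and_uid(auth["provider"], auth["uid"])
    end
  end

  TYPES = [Password, Omniauth]

  def self.authenticate(params)
    handler = TYPES.find { |type| type.can_handle?(params) }
    handler && handler.authenticate(params)
  end
end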
Now we just loop through the authentication modules, find the first one which can handle the parameters we have, and then call authenticate on it.
Each module is completely responsible for its logic and if we add or remove an authentication method we only have to change one thing.
I’m using an array of types here to loop through, but you could also just loop through the sub-modules of the UserAccess module.
You could also get rid of the .can_handle? method by just calling .authenticate and returning the User for success, nil for a failure and false if it doesn't handle the params,
but I prefer to be explicit, as returning nil vs false can lead to much confusion.
Here's a high-level overview of what we've ended up with - skinny controller, skinny model and a skinny set of modules, each responsible for one thing only:
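Roughly (sketch):

# sessions_controller.rb -- knows nothing about how authentication works
class SessionsController < ApplicationController
  def create
    user = UserAccess.authenticate(auth_params)
    # redirect or re-render
  end
end

# user.rb -- just persistence
class User < ActiveRecord::Base
end

# user_access.rb -- one sub-module per authentication scheme,
# plus a loop which asks each in turn
module UserAccess
  module Password
    # .can_handle? / .authenticate as above
  end

  module Omniauth
    # .can_handle? / .authenticate as above
  end

  TYPES = [Password, Omniauth]

  def self.authenticate(params)
    handler = TYPES.find { |type| type.can_handle?(params) }
    handler && handler.authenticate(params)
  end
end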
I’m currently working on a project which has an API backend and a JS frontend which consumes that API.
Both parts are built with Rails and must be served from the same domain and port because of the same origin policy.
The API will be served from a sub-directory like so:
http://example.com - serves the JS app
http://example.com/api - serves the API
It’s pretty trivial to set this up with nginx, but developing locally is a bit trickier.
Running both apps with rails server will put them on different ports and the JS app won’t be able to communicate with the API.
We could set up a local nginx config on our development machines, but this makes it harder to set up breakpoints in ruby-debug, amongst other things.
Rails apps are just Rack apps, so my first thought was to create a config.ru which mounts both Rails apps:
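Something like this, with made-up application class names:

# config.ru -- the idea that doesn't work: boot both apps and map them to paths
require ::File.expand_path("../frontend/config/environment", __FILE__)
require ::File.expand_path("../api/config/environment", __FILE__)

run Rack::URLMap.new(
  "/"    => Frontend::Application,
  "/api" => Api::Application
)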
This raises an error saying You cannot have more than one Rails::Application, so that idea's out the window.
We could turn the API into a Rails Engine and mount that inside the other app, but we really want these two apps to be completely separate and not have to know about each other outside of the documented API.
The obvious solution is to use a proxy to let us run each Rails app independently and have the proxy forward requests to each one depending on the URL.
The simplest way I could think of to set up a proxy server was to use Rack::Proxy, and about 5 minutes later I had a working solution:
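The proxy is just a Rack::Proxy subclass which rewrites the host based on the path; this is a sketch along those lines (the class name is an assumption, the ports match the 3000 and 3001 mentioned below):

# config.ru
require "rack/proxy"

class AppProxy < Rack::Proxy
  def rewrite_env(env)
    if env["PATH_INFO"] =~ %r{\A/api}
      env["HTTP_HOST"] = "localhost:3001"   # the API app
    else
      env["HTTP_HOST"] = "localhost:3000"   # the JS frontend app
    end
    env
  end
end

run AppProxy.new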
Pretty simple: we just rewrite the HTTP_HOST depending on whether or not the requested path starts with "/api".
Now we fire up the frontend and backend Rails apps on ports 3000 and 3001 respectively, run the proxy on another port and point the browser there.
Using rackup config.ru worked fine, but when I tried to run the proxy using passenger-standalone I got the following error:
$ passenger start -p 9999
=============== Phusion Passenger Standalone web server started ===============
PID file: /Users/rlivsey/Sites/multi-rails-experiment/tmp/pids/passenger.9999.pid
Log file: /Users/rlivsey/Sites/multi-rails-experiment/log/passenger.9999.log
Environment: development
Accessible via: http://0.0.0.0:9999/
You can stop Phusion Passenger Standalone by pressing Ctrl-C.
===============================================================================
2012/02/23 14:11:52 [error] 9691#0: *4 "/Users/rlivsey/Sites/multi-rails-experiment/public/index.html" is not found (2: No such file or directory), client: 127.0.0.1, server: _, request: "HEAD / HTTP/1.1", host: "0.0.0.0"
2012/02/23 14:12:07 [error] 9691#0: *5 "/Users/rlivsey/Sites/multi-rails-experiment/public/index.html" is not found (2: No such file or directory), client: 127.0.0.1, server: _, request: "GET / HTTP/1.1", host: "localhost:9999"
2012/02/23 14:12:07 [error] 9691#0: *5 open() "/Users/rlivsey/Sites/multi-rails-experiment/public/favicon.ico" failed (2: No such file or directory), client: 127.0.0.1, server: _, request: "GET /favicon.ico HTTP/1.1", host: "localhost:9999"
This is because passenger-standalone sets up the nginx config expecting there to be a public directory, so I just created an empty one and everything worked fine.
With this setup we can also trivially switch the API host to point to production, letting us develop the frontend against the production API should we want to test the UI with live data.
It’s been a few days since we turned on payment and, in my eyes at least, officially shipped MinuteBase. We’ve been in beta for quite some time, but having paying customers is the major milestone which transforms it from a “project” into an actual business. I’ve learned a huge amount and thought that now is a good time to look back at our progress.
What went well
We based MinuteBase on a problem we actually experienced
I've worked on a number of projects over the years which I thought were a good idea, but which weren't solving a problem that I actually faced. This led to either solving the wrong problem, or just getting bored once the rush of building something new wore off. Just because you experience a problem doesn't necessarily mean that it's a viable business, but it certainly helps.
I had a co-founder
This has been essential for getting through the inevitable slumps in motivation which occur. It's much easier to quit when there's no one else involved. We've not always agreed, far from it, but the many hours of discussion and arguments have led to a much better product.
Having a co-founder with complementary skills is also essential. I'm a pretty good developer but I'm never going to be mistaken for a designer. I like to think I have a reasonably good eye, but put me in front of Photoshop for a year and it's unlikely that I'll produce anything as beautiful as MinuteBase.
We launched early
Our first version wasn't quite an MVP; we probably could have launched earlier, but looking back it's amazing what features we didn't have. Having actual users other than ourselves tell us what they are missing has been fantastic. It also means we've been able to focus on things which are actual problems, instead of imagined "essential" features which in the end don't matter all that much.
We dog-fooded right from the beginning
Because MinuteBase was built to solve a problem we actually had, we were able to use it right from the earliest stages in the companies we worked for and to collaborate with our clients. We also use MinuteBase to build MinuteBase, by writing up all our meetings and discussions, sharing documents and tracking the tasks and actions as we go.
Using it every day means we have a better idea of where we need to improve than if we had to wait for other people to tell us. I'm even using it right now to write up this blog post; you can see it at MinuteBase here.
What could be improved
Iterations were too long
MinuteBase 2 initially started off as some small improvements to the prototype app; there's nothing there which we couldn't have added iteratively as we went. Instead we put too many changes into one release and ended up taking far too long to get changes in front of our users where we could get feedback.
We changed technology stack mid-stream
The first version was built using Merb, MySQL, DataMapper and Prototype. Our version 2 is Rails 3, MongoDB, MongoMapper, ElasticSearch and jQuery. Very little code survives from the initial prototype.
This meant that far too much time was spent re-building things which already worked instead of on improvements. It also meant that it took us much longer to get in front of our users as we couldn’t run both versions side by side sharing the same database.
However, building one to throw away meant that when we were building “version 2” we had a much better idea of what worked and what didn’t. We were able to make more fundamental changes to the way the app worked than if we were iteratively changing the prototype.
We didn’t turn on payment early enough
There’s no reason why we couldn’t have enabled payment 6 months ago, in version 1 of the app. Instead we convinced ourselves that it wasn’t ready, and that we’d turn on payment after “this one next feature” or bug fix. Of course because our iterations were too large, that “one next feature” ended up taking months, during which time we could have been bringing in money and proving the business model. We even had people asking how they could pay us!
We didn’t have a “business guy”
After going through this process I think the ultimate founding team is a designer, a developer and a business-person. While we’re building the product there’s no one focused on sales & marketing or just getting out there and talking to people.
That’s not to say that we shouldn’t or couldn’t be doing more of that side of things ourselves, but it’s easy to put off going out & talking to people, or drumming up press until after you’ve “finished” building the app. And you never really finish.
We’ve been too quiet
If you look at the MinuteBase blog or Twitter stream, you’ll see there’s not a lot there. All the posts are about changes to the app and new features.
We need to get much better at producing original content and linking to interesting material so that the blog itself can work as a marketing channel. As it stands, unless you’re a MinuteBase user, there’s not much point subscribing to our blog or following us on Twitter.
This has to change and we’re going to be spending much more time on this in future.
MinuteBase might be too specific a name
Our original focus was to build the best tool to take meeting minutes and we chose our name based on that. Call it scope creep, or pivoting, but the MinuteBase of today does far more than just meeting minutes.
With the introduction of workspaces, MinuteBase has turned into a great project management tool but our name is still focused on one part of the app.
Time will tell how much of a problem this is, but having a more generic name or something focused on meetings instead of minutes could have been a better idea.
In Closing
So many of these lessons are things I should have already known. In my day job managing projects over the years I preached agile development, small iterations and test-driven development. Even the things I didn't have first-hand experience with, I should have known from talking to others or from reading Hacker News over the years.
For some reason when you’re building it for yourself and don’t have anyone to report to these things go out of the window.
This process has made me a much better developer, a much better manager and no matter what happens to MinuteBase I’ve no doubt that it makes me a far more capable person than I was before.
If you go to meetings, or manage projects, why not give MinuteBase a try?