I’ve been working on setting up the infrastructure for a mostly javascript-based project, and we’ve been putting RequireJS into the codebase to help us manage the file dependencies instead of having to declare them within the page that is using them. As a concept, RequireJS helps us keep different javascript modules apart in different files and lets us assemble them.
RequireJS works by declaring dependencies and having the framework pull them in when you need them.
define(["aDependency"], function(theDependency) { // now I can do something with theDependency theDependency.aMethodOnIt(); })
This is pretty much how Spring works, but the issue I have is that RequireJS manages the lifecycle of the javascript objects, so when you want to pass in a substitute for a test, you end up in a dilemma.
define(["aDependency"], function(theDependency) { // how do I get inject a different instance? // now I can do something with theDependency theDependency.aMethodOnIt(); })
Unsurprisingly, a number of people wrote libraries such as testr which allow you to override requirejs to inject different versions. Although these are very reasonable approaches, I find them a little bit smelly as you’re effectively patching a library you don’t own. The Ruby community knows the dangers of monkey patching too much, particularly in those parts of a code base you cannot control, and the potential issues you face when you try to upgrade.
Our current approach involves using RequireJS to manage the file/name dependencies, while we write the javascript that controls the instances of the objects we want. Here’s an example:
dependency.js
define([], function () {
    return function () {
        return {
            doSomeWork: function () {
            }
        };
    };
});
consumer.js
define([], function () {
    return function (aDependency) {
        var dependency = aDependency;
        return {
            start: function () {
                dependency.doSomeWork();
            }
        };
    };
});
And then we control the lifecycle of the components and instances in the application using the following code.
main.js
define(["consumer", "dependency"], function (Consumer, Dependency) { var dependency = Dependency(); var consumer = Consumer(dependency); consumer.start(); });
And our jasmine tests get to look like this:
requirejs = require('requirejs');

describe("consumer", function() {
    it("should ensure the dependency does some work", function() {
        // given
        var dependency = jasmine.createSpyObj("dependency", ["doSomeWork"]);
        var consumer = requirejs("consumer")(dependency);

        // when
        consumer.start();

        // then
        expect(dependency.doSomeWork).toHaveBeenCalled();
    });
});
This approach has been working out well, forcing us to manage our dependencies instead of descending into the global-function hell that javascript can quickly become. Thoughts? Please leave a comment.
Why are you not using a constructor for your object, and then passing the dependency as an argument?
Using constructors for your objects also has the added benefit of making them much easier to test, as there is no chance of state from one test leaking into other tests.
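A minimal sketch of what this could look like, assuming a plain constructor invoked with new and the dependency passed in as an argument (the names here are illustrative, not from the post):

function Consumer(dependency) {
    // the collaborator is handed in rather than looked up globally
    this.dependency = dependency;
}

Consumer.prototype.start = function () {
    this.dependency.doSomeWork();
};

// each test builds a fresh instance, so no state leaks between tests
var dependency = jasmine.createSpyObj("dependency", ["doSomeWork"]);
var consumer = new Consumer(dependency);
consumer.start();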
Dependency injection is not dead; it works WITHOUT jumping through hoops or even RequireJS.
Hi Morgan,
Thanks for posting a comment. I believe you and I are agreeing, but help me out here because I don’t understand your comment.
I believe I am using the constructor for my object (consumer.js is the only one with a dependency). As you can see in the test, I am injecting a different dependency in the test compared to the application (main.js)
I’m not suggesting Dependency Injection (DI) is dead, just that requirejs is a bit sucky at it. Spring to me is a DI framework, but that doesn’t mean that we cannot use DI.
Cheers.
Pat
I think you’re making things hard for yourselves by using RequireJS from within your tests.
Require forces your modules to declare all their dependencies as arguments to a function. In production, Require will call the function for you. In test code I would call it myself from the test case, passing in the arguments manually.
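A rough sketch of that idea, assuming the test can somehow get hold of the function passed to define (consumerFactory below is an illustrative name, not something from the post):

// a module written so its dependency arrives as an argument to define:
// define(["dependency"], function (dependency) {
//     return { start: function () { dependency.doSomeWork(); } };
// });

// in the test, call that factory yourself with a fake,
// instead of letting Require resolve "dependency" for you
var dependency = jasmine.createSpyObj("dependency", ["doSomeWork"]);
var consumer = consumerFactory(dependency); // consumerFactory is the function
                                            // passed to define above

consumer.start();
expect(dependency.doSomeWork).toHaveBeenCalled();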
Hi Darren,
Thanks for the comment. Can you point me to an example? I understand what you’re saying, but I’m not sure how that works in practice.
We are using require to manage the separation/names of the javascript functions. I’m not sure how I would load the correct constructor in our tests, given the way the production source is laid out, without requirejs.
Hi Pat
You’re calling a function. Construction of new objects should be done with the new keyword.
http://pivotallabs.com/users/pjaros/blog/articles/1368-javascript-constructors-prototypes-and-the-new-keyword
Ah yes, now I remember also getting frustrated that Require appears to entwine itself into your code.
You have a few options. You could provide your own definition of define inside your tests and not use Require at all, or call through to Require manually. You could also look at using different contexts.
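A minimal sketch of the first option, assuming a Node-based test run where a fake define is put on the global scope before consumer.js is loaded, and that consumer.js sits next to the spec file (everything here is illustrative):

// a fake define that ignores the dependency names and just captures the factory
var capturedFactory;
global.define = function (deps, factory) {
    capturedFactory = factory;
};

// loading consumer.js now registers its factory with the fake define
require("./consumer");

describe("consumer", function () {
    it("should ensure the dependency does some work", function () {
        var dependency = jasmine.createSpyObj("dependency", ["doSomeWork"]);
        // consumer.js returns a constructor-style function, so build it by hand
        var consumer = capturedFactory()(dependency);

        consumer.start();

        expect(dependency.doSomeWork).toHaveBeenCalled();
    });
});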
There’s also Squire, which looks like it might suit you.
https://github.com/iammerrick/Squire.js
I rarely use new in javascript and try to stick to functional style. Most of my code looks something like this:
define(['jquery', 'foo'], function($, foo) {
    function anExportedFunction(anArgument) {
        return aPrivateFunction(foo, anArgument);
    }

    function aPrivateFunction(foo, anArgument) {
        return foo.doSomethingCleverWith(anArgument);
    }

    return {
        thisIsAPublicMethod: anExportedFunction
    };
});
Sometimes I’ll keep the public method name and the function I’m binding it to the same for readability.
@Morgan:
Thanks for the link. I guess I’m still following the Crockford recommendations (for better or worse). I’ve not found any side-effects with this pattern and I find I never need inheritance in JS.
Requirejs is just helping us manage the mess of files and namespaces.
@Darren:
Squire had been recommended to me, but it is still effectively patching requirejs in some funky manner, which is something we’re trying to avoid on this project.