Software Test Engineering at Axial

At Axial, all of our Software Test Engineers are embedded in the development teams. We get involved early in the development life cycle to do the testing. We firmly believe that quality can be built in from the start, so we spend a lot of time educating other members of the team on how to think about testing. We encourage developers to test, write unit tests, and pair program. Everybody on the team should be able to test and write automation to support the testing activities. Our developers allocate a certain amount of time to write unit tests and are responsible for them. Testers and developers share ownership of the integration and API tests; this is where we pair up to build out test suites based on test ideas. We use mind-mapping tools and stickies, and brainstorm together to determine what needs to be automated. A good rule of thumb is to focus on automating the repetitive regression tests as well as scenarios based on recurring bug fixes. Automation offers invaluable support to us testers, freeing us to focus on the exploratory and smarter kinds of testing that catch more corner cases.

Pairing is very important when it comes to testing. Having a fresh set of eyes look at a piece of software helps us broaden our perspective. Pairing not only with other testers, but also with developers, product managers, customer service, sales representatives, and even end users, gives us both perspective and momentum.

We spend time creating documentation in written, visual, and video formats. This allows us to get new people on board quickly and to share knowledge across departments. As testers, we serve many stakeholders. By keeping documentation up to date, we enlighten the organization about testing, which generates more interest in it. One thing we value a lot at Axial is “dogfooding”; having people from all departments test is valuable to everyone. It provides us with feedback so that we can develop better software.

Some might say that testing is dead and that everything can be automated, but that could not be further from the truth. Testing is a craft. It is an intelligent activity performed by humans. Computers help us save time and money by automating what is already known, but having smart testers embedded in development teams brings real value. Communicating well and asking questions about development, risks, requirements, and so on are two of the most important skills a tester can have, especially on an Agile team. Asking questions to gain knowledge and communicating well will help you quickly identify risk areas and prevent development issues ahead of time, potentially saving the team both time and money. These abilities differentiate an excellent tester from a regular tester whose mindset is set entirely on just finding bugs.

There are many branches of testing to think of: functional is one; security, performance, UX, location and accessibility testing are some others. Tools can help us do our job better, but it is how we think and act as testers that makes the difference. Keeping a positive, solution-minded attitude and looking at issues in an objective manner helps to eliminate personal constraints in a team. We share an interest in quality, and work together as one to ship a quality product.

Staying connected to the software testing community, attending conferences and meetups such as CAST, TestBash, NYC Testers and Let’s Test, and interacting with great people in the field really helps motivate us, gives us new ideas and brings our craft to a whole new level.

We read a lot, listen, follow (and question) some of the thought leaders in the field. Here are three testing quotes from some of my favorites:

“The job of tests, and the people that develop and run tests, is to prevent defects, not find them” – Mary Poppendieck

“Documentation is the castor oil of programming.” – Gerald M. Weinberg

“Testing is an infinite process of comparing the invisible to the ambiguous in order to avoid the unthinkable happening to the anonymous.” – James Bach


If you are interested in chatting about testing, automation tools and different methodologies, please feel free to reach out to us at @axialcorps.

In upcoming blog posts on software testing, we are going to share hands-on tutorials and videos on how to set up test environments with Protractor and Gatling, as well as break down how we do pair testing. Stay tuned!

Validating JSON-RPC Messages

Here at Axial we have standardized on JSON-RPC 2.0 for our inter-service communications. We recently decided to start using JSON Schemas to validate our JSON-RPC 2.0 messages. JSON Schema (http://tools.ietf.org/html/draft-zyp-json-schema-04) is a flexible declarative mechanism for validating the structure of JSON data. In addition to allowing for definitions of simple and complex data types, JSON Schema also has primitives ‘allOf’, ‘anyOf’ and ‘oneOf’ for combining definitions together. As we will see, all three of these primitives are useful for validating even just the envelope of JSON-RPC 2.0 messages.

The simplest JSON Schema is just an empty object:

{ }

This JSON Schema validates against any valid JSON document. This schema can be made more restrictive by specifying a “type”. For example, this JSON Schema:

{ "type": "object" }

would validate against any JSON document which is an object. The other basic schema types are: array, string, number, integer, boolean and null. Most of the basic schema types come with additional keywords that further constrain the set of valid documents. For example, this JSON Schema could be used to validate an object that must contain a ‘name’ property which is a string:

{
    "type": "object",
    "properties": {
        "name": { "type": "string" }
    },
    "required": [ "name" ]
}

If “name” was not in the “required” list (or if “required” was not specified) then objects would not have to actually have a “name” property to be considered valid (however, if the object happened to have a “name” property whose value was not a string, then the object would not validate). Normally, objects are allowed to contain additional properties beyond those specified in the “properties” object; to change this behavior, there’s an optional “additionalProperties” keyword that can be specified with ‘false’ as its value.
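To make this concrete, here is a minimal sketch of checking documents against the schema above, using the Python jsonschema package (an assumption for illustration; any draft-04 validator behaves the same way):

from jsonschema import validate, ValidationError  # pip install jsonschema

schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"}
    },
    "required": ["name"]
}

validate({"name": "Axial"}, schema)   # passes silently
try:
    validate({"name": 42}, schema)    # fails: "name" is not a string
except ValidationError as e:
    print(e.message)

# With "additionalProperties": false, extra properties are rejected too.
strict = dict(schema, additionalProperties=False)
try:
    validate({"name": "Axial", "extra": 1}, strict)
except ValidationError as e:
    print(e.message)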

The combining primitives ‘allOf’, ‘anyOf’ and ‘oneOf’ can be used to express more complicated validations. They work pretty much as you might expect; ‘allOf’ allows you to specify a list of JSON Schema types which all must be valid, ‘anyOf’ means that any one of the list of types must be valid, and ‘oneOf’ means that EXACTLY one of the list of types must be valid. For example:

{
    "anyOf": [
        { "type": "string" },
        { "type": "boolean" }
    ]
}

would validate against either a string or a boolean.

It turns out that all three of these combining primitives are useful for accurately validating JSON-RPC objects. JSON-RPC objects come in two flavors: the request and the response. Requests have the following properties (example messages follow the list):

  • jsonrpc: A string specifying the version of the JSON-RPC protocol. Must be exactly “2.0”.
  • method: A string containing the name of the method to invoke. Must *not* start with “rpc.”.
  • id: An identifier for this request established by the client. May be a string, a number or null. May also be missing, in which case the request is a “Notification” and will not be responded to by the server.
  • params: Either an array or an object, contains either positional or keyword parameters for the method. Optional.
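
For concreteness, here is what a couple of requests look like (sketched as Python dicts for illustration; the method names are invented):

# A normal request: the server will respond using the same "id".
request = {"jsonrpc": "2.0", "id": 1,
           "method": "user.get", "params": {"user_id": 42}}

# A notification: there is no "id", so the server will not respond.
notification = {"jsonrpc": "2.0", "method": "log.event", "params": ["login"]}

# Invalid: method names starting with "rpc." are reserved.
bad_request = {"jsonrpc": "2.0", "id": 2, "method": "rpc.discover"}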

Responses must have these properties:

  • jsonrpc: As for the request, a string which must be exactly “2.0”.
  • id: This must match the ‘id’ of the request, but could be null if the request ‘id’ was null or if there was an error parsing the request ‘id’.

In addition, if the request was successful, the response must have a ‘result’ property, which can be of any type. Conversely, if the request was not successful, the response must NOT have a ‘result’ property but must instead have an ‘error’ property whose value is an object with these properties (again, examples follow the list):

  • code: An integer indicating the error type that occurred.
  • message: A string providing a description of the error.
  • data: A basic JSON Schema type providing more details about the error. Optional.
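
And here are matching success and error responses (again sketched as Python dicts; -32601 is the spec’s “method not found” error code):

# Success: has "result", and must NOT also have "error".
success = {"jsonrpc": "2.0", "id": 1,
           "result": {"user_id": 42, "name": "Axial"}}

# Error: has "error" instead of "result".
failure = {"jsonrpc": "2.0", "id": 2,
           "error": {"code": -32601, "message": "Method not found"}}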

Here’s a schema for validating JSON-RPC Request objects:

{
    "title": "JSON-RPC 2.0 Request Schema”,
    "description": "A JSON-RPC 2.0 request message",
    "type": "object",
    "properties": {
        "jsonrpc": { "type": "string", "enum": ["2.0"] },
        "id": { "oneOf": [
            { "type": "string" },
            { "type": "integer" },
            { "type": "null" }
        ] },
        "method": { "type": "string", "pattern": "^[^r]|^r[^p]|^rp[^c]|^rpc[^.]|^rpc$" },
        "params": { "oneOf": [
            { "type": "object" },
            { "type": "array" }
        ] }
    },
    "required": [ "jsonrpc", "method" ]
}

Note the use of “oneOf” in the definitions of the ‘id’ and ‘params’ properties. In addition, the ‘pattern’ in the definition of the ‘method’ property could use some explanation. Recall from above that method names starting with “rpc.” are not allowed in JSON-RPC. JSON Schema provides for the use of regular expressions to constrain string types, and the pattern here matches all strings except those starting with “rpc.”. If you are familiar with Perl-compatible regular expressions, you are probably thinking that this pattern would be more elegantly written as “^(?!rpc\.)”, which is certainly true, but for the widest compatibility it is recommended not to use “advanced” features like look-ahead in JSON Schema patterns.
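
A quick way to convince yourself that the pattern is right is to exercise it directly (a sketch using Python’s re module; JSON Schema patterns are unanchored searches, which re.search mimics):

import re

method_pattern = re.compile(r"^[^r]|^r[^p]|^rp[^c]|^rpc[^.]|^rpc$")

for method in ["user.get", "rpcx", "rpc", "rpc.discover", "rpc."]:
    print("%-14s %s" % (method, bool(method_pattern.search(method))))
# user.get       True
# rpcx           True
# rpc            True
# rpc.discover   False
# rpc.           False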

The JSON Schema for validating JSON-RPC Response objects is a bit more complicated:

{
    "title": "JSON-RPC 2.0 Response Schema",
    "description": "A JSON-RPC 2.0 response message",
    "allOf": [
        {
            "type": "object",
            "properties": {
                "jsonrpc": { "type": "string", "enum": ["2.0"] },
                "id": { "oneOf": [
                    { "type": "string" },
                    { "type": "integer" },
                    { "type": "null" }
                ] }
            },
            "required": [ "jsonrpc", "id" ]
        },
        {
            "oneOf": [
            {
                "type": "object",
                "properties": {
                    "result": { },
                    "jsonrpc": { },
                    "id": { }
                },
                "required": [ "result" ],
                "additionalProperties": false
            },
            {
                "type": "object",
                "properties": {
                    "error": {
                        "type": "object",
                        "properties": {
                            "code": { "type": "integer" },
                            "message": { "type": "string" },
                            "data": {
                                "anyOf": [
                                    { "type": "string" },
                                    { "type": "number" },
                                    { "type": "boolean" },
                                    { "type": "null" },
                                    { "type": "object" },
                                    { "type": "array" }
                                ]
                            }
                        },
                        "required": [ "code", "message" ]
                    },
                    "jsonrpc": { },
                    "id": { }
                },
                "required": [ "error" ],
                "additionalProperties": false
            } ]
        }
    ]
}

In this example, “allOf” is used to combine an object that defines the properties shared between error and success responses with a “oneOf” of two different objects that define the properties specific to either an error or a success response. The “additionalProperties” setting is needed because it is an error to supply both ‘result’ and ‘error’ properties. Because of the use of “additionalProperties” it was also necessary to specify the “jsonrpc” and “id” properties in both the error and success branches, but no further type information is required there because their types are fully specified in the first “allOf” definition. Although the “anyOf” used in the definition of the “data” property of the error object could be replaced with just an empty schema, it is convenient here as it clearly documents what types are allowed.
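
Assuming the two schemas above are saved to files and a validator such as the Python jsonschema package is available (both assumptions of this sketch), checking a response looks like this; note how a message carrying both ‘result’ and ‘error’ fails both “oneOf” branches:

import json
from jsonschema import validate, ValidationError

with open("jsonrpc_response_schema.json") as f:  # hypothetical file name
    response_schema = json.load(f)

validate({"jsonrpc": "2.0", "id": 1, "result": "ok"}, response_schema)  # passes

bad = {"jsonrpc": "2.0", "id": 1, "result": "ok",
       "error": {"code": -32600, "message": "Invalid Request"}}
try:
    validate(bad, response_schema)  # rejected by the "oneOf"
except ValidationError as e:
    print("rejected:", e.message)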

As you can see, JSON-RPC envelopes are actually rather tricky to validate accurately, but JSON Schemas are flexible and generic enough to do the job. We also use JSON Schemas to validate the parameters to, and return data from, our JSON-RPC methods; look for the details of how that works and how it integrates with our RPC service definitions in a future blog post.

GLitter – A Simple WebGL Presentation Framework

Example GLitter Page

A couple of weeks ago, I gave an Axial Lyceum talk on WebGL, a technology that allows hardware-accelerated 3D graphics to be implemented in the browser using pure javascript code. Before I came to Axial, I was doing a lot of 3D graphics work, which is how I came to learn WebGL. I had a moment of panic a few days before my talk when I realized it had been almost a year since I’d done any WebGL programming and I was feeling a little rusty. I was about to fire up PowerPoint and start creating what probably would have been a boring presentation when I had a flash of inspiration: I could implement the slides for my WebGL talk directly in WebGL!

This both forced me to get back into the swing of WebGL and resulted in a much more engaging and interactive presentation. I used the fantastic Three.js library to make a simple framework for my presentation. After my talk, a few of the attendees asked if they could get a copy of the code that I used for the presentation, so I spent a little time making the code a bit more modular and it is now available at http://github.com/axialmarket/GLitter/. Before I explain how the presentation framework works, a little background on three.js is necessary.

WebGL is a very powerful technology, but it is also very complicated and has a steep learning curve. Three.js (http://threejs.org) makes creating 3D graphics with WebGL drastically simpler. While you still need some understanding of matrix mathematics and 3D geometry, Three.js abstracts away some of the highly technical details of WebGL programming like writing fragment shaders in GLSL or manipulating OpenGL buffers. In Three.js, you create a scene, a camera and a renderer object, and then use the renderer object to render each frame. To actually render something, you add 3D objects to the scene. These 3D objects have a geometry and materials, a transformation matrix that specifies translation, rotation and scaling relative to the object’s parent, and the ability to contain other 3D objects.

With GLitter, you define a “Page” object for each step in the presentation that creates the 3D objects and behaviors needed to implement that step. The GLitter “Scene” object manages the Three.js scene, camera and renderer, implements transition logic for switching between steps, and provides some common keypress handling functionality. One of the neat things about WebGL is that it renders inside the HTML5 canvas, so it is easy to composite the WebGL scene with HTML content overlaid on top. In GLitter, there are a few different types of HTML content you can overlay on top of the WebGL canvas. First, each Page provides a title and optionally some subtitle content. Second, GLitter uses the dat.GUI control framework to allow Page objects to easily add controls for properties of javascript objects. Lastly, GLitter provides an “info” interface that can be used to show dynamic content.

To see how this works in practice, let’s create a presentation with two steps. The first will show a spinning cube and provide controls to change the cube’s size, and the second will show a sphere with controls to change the camera’s field-of-view, aspect ratio and near and far clipping planes. On the first step, we will show the cube’s transformation matrix in the “info” overlay and in the second, we will show the camera’s projection matrix.

We first create GLitter Page objects CubePage and SpherePage:

var CubePage = new GLitter.Page({
    title: "Cube",
    subtitle: "Spinning!",
    initializor: function (scene) {
        var context = {};
        var cubeMaterial = new THREE.MeshLambertMaterial({ color: 0xee4444});
        var cubeGeometry = new THREE.BoxGeometry(1, 1, 1);
        context.cube = new THREE.Mesh(cubeGeometry, cubeMaterial);
        context.cube.position.z = -2.5;
        scene.add(context.cube);

        var spin = function() {
            new TWEEN.Tween(context.cube.rotation)
                     .to({y: context.cube.rotation.y - 2*Math.PI}, 2000)
                     .start()
                     .onComplete(spin);
        }
        spin();
        scene.add(new THREE.PointLight(0x999999));
        scene.camera.position.z = 2.5;
        return context;
    },
    finalizor: function() {
        GLitter.hideInfo();
    },
    updator: function (context) {
        return function (scene) {
            GLitter.showInfo(GLitter.matrix2html(context.cube.matrix));
            return ! context.STOP;
        }
    },
    gui: function (scene, context) {
        scene.gui.add(context.cube.scale, 'x', 0.1, 5);
        scene.gui.add(context.cube.scale, 'y', 0.1, 5);
        scene.gui.add(context.cube.scale, 'z', 0.1, 5);
    }
});
var SpherePage = new GLitter.Page({
    title: "Sphere",
    initializor: function (scene) {
        var context = {};
        var sphereMaterial = new THREE.MeshLambertMaterial({ ambient: 0xee4444 });
        var sphereGeometry = new THREE.SphereGeometry(1, 20, 20);
        context.sphere = new THREE.Mesh(sphereGeometry, sphereMaterial);
        context.sphere.position.z = -5;
        scene.add(context.sphere);

        scene.add(new THREE.AmbientLight(0x999999));
        return context;
    },
    finalizor: function() {
        GLitter.hideInfo();
    },
    updator: function (context) {
        return function (scene) {
            GLitter.showInfo(GLitter.matrix2html(scene.camera.projectionMatrix));
            return ! context.STOP;
        }
    },
    gui: function (scene, context) {
        var upm = function(){scene.camera.updateProjectionMatrix()};
        scene.gui.add(scene.camera, 'fov', 1, 179).onChange(upm);
        scene.gui.add(scene.camera, 'aspect', 0.1, 10).onChange(upm);
        scene.gui.add(scene.camera, 'near', 0.1, 10).onChange(upm);
        scene.gui.add(scene.camera, 'far', 0.1, 10).onChange(upm);
    }
});

The “title” and “subtitle” values are pretty self-explanatory. The “initializor” function is called to initialize the page when GLitter transitions to it. It adds the desired 3D objects to the GLitter Scene object, including lights, and returns a context object holding any objects that need to be referenced later. The “finalizor” function is called just before GLitter transitions away from this page. The “updator” function is called just before every frame is rendered. The “gui” function is used to update controls in scene.gui, which is a dat.GUI object.

Note that in CubePage, the spinning of the cube is handled by the TWEEN javascript library. TWEEN requires an update call to be made in every frame, but this is handled automatically by GLitter.

Also note that the “updator” functions return “! context.STOP”. The idea here is that if the “updator” function returns a false value, rendering is paused. The GLitter Scene object intercepts ‘keydown’ events, and will set context.STOP to true if Enter or Space is pressed. In addition, if “n” or “p” is pressed, GLitter transitions to the next or previous step, respectively. Page objects can add handling for other keypresses by defining an ‘onKeydown’ function. If this function returns a true value, then the standard GLitter keypress handling is skipped.

Now that these Page objects are defined, we can create an HTML page that sets up the basic structure GLitter needs and loads all of the required files. Currently GLitter is not a javascript module, so we load all of the files explicitly:

<!DOCTYPE html>
<html>
  <head>
    <title>GLitter Blog Example</title>
    <meta charset="utf-8"> 
    <link rel="stylesheet" type="text/css" href="example/example.css">
    <script src="//rawgithub.com/mrdoob/three.js/master/build/three.js"></script>
    <script src="lib/tween.js"></script>
    <script src="lib/dat.gui.min.js"></script>
    <script src="lib/OrbitControls.js"></script>
    <script src="lib/EffectComposer.js"></script>
    <script src="lib/MaskPass.js"></script>
    <script src="lib/RenderPass.js"></script>
    <script src="lib/ShaderPass.js"></script>
    <script src="lib/CopyShader.js"></script>
    <script src="lib/HorizontalBlurShader.js"></script>
    <script src="lib/VerticalBlurShader.js"></script>
    <script src="GLitter.js"></script>
    <script src="Page.js"></script>
    <script src="Scene.js"></script>
    <script src="example/CubePage.js"></script>
    <script src="example/SpherePage.js"></script>
    <script src="example/example.js"></script>
  </head>
  <body>
    <div id="content" class="content">
        <div id="title" class="title"></div>
        <div id="subtitle" class="subtitle"></div>
    </div>
    <div id="info" class="info">
    </div>
  </body>
</html>

The files in lib/ are third-party libraries, and GLitter itself consists of GLitter.js, Page.js and Scene.js. The contents of CubePage.js and SpherePage.js are shown above, so all that’s left is example.js:

window.addEventListener('load',
    function () {
        CubePage.nextPage = SpherePage;
        SpherePage.prevPage = CubePage;
        var scene = new GLitter.Scene({});
        scene.initialize();
        document.getElementById('content').appendChild(scene.domElement);
        scene.loadPage(CubePage);
    }
);

You can see the completed example at: http://axialmarket.github.io/GLitter/example.html

There’s still quite a bit of work to do to make GLitter more powerful, but as you can see, it is already pretty easy to use, and doesn’t look too shabby either!

[Video] Having Fun with WebGL

On 2/25, our very own Ben Holzman stopped by to teach us all a little bit about 3D graphics on the Web, using WebGL. Ben’s presentation was 100% written in a WebGL presentation framework he calls “GLitter”, which he will be releasing shortly.

Out of everything Ben managed to pack into this Lyceum, perhaps the most impressive thing we learned was just how easy it’s becoming to do 3D graphics in the browser with tools like three.js.

Axial Lyceum: Having Fun With WebGL

[UPDATE] Watch the video from Ben’s talk here.

Join Ben Holzman on 2/25 to have some fun with WebGL.

Where: Axial HQ
When: Tuesday, February 25th, 6PM
RSVP: via EventBrite

WebGL is a new web standard that allows javascript code to control your computer’s graphics hardware, making it possible to create stunning animated 3D games and applications in the browser without the use of proprietary technologies like Flash or Silverlight. This exciting technology is very powerful, but also can be a little daunting if you’re not familiar with OpenGL or how the programmable pipeline inside your graphics card works.

In this Lyceum, Ben Holzman will take us on a brief tour of the basics of 3D graphics programming and tip a toe or two in some deeper waters, like what a quaternion is and what it has to do with computer graphics. Then he will introduce the three.js library which makes it much, much easier to create an animated 3D scene than just using raw WebGL.

Ben will finish by demonstrating how to use three.js to make a 3D animated version of the Axial logo.

More About Ben

Ben Holzman is a lead software engineer at Axial focused mainly on the front-end. He has over 18 years of full-stack software development experience in a diversity of industries, including finance, broadcasting, publishing and SaaS. A long-time and mostly reformed Perl developer, Ben enjoys hacking python and javascript these days and not having to deal with unstable and unwieldy proprietary real-time graphics software. He’s also a pretty decent keyboard player and has recently started learning to paint.

SPOKES: Towards Reusable Front-End Components

Hub and Bespoke

For some time here at Axial we’ve been migrating a large monolithic app to a set of small and simple services. One challenge that has come up in this process is how to share front-end components without unnecessarily coupling services together, and without imposing too many restrictions on how these front-end components can be implemented. The solution we’re evolving involves an abstraction we call a “spoke”.

What is a spoke?

A spoke is an executable javascript file that can contain javascript, CSS and HTML templates. In addition, a spoke can have dependencies on other spokes, which allows us to partition front-end components into small discrete chunks and at build time create a single loadable javascript file for a page that includes just the functionality we need. Now, you may be wondering how we embed CSS and HTML templates into an executable javascript file. We’re using a somewhat unsophisticated approach; we URI-encode the CSS or HTML content into a string and embed it in some simple javascript that decodes the content and dynamically adds it to the DOM.
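
As an illustration of the idea (not spokec’s actual code), the CSS embedding step might look something like this in Python:

# A sketch of the embed step; the real spokec may differ.
from urllib.parse import quote  # urllib.quote in Python 2

def embed_css(css_source):
    # URI-encode the stylesheet and emit javascript that decodes it
    # and appends it to the document head at load time.
    return ('$("<style type=\'text/css\'>").appendTo("head")'
            '.text(decodeURIComponent("%s"));' % quote(css_source))

print(embed_css(".username { white-space: nowrap; }"))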

A simple example

Let’s create a spoke for rendering a user’s name. This perhaps sounds like it’s too simple a task, but there could be some complexity to the logic required:

  • To save space, if the user’s full name would be more than 20 characters, we will render just their first initial followed by their last name.
  • If the user is an internal user, we want to annotate their name with (internal).
  • If the user is an internal user masquerading as a regular user, we want to annotate their name with (masq).

For this example, we will use a Backbone model and view, and an Underscore template, but these are implementation choices and not imposed on us just because we are creating a spoke.

Here is the Backbone model we will use:

var UsernameModel = Backbone.Model.extend({
    defaults: { first_name: "",
                last_name: "",
                is_internal: false,
                is_masq: false }
});

The view is pretty straightforward:

var UsernameView = Backbone.View.extend({
    className: 'username',
    render: function() {
        this.$el.html(this.template(this.model.attributes));
        return this;
    },
    template: _.template($('#username-template').html())
 });

We will store the Underscore template in a <script> tag with type “text/template”:

<script id="username-template" type="text/template">
    <% if (first_name.length + last_name.length >= 20) { %>
        <%= first_name.substr(0,1) %>.
    <% } else { %>
        <%= first_name %>
    <% } %>
    <%= last_name %>
    <% if (is_internal) { %>(internal)<% } else if (is_masq) { %>(masq)<% } %>
</script>

In addition, we have a CSS file to control the styling of the username:

.username {
    font-size: 18px;
    color: #333;
    white-space: nowrap;
}

To turn this into a spoke, all we have to do is store these source files in the spoke source tree:

js/models/Username.js
js/views/Username.js
html/username.html.tpl
css/username.css

Then we add a definition for this spoke (which we will call, surprise, surprise, “username”) to a spoke config in /etc/spoke/, for use by the “spoke compiler”, which is a python script spokec:

    # /etc/spoke/username.cfg
    [username]
    js     = [ 'models/Username.js', 'views/Username.js' ]
    html   = 'username.html.tpl'
    css    = 'username.css'
    spokes = 'backbone'

Spokes do not need to have all of these types of files; a spoke might contain only CSS or only javascript content. Note, also, that we have made the “username” spoke dependent on the “backbone” spoke. The definition of the “backbone” spoke in turn references the “underscore” spoke. When we use spokec to generate a spoke, these dependencies are followed and included in the output. As you probably anticipate, if a spoke is referenced multiple times, it only gets included in the output once.

Now that we’ve defined this spoke, here’s how we would call spokec to generate it:

spokec username [additional spokes] path/to/output.js

Each invocation of spokec generates a single executable javascript file containing all of the specified spokes and their dependencies. So typically a service will create a single spoke file for all of its pages, or sometimes a few different spoke files if the pages that service provides are significantly different. Currently we apply minification and fingerprinting to the spokes after generating them, but we will probably add this functionality directly to spokec soon.

Now, because we specified that “backbone” is a requirement for the “username” spoke, the resulting output is somewhat too large to paste here, but spokec has a feature that allows you to exclude specific dependencies from the generated spoke file by specifying them on the command-line prefixed with a ‘-‘. So, for example,

spokec username -backbone path/to/output.js

would create a spoke file with *only* the “username” spoke in it, which looks like this:

$("<style type='text/css'>").appendTo("head").text(decodeURIComponent(".username%20%7B%0A%20%20%20%20font-size%3A%2018px%3B%0A%20%20%20%20color%3A%20%23333%3B%0A%20%20%20%20white-space%3A%20nowrap%3B%0A%7D%0A"));
$("<div style='display: none'>").appendTo("body").html(decodeURIComponent("%3Cscript%20id%3D%22username-template%22%20type%3D%22text/template%22%3E%0A%3C%25%20if%20%28first_name.length%20%2B%20last_name.length%20%3E%3D%2020%29%20%7B%20%25%3E%0A%3C%25%3D%20first_name.substr%280%2C1%29%20%25%3E.%0A%3C%25%20%7D%20else%20%7B%20%25%3E%0A%3C%25%3D%20first_name%20%25%3E%0A%3C%25%20%7D%20%3E%0A%3C%25%3D%20last_name%20%25%3E%0A%3C%25%20if%20%28is_internal%29%20%7B%20%25%3E%28internal%29%3C%25%20%7D%20%25%3E%0A%3C%25%20else%20if%20%28is_masq%29%20%7B%20%25%3E%28masq%29%3C%25%20%7D%20%25%3E%20%0A%3C/script%3E%0A"));
var UsernameModel = Backbone.Model.extend({
   defaults: {
       first_name: "",
       last_name: "",
       is_internal: false,
       is_masq: false }
});

;
var UsernameView = Backbone.View.extend({
   className: 'username',
   render: function() {
       this.$el.html(this.template(this.model.attributes));
       return this;
   },
   template: _.template($('#username-template').html())
});
;

As you can see, the implementation of spokec assumes that jQuery is already included on the page, which for us is a practical assumption but would be easy to change if we wanted to. It should also be clear that the spoke abstraction makes very few other assumptions about how a specific spoke is implemented, as long as it can be represented as a series of javascript, CSS and HTML files. This allows us the flexibility to change the tools and libraries we are using while maintaining a consistent and logical structure for our reusable components.

Getting Started

To start using spoke today, install it using pip:

sudo pip install spoke

The Future of Spokes

One type of content that we do not yet support in spokes is images; thus far we have just been using data URLs when we’ve needed to include images, but particularly for larger images this may become somewhat impractical. Another future enhancement we’ve been considering would make it much safer and easier to create reusable components by providing a way to automatically prefix CSS/HTML classes (and perhaps IDs) with a spoke identifier so that we can create very generic CSS class names without fear of a conflict with CSS classes in a different spoke. Doing this for CSS and HTML content is relatively straightforward (see the sketch below), but to do so in javascript is a little trickier, although we have some ideas. Look for a future blog post once we’ve got solutions to this problem that we’re happy with!
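
For the CSS half of that idea, a first cut could be as simple as rewriting class selectors with a regular expression (a rough sketch only; a real implementation would want a proper CSS parser so it doesn’t touch url() values and other non-selector contexts):

import re

def prefix_css_classes(css_source, spoke_name):
    # Naively rewrite ".foo" as ".foo-<spoke_name>".
    return re.sub(r"\.([A-Za-z_][\w-]*)",
                  r".\1-%s" % spoke_name, css_source)

print(prefix_css_classes(".username { color: #333; }", "username"))
# .username-username { color: #333; }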

Axial Lyceum: Is Crockford Even a 10 Anymore?


[UPDATE] See pictures from the event here.

Join us for a beer-soaked panel on June 6 with some of the best front-end JavaScript programmers in New York covering the current state-of-the-art in UI programming tools and patterns.

Topics:

  • What’s here to stay and what’s a fad?  Which libraries will last, and which will bite the dust (our bet is CoffeeScript)?
  • Backbone.js and other client-side MVCs — Will the pattern last?  If so, which will be the jQuery of MVC, and which the Prototype.js?
  • node.js – Getting lots of adoption, and becoming more feature-rich every day.  Is this the new best tool for the server side?
  • Push-State, SVG, Canvas and other emerging browser features

Panelists:
Mark Meyer — VP Engineering at PayPerks
Ben Holzman — Software Engineer at Axial
Fire Crow — Front-End Engineer at ShutterStock

Register at EventBrite.

Minimalist Backbone.js

Backbone.js is a popular tool for organizing front-end javascript code. It is used by well-known sites like Airbnb, Hulu and USA Today. It is also complex and full of features. Here at Axial, we have started building new services using a subset of Backbone to reduce the complexity but retain some of the key benefits.

The fundamental abstraction underlying Backbone is that data is stored in Models, and HTML rendering and interactivity are handled by Views that may be backed by Models. This pattern not only helps organize front-end code, but its consistent use eliminates many common errors, especially errors where different representations of the same data are inconsistent.

For example, let’s say we are creating a widget that has a bunch of checkboxes and also displays a count of the total number of selected checkboxes. Doing this the “old fashioned” way, we might add a change handler to the checkbox elements and then update the count whenever any checkbox changes:

HTML

<div class="checkboxWidget">
Count: <span class="count">1</span><br>
<input type="checkbox" name="checkbox1" /><br>
<input type="checkbox" name="checkbox2" checked="checked" /><br>
...
</div>

JavaScript

$('.checkboxWidget input[type=checkbox]')
 .on('change', function() {
   $('.count')
     .html($('.checkboxWidget input[type=checkbox]:checked').length);
 });

This is pretty straightforward, but if some checkboxes are checked when the widget is initially loaded, then we need a separate piece of code to set the count on load:

$(
  function(){
    $('.count')
      .html(
        $('.checkboxWidget input[type=checkbox]:checked')
          .length
      );
   }
);

Now this is clearly still not very complicated, but once we have updates in more than one place, we have created an opportunity for them to get out-of-sync. Also, the HTML is probably being generated by the backend, which means the code for this widget is split into multiple files in multiple languages, with some of it executing on the server and some of it executing in the browser. For an example as simple as this, the complexity is not hard to manage, but it can quickly grow out of control.  Here’s how we might approach this in Backbone:

var model = Backbone.Model.extend({
  defaults: {
    'checkboxes': [],
    'selected': {}
  }
});

var view = Backbone.View.extend({
  initialize: function() {
    this.listenTo(this.model, 'change', this.render);
  },
  render: function() {
    // Backbone automatically creates a DOM element for
    // our view in this.el; this.$el is just this element
    // wrapped with jQuery.
    var $el        = this.$el,
        checkboxes = this.model.get('checkboxes'),
        selected   = this.model.get('selected');

    $el.html('Count: ' + _.size(selected) + '<br>');
    _.map(checkboxes,
      function (name) {
        var $checkbox = $("<input type='checkbox' name='" + name + "'>");
        if (selected[name]) {
          $checkbox.attr('checked', 'checked');
        }
        $el.append($checkbox).append($('<br>'));
      }
    );

    return this;
  },
  events: {
    'change input[type=checkbox]': 'checkboxChanged'
  },
  checkboxChanged: function (evt) {
    var selected = _.clone(this.model.get('selected')),
        $cb      = $(evt.target),
        name     = $cb.attr('name');

    if ($cb.is(':checked')) {
      selected[name] = true;
    } else {
      delete selected[name];
    }
    this.model.set('selected', selected);
  }
});

Ok, this might seem complicated, but it’s really not too bad. We have one model defined that contains a list of checkbox names and a ‘selected’ object whose keys are the checkbox names that are currently selected. The view takes this model and generates the corresponding HTML in the render() method. In this example, the HTML is created directly in render() using jQuery. Alternatively, any javascript templating system (e.g., Mustache, Handlebars or even just Underscore templates) could be used instead. The view listens for changes in the model, and automatically re-renders whenever there’s a change (that’s the reason we access the model attributes with .get() and .set(); it allows Backbone to keep track of changes). Because the count and the checkboxes are both being rendered by the same piece of code, with the same underlying data model, there’s no possibility that they could be out-of-sync.

If you are familiar with Backbone, you might be thinking that we could use Backbone.Collection for managing the set of selected checkboxes, since we could directly listen to changes in the collection instead of clone()ing the selected object every time it changes. That is certainly true, and probably not unreasonable, but I think in this case it would have added additional complexity that we don’t need.

Now, the class definitions above won’t do anything by themselves; we need to actually instantiate a view object and attach the rendered view to the page in order to see anything:

var checkboxModel = new model(
  {checkboxes: ['Checkbox 1', 'Checkbox 2'],
   selected: { 'Checkbox 2': true }}
);
$('body').append(
  new view(
    {model: checkboxModel }
  ).render().$el
);

We could have instead attached view.$el to the body inside the render() method, but delegating this responsibility to code outside of the views allows all the DOM to be generated and then rendered by the browser once. In this case, it doesn’t make much difference, but if we were creating lots of content, rendering only once would be an important optimization.

To really start to see the power of using a Model/View front-end pattern, let’s consider what would happen if we needed to dynamically add some checkboxes. Using the old-fashioned methodology, we might have a server template that returns a bunch of checkboxes that we request via AJAX and then add in to the ‘checkboxWidget’ class:

HTML

<input type='checkbox' name='checkbox3' checked='checked' />
<input type='checkbox' name='checkbox4' />
...

 

JavaScript

$.get('/more/checkboxes/html', function (data) {
  $('.checkboxWidget').append(data);
  // and we have to update the count...
  $('.count').html(
    $('.checkboxWidget input[type=checkbox]:checked').length
  );
});

With Backbone, this becomes simpler; we can have the server just feed us some JSON for the new checkboxes, and then we simply have to update the ‘checkboxes’ array and ‘selected’ object in the model:

JSON

{ "checkboxes": ["Checkbox 3", "Checkbox 4"],
  "selected":   {"Checkbox 3": true} }

JavaScript

$.get('/more/checkboxes/json', function (data) {
  checkboxModel.set('checkboxes',
    checkboxModel.get('checkboxes').concat(data.checkboxes));

  checkboxModel.set('selected',
    $.extend({}, checkboxModel.get('selected'), data.selected));
  // that's it...we changed the model so the view will automatically
  // re-render with the new checkboxes and count!
});

Although we still have roughly the same number of lines of code, the conceptual overhead is reduced by using Backbone. When we want to change the data, all we have to worry about is changing the data; the view will then take care of correctly rendering the changed data. Because each view renders within a DOM element that it creates, we do not need to add extra markup like the ‘.checkboxWidget’ div, thus simplifying the markup. We don’t have to split knowledge about how the data is rendered between code running on the server and code running in the browser. And we can have the server just send JSON instead of HTML, which is simpler and more efficient.

Backbone includes a sync facility for synchronizing models with data on the backend. We considered using this pattern at Axial, but we decided that it makes more sense for the models in the front-end to be separate from the models in the back-end. Instead of using a full REST interface, we simply defined a few JSON-RPC endpoints to deliver the data the front-end needs and wrote a simple javascript controller to make requests and marshal the data into our models.

The great thing about Backbone is it supports these usage patterns well: views don’t care how the models get their data, and models don’t care how the views are implemented. The great thing about Axial is developers are empowered to select the best tools for the job, and encouraged to find patterns that are efficient and pragmatic.

Acing the Front-End Interview — What is a Closure?

It’s becoming an increasingly common question as more and more Front-End candidates are getting back to Javascript’s functional roots.  But ask most Front-End candidates what a closure is, and you’ll nearly always get the same response:

A closure is an anonymous function

~ Approximately 9/10 Candidates

Of course, in Javascript this isn’t false (strictly speaking):

var i = 0;               // i is a variable in global scope
(function() { ++i; })(); // execute an anonymous function incrementing
                         // the variable i from its definition scope
console.log(i);          // 1, not 0, due to the magicks of closures!

And of course it makes sense, given Javascript’s Lisp heritage, because the Lisps and Schemes and Haskells of the world are fond of their lambdas and their higher-order functions:

// higherOrderIncrementor is higher-order because it returns a function.
function higherOrderIncrementor() {
    var i = 0;               // i is a local variable in function scope
    function incrementor() { // incrementor is a named function that
        return ++i;          // encloses i, and increments it
    }
    return incrementor;      // return our non-anonymous closure! see
                             // what we did there?
}

So we’ve created the closure above, but let’s stop to understand its magicks.  The first thing to understand here is that our closure incrementor encloses the variable i at definition time. That’s a fancy way of saying that incrementor has access to use the variable i from higherOrderIncrementor whenever incrementor is called:

var inc = higherOrderIncrementor();
console.log(inc()); // console.log(1)

Now this alone doesn’t make the closure special …

/* this is what makes the javascript closure special */
console.log(inc()); // console.log(2)

What’s remarkable here is not that the function has access to a variable defined at the same scope as the function itself, it’s that the function re-evaluates that variable at call-time to the value assigned to the variable name i back in the definition scope. This evaluation to the assigned value in definition scope, and re-assignment on modification to the variable (++i) is pretty special. This is why closures are also called Lexical Closures.

Closures are So Special, Python 2 Doesn’t Really Have Them

By way of example, let’s look at what Python does in that situation:

def higher_order_incrementor():  # a higher order function to increment
    i = 1                        # set i in function scope
    def failed():                # define our closure
        i += 1                   # increment i -- with a useless comment
        return i                 # return our incremented var
    return failed                # return our star-crossed ``closure''

inc = higher_order_incrementor() # get an incrementor
inc()                            # raises UnboundLocalError -- fail

Of course, python 3 is capable of a closure (see the nonlocal keyword), and python 2 is capable of reading the variable i from within failed … it just can’t write to it.
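
For the curious, the working Python 3 version looks like this:

def higher_order_incrementor():      # the version that works in python 3
    i = 0
    def incrementor():
        nonlocal i                   # rebind i in the enclosing scope
        i += 1
        return i
    return incrementor

inc = higher_order_incrementor()
print(inc())                         # 1
print(inc())                         # 2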

What’s Definition Scope Anyway?

So back to our JS incrementor:

var inc1 = higherOrderIncrementor(), // one incrementor
inc2 = higherOrderIncrementor();     // two incrementor, red incrementor ...
console.log(inc1());                 // console.log(1)
console.log(inc2());                 // console.log(1)

This shows that inc1 and inc2 have different definition scopes, and each preserves its own definition scope intact at calling time:

console.log(inc1()); // console.log(2)
console.log(inc2()); // console.log(2)

Which means we can do fun things like using arguments within closures:

// log a value known now, at some time later
function squawk(logThis) {
    // now this is an anonymous function
    return function () {
        console.log(logThis);
    }
}
var squawks = [];
for (var i=0;i<5;++i) {
    squawks.push(squawk(i)); // load up some values to be logged
}
// pass them around as much as you want, and at some point ...
for (var i=0;i<5;++i) {
    squawks[i](); // console.log(0), console.log(1)...
}

Definition scope also allows many different closures to collude in private with one another:

/* construct an object for storing, verifying and modifying a secret */
function secrets(sharedSecret) {
    function verifySecret(secret) {
        return (secret === sharedSecret);
    }
    function changeSecret(oldSecret, newSecret) {
        if (verifySecret(oldSecret)) {
            sharedSecret = newSecret;
            return true;
        }
        return false;
    }
    return { 'verifySecret': verifySecret,
             'changeSecret': changeSecret }
}

Our secrets higher-order function can be used to store, validate, and securely change a secret, while preventing direct read/modification access to the stored secret after call-time:

var sec = secrets('password?wowthatsabadpassword.');
sec.verifySecret('password?');                                   // false
sec.changeSecret('password?wowthatsabadpassword.', 'password?'); // true

Useful Use of Closures

One very useful use of closures is to minimize (or in some cases eliminate) the footprint of your library or module:

(function(scope, attr) {
    var timesCalled = 0,                 // track the number of calls to our API
        public = {                       // the API bound to scope.attr
            'foo': function() {
                console.log('foo');
            },
            'bar': function() {
                console.log('bar');
            },
            'timesCalled': function() {  // a function to report the number of times any method
                return timesCalled;      // (including this one) has been called
            }
        }

    for (var meth in public) {           // use a decorator pattern (with another closure)
        public[meth] = (function(meth) { // to increment timesCalled, bonus: why another closure?
            var origMeth = public[meth];
            return function() {
                ++timesCalled;
                return origMeth.apply(this, arguments);
            }
        })(meth);
    }
    scope[attr] = public;                // assign `public` to scope[attr] as passed in
})(window, 'module')                     // we just modify our module here

window.module.foo()                      // console.log('foo')
window.module.bar()                      // console.log('bar')
window.module.timesCalled()              // return 3

For Bonus Points

To get your application to the front of the line at Axial, decipher and send an explanation of what the following obfuscated function does, and why it’s useful, to matt.story+cuttheline@axial.net:

function a(b, c) {
    var d = Array.prototype.slice.call(arguments, 2)
    return function() {
        return c.apply(b, Array.prototype.slice.call(arguments, 0).concat(d));
    }
}

Note: We don’t code like this at Axial. Well … Most of us don’t. Hint: Think Prototype.js