Server-side CSS Modules with Babel

August 22, 2016 | css-modules, babel

I had a requirement to use CSS Modules with server-side rendering. It took me a while to figure out, so I thought I'd write the process down.

In order to set up CSS Modules, we need Babel to turn our generic class names (.container) into localised class names like .style__container___fK43b. There is a module called babel-plugin-css-modules-transform to help us with this task.

npm install -D babel-plugin-css-modules-transform

You can set this module up in your .babelrc file by adding it to the plugins array.

{
    "plugins": [
        ["css-modules-transform", {
            "generateScopedName": "[name]__[local]___[hash:base64:5]"
        }]
    ]
}

I've added this to my .babelrc plugins array with a custom generateScopedName option, which combines the filename, the local class name and a hash of the contents to create the new localised class name.

You can specify the extensions it should look out for; .css is the obvious one, but what if you want to use a preprocessor like SCSS or Stylus?

I'll show you how to set up the config to preprocess Stylus files so they can be used by the CSS modules transform.

{
    "plugins": [
        ["css-modules-transform", {
            "generateScopedName": "[name]__[local]___[hash:base64:5]",
            "extensions": [".styl"],
            "preprocessCss": "./stylus-require-hook.js",
            "context": "."
        }]
    ],
    "ignore": ["./stylus-require-hook.js"]
}

IMPORTANT - Set your context to whatever your webpack context is set to, so that Babel can generate exactly the same class names as webpack.

I've added .styl to the extensions attribute so the plugin knows to look for my Stylus files, and a preprocessCss attribute which points to a file I've called stylus-require-hook.js. I've also added that file to Babel's ignore list because I don't want Babel to transform it.

The Stylus require hook is needed to preprocess the Stylus into CSS so it can be used by the CSS Modules transform. It's a simple hook which takes raw Stylus code and uses the stylus module to convert it to CSS.

/* stylus-require-hook.js */
const stylus = require('stylus');

module.exports = (styl, filename) =>  
    stylus(styl)
        .set('filename', filename)
        .render();

That's it: Babel will now process your CSS class names into localised class names, allowing you to use CSS Modules in your project. On the client side, if you're using webpack, you can set up css-loader to hash the class names in the same way, so the browser can pick up your CSS module and use it.
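For completeness, the matching client-side setup (webpack 1 era, to match this post) hashes via css-loader's localIdentName, which must mirror generateScopedName exactly. The loader string syntax here is from memory, so treat it as a sketch:

```javascript
// webpack.config.js (webpack 1 syntax)
// localIdentName must match the generateScopedName pattern in .babelrc,
// and context must match the babel plugin's context setting.
module.exports = {
    context: __dirname,
    module: {
        loaders: [
            {
                test: /\.styl$/,
                loader: 'style!css?modules&localIdentName=[name]__[local]___[hash:base64:5]!stylus'
            }
        ]
    }
};
```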

babel-loader's hidden feature - babelrc: false

August 22, 2016 | babel, webpack
I've spent three days configuring webpack and Babel to deal with CSS Modules while doing server-side rendering. I've just discovered an undocumented setting in webpack's babel-loader that solves the mystery of my disappearing CSS.

On the server, I've got a .babelrc file which uses babel-plugin-css-modules-transform.

{
    "presets": ["es2015-node4", "react"],
    "plugins": [
        "add-module-exports",
        "syntax-async-functions",
        "transform-async-to-generator",
        "transform-class-properties",
        "transform-object-rest-spread",
        ["css-modules-transform", {
            "generateScopedName": "[name]__[local]___[hash:base64:5]",
            "extensions": [".styl", ".css"],
            "preprocessCss": "./stylus-require-hook.js"
        }]
    ],
    "ignore": ["./stylus-require-hook.js"]
}

On the client side, webpack builds my code: babel-loader transforms my JavaScript, and a regex catches all the .styl files and pushes them through the stylus loader, then the css loader with the CSS Modules configuration, and finally the style loader.

I could get the server-side code to correctly create the hashed module names, but on the client I couldn't get webpack to pick up the Stylus files and transform them into CSS Modules. I checked the regexes multiple times, along with every other setting I could find. I'd rewritten my webpack config at least twice from scratch and was still seeing the issue: my build output wouldn't contain any CSS files despite using the Extract Text Plugin.

I eventually inspected the JavaScript created by my build and discovered that it contained a lot of CSS attribute names, and that's when it clicked. My babel-loader config was picking up my server-side .babelrc and running my Stylus files through the CSS Modules transform.

I'd checked the documentation a few times but couldn't see anything about the default behaviour of picking up the .babelrc file. It's a strange default, because we compile to different targets with different features for server and web.

I discovered the setting babelrc: false in one of the GitHub issues and gave it a go. It has to be supplied as an attribute on the query object passed to babel-loader.

It should look like this:

{
    test: /\.js$/,
    exclude: /(node_modules|bower_components)/,
    loader: 'babel',
    query: {
        babelrc: false,
        presets: ['es2015', 'react'],
        plugins: [
            'transform-class-properties',
            ['transform-es2015-classes', { loose: true }],
            'add-module-exports',
            'transform-async-to-generator',
            'syntax-async-functions',
            'transform-object-rest-spread',
            'transform-runtime'
        ]
    }
}

This means babel-loader will only use the settings you've provided in the query object, allowing you to have entirely different Babel configurations for client and server.

What actually gets included in your npm module?

August 01, 2016 | npm, nodejs, packages, publishing

Npm automatically includes some files. From the documentation (https://docs.npmjs.com/files/package.json#files) we can see the following files are always included regardless of settings:

  • package.json
  • README (and its variants)
  • CHANGELOG (and its variants)
  • LICENSE / LICENCE

There are also some files that always get ignored:

  • .git
  • CVS
  • .svn
  • .hg
  • .lock-wscript
  • .wafpickle-N
  • *.swp
  • .DS_Store
  • ._*
  • npm-debug.log

You can tell npm to include other files using the files array in your package.json. This is an array of files to include in your package; you can also specify a folder name, which will include all the files in that folder.
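For example, a package that should ship only its build output might declare (names and paths here are hypothetical):

```json
{
    "name": "my-package",
    "version": "1.0.0",
    "main": "lib/index.js",
    "files": ["lib"]
}
```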

You can exclude files by creating a .npmignore file. This is a list of files or folders that will not be included in your published package. In the absence of a .npmignore file, npm will pick up your .gitignore file and use it.

Sometimes you need to exclude a file from git but include it in your package. To get around this, as I've mentioned here, create an empty .npmignore file so npm doesn't pick up your .gitignore file.

I've done some experimenting to see which config takes precedence over what.

Experiment time

Setup

Folder structure

ignore-test-1/  
|--index.js
|--test.js
|--package.json

.gitignore

index.js  
test.js  

.npmignore

index.js  
test.js  

files array

"files": ["index.js", "test.js"]

Test: files array + .npmignore

Result: package.json, index.js & test.js included

Winner: files array


Test: files array + .gitignore

Result: package.json, index.js & test.js included

Winner: files array


Test: just .npmignore

Result: package.json & index.js included


Test: just .gitignore

Result: package.json & index.js included


Conclusion

The files array is the source that can't be overridden by .npmignore or .gitignore, and index.js gets included even when it's flagged to be ignored (presumably because it's the package's main file, which npm always includes).

This conclusion differed from what I expected after reading the documentation, so I ran another experiment with a more complicated folder structure. The results point to the same conclusion: the files array in package.json cannot be overridden, but if you forget to add a file to the files array, the .npmignore and .gitignore rules kick in.

NPM uses your .gitignore file

July 03, 2016 | nodejs, npm, packages

The Problem

I recently published a package to npm and had an issue raised on GitHub telling me that the build artifact hadn't been published.

The helpful person who raised the issue informed me that in the absence of an .npmignore file, npm will use your .gitignore file.

At the time, I hadn't wanted to push the build artifact to GitHub, so I'd stuck the build directory in my .gitignore file. That meant that when I published the module to npm, everything I'd excluded from GitHub was excluded from the package too.


The Solution

The workaround for this is to create an empty .npmignore file; npm will then find this file and use it instead of your .gitignore file.
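Creating the empty file is a one-liner. In practice you'd run this in your package root; the scratch directory here is just to keep the demo self-contained:

```shell
# Work in a scratch directory for the demo; run this in your
# package root in practice.
cd "$(mktemp -d)"

# An empty .npmignore stops npm falling back to .gitignore
touch .npmignore
```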

I couldn't believe this was the intended functionality, so I checked the npm docs, and the documentation states exactly this.


The Docs

https://docs.npmjs.com/misc/developers#keeping-files-out-of-your-package

Use a .npmignore file to keep stuff out of your package. If there's no .npmignore file, but there is a .gitignore file, then npm will ignore the stuff matched by the .gitignore file. If you want to include something that is excluded by your .gitignore file, you can create an empty .npmignore file to override it. Like git, npm looks for .npmignore and .gitignore files in all subdirectories of your package, not only the root directory.


The Lesson

So I really should have RTFM, but this is just something I'd totally overlooked and didn't expect. I thought I'd share my lesson just in case anyone else comes across this issue.

Using console.time to profile performance in your node application

June 09, 2016 | nodejs, Profiling, Javascript
I was writing an Express application when I started to notice that one of the routes was taking quite a while to respond to a request. The route was doing some heavy lifting, shaping data from an API.

I needed to figure out where the bottleneck was and optimise it.

How do I start to figure out how and where to optimise?

I could run the V8 profiler (https://nodejs.org/en/docs/guides/simple-profiling/), pipe the result into a file, load that into Chrome and look at the metric table and flame graph. But I find this too low-level when I already roughly know the area where the bottleneck is happening.

Node provides two APIs to deal with this. (https://nodejs.org/api/console.html)

console.time(label)

Starts a timer that can be used to compute the duration of an operation. Timers are identified by a unique label. Use the same label when you call console.timeEnd() to stop the timer and output the elapsed time in milliseconds to stdout. Timer durations are accurate to the sub-millisecond.

console.timeEnd(label)

Stops a timer that was previously started by calling console.time() and prints the result to stdout.

To use these, put a console.time('label') before the function you want to profile and a console.timeEnd('label') after it and then run your app.

Here's an example of profiling native Array.prototype.map and Array.prototype.filter against their lodash counterparts on an array of a million values:

const _ = require('lodash');  
const dataset = [];  
for (let i = 0; i < 1000000; i++) {  
    dataset.push(i);
}

console.time('map');  
const squared = dataset.map(x => x * x);  
console.timeEnd('map');

console.time('lodash map');  
const lodashSquared = _.map(dataset, x => x * x);  
console.timeEnd('lodash map');

console.time('filter');  
const filtered = dataset.filter(x => x % 2 === 0);  
console.timeEnd('filter');

console.time('lodash filter');  
const lodashFiltered = _.filter(dataset, x => x % 2 === 0);  
console.timeEnd('lodash filter');

The output to the console looks like this:

map: 199.207ms
lodash map: 16.889ms
filter: 123.355ms
lodash filter: 21.634ms

That's a pretty insane time saving on Node v6.1.0, and it could help you squeeze a bit more performance out of your application.

If you're in the browser, some browsers have similar tools built in that give you even more information. (https://github.com/DeveloperToolsWG/console-object/blob/master/api.md#consoleprofilelabel)

console.profile(label)

Calling this function initiates a JavaScript CPU profile with an optional label. To complete the profile, call console.profileEnd().

console.profileEnd(label)

Stops the current JavaScript CPU profiling session, if one is in progress, and prints the report to the Profiles panel.


Error handling in Express with Async/Await routes

*Async/Await for me is one of the best things to not quite happen to JavaScript yet.*

As we all know from Object.observe, you're never really sure what's going to end up in a language until it actually lands.

But I've been transpiling async/await for a while now and I love it.

It makes asynchronous code look great and really sorts out any issues with scoping of variables and nested promises getting out of hand.

Error handling with async/await generally takes the form of wrapping everything in a try/catch.

async function doSomething() {
    try {
        const result = await goDoSomething();
        return { data: result.data };
    } catch (err) {
        console.error(err);
        return {};
    }
}

In Express, you'll want to be passing the errors that get thrown to an error handler middleware sitting at the end of your stack. But just how would you go about doing this with an async route?

Well this is the obvious option:

app.get('/', async (req, res, next) => {
    try {
        const result = await goDoSomething();
        res.send(result.data);
    } catch (err) {
        next(err);
    }
});

app.use((err, req, res, next) => {  
    /* do something with the error */
    console.error(err);
});

This can get a bit cumbersome when dealing with a lot of routes, writing the same try/catch over and over again.

What if we could do it automatically for each route?

Well, because an async function returns a promise, we can hook onto the catch function and catch those errors from the promise. We can create a wrapper that will provide the error handling for us.

What would that look like?

function catchAsyncErrors(fn) {  
    return (req, res, next) => {
        const routePromise = fn(req, res, next);
        if (routePromise.catch) {
            routePromise.catch(err => next(err));
        }
    }
}

This piece of code takes an async route function as a parameter and returns a function with the signature (req, res, next).

Essentially, it takes a route and returns a route with a catch handler.

What happens when the route is called?

The fn(req, res, next) passes the request, response and next into our original route function and this returns a promise.

We then check if that returned object has a catch attribute on it (if so, we know it's a promise) and we can now hook into the catch function of the promise and call next(err) to pass it directly to our error handling middleware.

So altogether, taking the example above, this could look something like:

function catchAsyncErrors(fn) {  
    return (req, res, next) => {
        const routePromise = fn(req, res, next);
        if (routePromise.catch) {
            routePromise.catch(err => next(err));
        }
    }
}

async function asyncRoute(req, res, next) {
    const result = await goDoSomething();
    res.send(result.data);
}

app.get('/', catchAsyncErrors(asyncRoute));

app.use((err, req, res, next) => {  
    /* do something with the error */
    console.error(err);
});

This is a great way to avoid having to try/catch every time you make an async call, and the errors bubble up to the error handler every time during your route execution.
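As a quick sanity check outside Express, you can call the wrapped route with a stubbed next and watch the error arrive. Note that I've made the wrapper return the promise here, a small addition of my own, so the demo can wait on it:

```javascript
// Variant of the wrapper that also returns the route's promise,
// purely so the demo below can observe when it settles.
function catchAsyncErrors(fn) {
    return (req, res, next) => {
        const routePromise = fn(req, res, next);
        if (routePromise && typeof routePromise.catch === 'function') {
            routePromise.catch(next);
        }
        return routePromise;
    };
}

// A failing async route, and a stubbed next that records the error
const errors = [];
const wrapped = catchAsyncErrors(async () => {
    throw new Error('boom');
});

wrapped({}, {}, err => errors.push(err))
    .catch(() => {}) // swallow the rejection the demo itself sees
    .then(() => {
        console.log(errors[0].message); // 'boom'
    });
```

The internal catch hands the error to next exactly as the Express error middleware would receive it.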

I've encapsulated this in a module, which can be found here: https://github.com/madole/async-error-catcher