Lightweight rendering of React to strings

For a recent project I needed to be able to render React components (or really, just React-like components) into plain HTML strings. Of course, React provides a mechanism to do this: renderToStaticMarkup in its react-dom library. But wow, that library is a really big dependency to import if I don’t want any of the DOM reconciliation stuff.

Also, it turns out I don’t even need React lifecycle events for my project. I don’t need component state either. I just want to render a bunch of stateless components with props and children. Something like this:
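Something along these lines, say. The component names here are made up for illustration, and they return plain strings rather than JSX so the sketch stands on its own:

```javascript
// Hypothetical stateless components: props in, markup out.
// No state, no lifecycle events, just functions.
const Greeting = ( props ) => '<h1>Hello, ' + props.name + '!</h1>';
const Page = ( props ) => '<div class="page">' + props.children.join( '' ) + '</div>';

Page( { children: [ Greeting( { name: 'World' } ) ] } );
// '<div class="page"><h1>Hello, World!</h1></div>'
```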

Well, maybe there’s a way to just import part of the rendering engine… but then I realized I had yet another requirement: I need to be able to modify the string version of every component as it is created. I can’t do that with React. I need a custom renderer.

Happily, with some experimentation I learned that it’s not that hard to create one! Below you can see my version of renderToString(). It accepts both stateless functional components and component classes.

Of course, the version below is a bit naive. I’m certain there are many edge cases of rendering which it does not cover, and as I said above, it does not support state or lifecycle events at all. That said, it works very well for my own purposes, and I learned a lot about how React components are put together along the way!

The following also includes a full test suite to show how it works.
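My full version and its test suite aren’t reproduced here, but a simplified sketch of a renderer like this might look as follows. It treats an element as a plain `{ type, props }` object (the shape `createElement` produces) and recurses through strings, arrays, stateless functions, component classes, and plain tags:

```javascript
// A stand-in for React.createElement: an element is just { type, props },
// with any extra arguments collected as children.
function createElement( type, props ) {
    var children = Array.prototype.slice.call( arguments, 2 );
    return { type: type, props: Object.assign( {}, props, { children: children } ) };
}

function renderToString( element ) {
    if ( typeof element === 'string' || typeof element === 'number' ) {
        return String( element );
    }
    if ( Array.isArray( element ) ) {
        return element.map( renderToString ).join( '' );
    }
    var type = element.type;
    var props = element.props;
    if ( typeof type === 'function' ) {
        // A component class (assumed to have a render method and to save
        // its props in the constructor) or a stateless function.
        var rendered = type.prototype && type.prototype.render
            ? new type( props ).render()
            : type( props );
        return renderToString( rendered );
    }
    // Plain tag: serialize the non-children props as attributes.
    var attributes = Object.keys( props )
        .filter( function( key ) { return key !== 'children'; } )
        .map( function( key ) { return ' ' + key + '="' + props[ key ] + '"'; } )
        .join( '' );
    return '<' + type + attributes + '>' + renderToString( props.children ) + '</' + type + '>';
}

renderToString( createElement( 'p', { id: 'x' }, 'hello' ) );
// '<p id="x">hello</p>'
```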

Higher Order Components and Pie Recipes

Higher Order Components (“HOCs”) are the latest hotness to come out of the JavaScript idea world and land in our apps. React apps, anyway.

I really want to write: Higher Order Components are just wrappers, but that would be simplifying the concept too much. Wouldn’t it? Maybe.

Anyway, they’re wrappers. Keep your actual Component simple as pie – it takes ingredients as props and turns them into rendered flaky crust – and have the wrapper handle getting the data from wherever, massaging it, and jamming it in.

This makes the underlying Component simpler to understand, debug, and ultimately reuse elsewhere if needed. Also testing! It’s way easier to test when the data piece is separate.

After thinking about it for a while you may realize that this concept is not new. Keep a pie-making machine abstract and separate from the ingredient machine? Where have we seen this before? Functions! (Ok, other places too.) And lo, this seems to be where React is heading (has already arrived?): Components as functions.

A Component, ideally, should take its props, and nothing else (except maybe other Components), and turn those props into output. Given the same props, the output should always be the same. It can use libraries to process the data it already has, but it should not fetch new data. I mean, that was an idea React was built upon. But hey, I’d say that’s also the definition of a good function.

So if we think about our Components as functions, then we can apply function strategies to them. Mine basically boil down to: can this be done in fewer lines by extracting logic into a sub-function? Repeat as necessary until my functions are thin little beautiful pancakes. Maybe you have different strategies. Try them on Components!

Usually I see HOCs being talked about in reference to Redux and the global state tree, but my favorite thing is: who cares where the data comes from? Nest those Components and let them remain simple. The HOC could pull data from many sources, even just from a JSON file.
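In spirit, an HOC is just a function that takes a Component and returns a new Component with extra props filled in. Here’s a minimal sketch; all the names are made up, and the components are plain string-returning functions to keep it self-contained:

```javascript
// A hypothetical HOC: wraps a Component, fetches its data, and passes
// it down as a prop. The inner Component never knows where it came from.
function withIngredients( fetchIngredients, Component ) {
    return function WrappedComponent( props ) {
        var ingredients = fetchIngredients();
        return Component( Object.assign( {}, props, { ingredients: ingredients } ) );
    };
}

// The inner Component stays a pure function of its props.
function PieCrust( props ) {
    return 'A pie made with: ' + props.ingredients.join( ', ' );
}

var ConnectedPieCrust = withIngredients(
    function() { return [ 'flour', 'butter', 'water' ]; },
    PieCrust
);

ConnectedPieCrust( {} ); // 'A pie made with: flour, butter, water'
```

The data source here is a hard-coded function, but it could just as easily read a JSON file or call out to a store; PieCrust never changes either way.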

My new rule of thumb is: whenever my Component starts getting too many props (that are not directly related to its concept), when I am pulling data into my Component from a library, or when I find myself caching data in a Component’s state, there’s a strong chance I could use an HOC to smooth things out.

Give it a try! You may be surprised at how fun it can be.

An iframe without a url

Sometimes you need to display html inside an iframe, but it’s not at a URL. Perhaps you got the markup from an API endpoint or generated it yourself. Maybe the markup is from a different domain and you need to be able to manipulate its DOM without cross-origin errors. For all these reasons, I created MarkupFrame.

A React component to display raw html markup inside an iframe

In several recent projects I’ve wanted to treat the contents of an iframe as data: fetching it from an API, manipulating it directly using cheerio, and reaching into the iframe’s DOM to adjust the elements inside without hitting cross-domain errors.

Normally iframes are set up in such a way that these things are difficult, but there’s a trick:

iframe.contentWindow.document.open();
iframe.contentWindow.document.write( content );
iframe.contentWindow.document.close();

Those three lines allow us to inject html markup directly into the iframe, where the browser will render it: CSS, scripts, and all.

MarkupFrame takes those three lines of magic and wraps them in a React component to use in your application. You can install it using npm:

npm install markup-frame

Then just use the markup prop like this (JSX syntax):

<MarkupFrame markup={ '<h1>hello world</h1>' } />

There’s also a prop called onLoad which is a function that will be called when the markup is finished loading. The callback will be passed a reference to the document object of the iframe, which lets you directly manipulate the DOM (I know: isn’t that what React is supposed to prevent? Yes, but inside the iframe it’s a whole other world.)

...
render: function() {
  var onLoad = function( previewDocument ) {
    previewDocument.querySelector( 'h1' ).innerHTML = 'hello markup-frame';
  };
  return <MarkupFrame markup={ '<h1>hello world</h1>' } onLoad={ onLoad } />;
}
...

This has certainly been helpful to me. I hope it’s helpful to you too! The GitHub repo is here if you want to report any issues or contribute!

Caveat Lector:

Some JavaScript rendered inside an iframe like this doesn’t work correctly since it often makes assumptions about the page having a URL.

Following (clicking) a link inside an iframe like this will load the resulting page inside the iframe, but the iframe’s contents will then probably no longer be accessible via its contentWindow.document object without throwing a cross-domain error. For this reason it’s recommended to disable clicking on links using the onClick prop.

JavaScript: Mocking Window

When I code in JavaScript I try to avoid using the window or document objects as much as possible, but sometimes there’s just no getting around them. Bootstrapping data from a server (eg: WordPress’s wp_localize_script), manipulating the DOM directly (eg: jQuery), or listening to viewport resize events all require touching the global window object.

Why is this trouble? Because it makes testing a real challenge. The easiest code to test is a pure function, which generally means a function without side-effects, but it also means a function which gets all its data from its arguments and not from some overarching state. Any global variable, like window, is effectively global state.

Fortunately for us, it’s relatively easy to mock a window object. If you’re bootstrapping data, you can just use a plain object. If you’re doing DOM things, you can use a library like jsdom. But what if you have a bunch of modules all accessing window in different places? As soon as we start requiring those modules in our tests, we’ll see failures because window won’t exist.

My answer, as seems to be the case a lot these days, is dependency injection. That is, directly providing our window object to the code before it runs. It might be awkward to pass window to every function which might want to use it, so instead we can create a module something like the following:

var windowObj = null;

module.exports = {
    setWindow: function( newWindowObj ) {
        windowObj = newWindowObj;
    },

    getWindow: function() {
        if ( ! windowObj && typeof window === 'undefined' ) {
            throw new Error( 'No window object found.' );
        }
        return windowObj || window;
    }
};

Now in other modules that need a window we write:

var getWindow = require( './helpers/window' ).getWindow;

function getSomethingFromTheDOM() {
    return getWindow().document.querySelector( '.something' );
}

By itself, that will work just as well as using window directly, and when we want to test the code, we can write this in a test helper:

var jsdom = require( 'jsdom' ).jsdom;
var setWindow = require( './helpers/window' ).setWindow;
setWindow( jsdom().defaultView ); 

Now all the calls to getWindow() will use our mock window object instead.

Copying files… sometimes

File this one under “tools that probably only I will find useful”. In the course of my normal job I need to copy files to a synchronized directory on my computer (something like a Dropbox folder). The files are JavaScript code that has been transpiled, and copying them to the synchronized directory is what deploys them to my staging server.

Therefore, as I work, I need to: Code → Transpile → Deploy → Test → Repeat. My ideal work cycle is more like: Code → Test → Repeat. I want to be able to just write code and then reload my browser, not all that boring stuff in the middle.

The process of transpiling my code can be handled by a watcher (usually grunt-watch or watchify) so I don’t have to worry about that part. Unfortunately because of the deploy step my process still looks like: Code → Deploy → Test → Repeat. (Grumble grumble inefficient grumble.)

Now, one way I could handle this is just to move my whole project into the synchronized directory. That way the transpiled files are already in the right place and I don’t need a deploy step. That feels like cheating to me, though. It means that my project directory has to exactly mirror the deploy directory and it means that all of my working files are also being synchronized as I code; that’s a lot of unnecessary data over the wire.

Naturally I decided to automate my way out of this problem. I thought to myself: I could just add another task so that the watcher copies my transpiled code to the synchronized directory when it’s done! Ah, but I’d need to give it an absolute path, and several developers work on this project, each with different synchronized directories. That’s not ideal. And even worse is that another developer might want to handle the deploy process differently.

This led me to write copytotheplace. It’s a very simple library and command-line tool that will allow copying files to a directory by setting the destination as an environment variable or using a config file (or a command-line option or parameter if you’re using the library directly).

If no destination directory is set, the tool does nothing, which lets it sit as the last step in a build pipeline without having any effect unless specifically called for.

To hook it into my particular build tool and watcher, I wrote grunt-copytotheplace which just loads the library into a Grunt task.

Now I just put COPYTOTHEPLACE=/path/to/my/sync/directory in the .env file in my local project directory and Grunt will copy the files there every time they change. More importantly, when other developers who don’t have that option set run their version of Grunt, nothing will be copied anywhere.

I know, it’s a weird solution to a weird problem, but it was a simple way to dramatically speed up my workflow without harming others, and so for now I consider it a win. Maybe next week I’ll come up with a better way. But in the meantime, if you find this tool useful it’s up there in the cloud for all to share. Just npm install copytotheplace and away you go! (See details in the README.)

Partial application and making tea

Partial application is like making tea. The person making the tea needs two pieces of information: what kind of tea, and how many people to serve. We can save time by knowing one of those pieces of information before we begin.

Let’s say we have a tea shop. Whenever a new person comes in with a group, you need to ask them two questions: what kind of tea would they like, and how many cups do they need? After that, you know how to do the rest. Of course, sometimes the customer hems and haws about what kind of tea so it can take a while…

But then the door opens and one of your regulars comes in. You know that she always asks for a green tea, but you’re never sure who she’s coming to meet, so you still have to ask her how many cups she’d like. Because you already know the kind of tea, you only need one piece of information instead of two. You’ve kept the first piece of information in your memory. This is partial application.

When programming, you may be writing a function that takes two (or more!) arguments. If you often use that function in a particular way, you can prepare it by creating a partially applied version that only takes one argument instead of two, keeping the missing argument in memory (often through the clever use of closures). Then when you want to call it, you can call the partially applied version.
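For a two-argument function, the whole trick can be sketched in a few lines; the closure is what “remembers” the first argument. The tea-making names are, of course, just for illustration:

```javascript
// A minimal partiallyApply for two-argument functions. The returned
// function closes over firstArg, remembering it for later.
function partiallyApply( fn, firstArg ) {
    return function( secondArg ) {
        return fn( firstArg, secondArg );
    };
}

function makeTea( kind, cups ) {
    return cups + ' cups of ' + kind + ' tea';
}

var makeGreenTea = partiallyApply( makeTea, 'green' );
makeGreenTea( 3 ); // '3 cups of green tea'
```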

Partial application means taking a function and partially applying it to one or more of its arguments, but not all, creating a new function in the process. (Dave Atchley)

A little harder to explain is why you’d want to do that. Well, for me the main reason is to be able to pipe the result of a series of functions together.

That is, when you have a series of functions that need to be run on the same piece of data, you might write something like:

adjustedData = adjustData( data );
preparedData = prepareData( properties, adjustedData );
doSomething( preparedData );

That works fine, but in some cases it can get messy. Also, we’ve created two temporary variables whose entire purpose is to pass the result of one function on to the next. We can do better!

doSomething( prepareData( properties, adjustData( data ) ) );

Ok, that gets rid of the variables, but it’s way less readable. Just think what it would look like if there were ten functions in that chain! What we need is a way to pipe data from one function to the next just like the | character in UNIX shells. There is a type of function which does this, sometimes called “pipe”, “compose”, or “flow”.

composedFunction = flow(
  adjustData,
  prepareData,
  doSomething
);
composedFunction( data );

Wow, that’s so much better! It’s even easy to add or remove steps in the chain. But you may have noticed that I skipped a step: prepareData takes two arguments, so how can we get that second argument in there?

What we need is a way to transform prepareData from a function that accepts two arguments into a function that accepts one argument. This is partial application.

prepareDataWithProperties = partiallyApply( prepareData, properties );
composedFunction = flow(
  adjustData,
  prepareDataWithProperties,
  doSomething
);
composedFunction( data );

Now the partially applied version can be used in our pipe because its first argument is already saved. This is like knowing what kind of tea to make already. We go from needing two pieces of information to only needing one.

There are many ways to use partial application in practice. In JavaScript we can use Function.prototype.bind and lodash’s flow to achieve the example above. But it’s a big topic and there are a lot of other options available. I’m still learning about them myself.
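As a rough sketch of how those pieces fit together, here’s a hand-rolled flow plus bind for the partial application. The data-handling functions are made-up stand-ins:

```javascript
// flow pipes a value through each function, left to right.
function flow() {
    var fns = Array.prototype.slice.call( arguments );
    return function( input ) {
        return fns.reduce( function( value, fn ) {
            return fn( value );
        }, input );
    };
}

// Stand-in data functions for the example.
function adjustData( data ) {
    return data.trim();
}

function prepareData( properties, data ) {
    return properties.prefix + data;
}

// Partial application via bind: the first argument is saved up front.
var prepareDataWithProperties = prepareData.bind( null, { prefix: '>> ' } );

var composedFunction = flow( adjustData, prepareDataWithProperties );
composedFunction( '  hello  ' ); // '>> hello'
```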

Using React stateless components for quick prototyping

When you’re building a new app using React it’s nice to start laying out all the components you’ll want to use inside the components that you’re building. Unfortunately this can be a little slow because for each new component you have to:

  1. Create a new file
  2. Import that file
  3. Include React in the file
  4. Export a Component from the file
  5. Add a simple render method to the Component

Step 5 is really the core of what you want to do here, and I realized today that I can quickly skip the other four steps by just using React Stateless Components, which are just functions. When using ES2015 fat-arrow syntax, they can be one-liners!
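For example, a handful of placeholder components can be sketched as one-liners like this. The names are hypothetical, and I’m returning strings instead of JSX so the sketch stands alone:

```javascript
// Hypothetical one-line placeholders for components you haven't built yet.
const Header = ( props ) => '<header>' + props.title + '</header>';
const Sidebar = () => '<aside>TODO</aside>';
const Layout = ( props ) => Header( props ) + Sidebar();

Layout( { title: 'My App' } );
// '<header>My App</header><aside>TODO</aside>'
```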

Using these you can mock up your Component layout very quickly, then move them over to your new Component files one at a time as you are ready. And you may even have some of your render methods there for you!