

Qbix promotes several coding patterns in Javascript, which simplify coding and make your apps more efficient at the same time. They are often used in the Qbix core itself.


Let's start with the simplest one. If you want to normalize some strings, so they use a standard set of characters, you'd do it like this in Javascript:

var normalized = Q.normalize(name);

And the corresponding operation in PHP:

$normalized = Q_Utils::normalize($name);

This is useful for generating identifiers, and other things besides.
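To make the idea concrete, here is a minimal sketch of the kind of normalization involved (assumed behavior: lowercase everything and collapse any run of characters outside [a-z0-9] into a single underscore; the real Q.normalize may differ in details):

```javascript
// Illustrative only -- assumes lowercase + underscore replacement,
// which is a common convention for generating identifiers.
function normalize(text) {
  return String(text)
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '_');
}

normalize("John Smith"); // "john_smith"
```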


Suppose you want to provide an existing function as a callback, but ensure it's called at most once. Simply wrap it as follows:
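A plain-JavaScript sketch of what such a wrapper does (illustrative only; the actual Qbix wrapper may have a different name and more features):

```javascript
// Returns a function that calls fn at most once;
// subsequent calls are silently ignored.
function once(fn) {
  var called = false;
  return function () {
    if (called) return;
    called = true;
    return fn.apply(this, arguments);
  };
}

var greet = once(function (name) { return "hi " + name; });
greet("a"); // "hi a"
greet("b"); // ignored, returns undefined
```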



Now suppose your app has an "autocomplete" tool, which sends an asynchronous request to the server as you type in a textbox, and shows a list of suggestions with the results. The network latency can vary between requests, and you want to avoid having the list end up showing the results of an earlier request, overwriting results from a request that was sent later.

More generally, in situations where you process the response from a request, you'll often want to ignore responses to earlier requests if later requests were already made. Here is how you do that in Qbix:

var ordinal = Q.latest(key);
requestSomeResults(function (err, results) {
  // see if this is still the latest ordinal
  if (!Q.latest(key, ordinal)) return;
  // otherwise, show latest results on the client
});

Requests under the same key share the same incrementing ordinal. In place of a key, you can pass a reference to a tool, just like elsewhere in Q.
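The bookkeeping behind this pattern can be sketched in a few lines (an illustrative simplification, assuming the semantics shown above: one argument issues a fresh ordinal for the key, two arguments check whether that ordinal is still the latest one issued):

```javascript
// Per-key incrementing ordinals, as used by the latest-wins pattern.
var ordinals = {};
function latest(key, ordinal) {
  if (ordinal === undefined) {
    // issue the next ordinal for this key
    return ordinals[key] = (ordinals[key] || 0) + 1;
  }
  // check whether this ordinal is still the latest
  return ordinals[key] === ordinal;
}
```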

Another way to make sure you do operations in the right order is to handle incoming messages.

Q.debounce and Q.throttle

You can cut down on the frequency a function is called by wrapping it with one of these two functions. This is useful for wrapping handlers for rapidly firing events, such as keydown, mousemove, etc.

Q.debounce(milliseconds) is like holding an elevator door open for a little while after each person comes in, and only starting the elevator ride when no one has come in for that many milliseconds.

Q.throttle(milliseconds) is like having an elevator that will only take one person per that many milliseconds.
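The two elevator behaviors can be sketched in plain JavaScript (illustrative simplifications, assuming signatures of the form wrap(original, milliseconds); the real Q.debounce and Q.throttle support additional options):

```javascript
// Debounce: run fn only after ms of silence since the last call.
function debounce(fn, ms) {
  var timer = null;
  return function () {
    var self = this, args = arguments;
    clearTimeout(timer);
    timer = setTimeout(function () {
      fn.apply(self, args); // no one came in for ms, start the ride
    }, ms);
  };
}

// Throttle: run fn at most once per ms; extra calls are dropped.
function throttle(fn, ms) {
  var last = 0;
  return function () {
    var now = Date.now();
    if (now - last < ms) return; // too soon, drop this call
    last = now;
    return fn.apply(this, arguments);
  };
}
```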

Q.each and Q.queue

You can iterate most containers (Arrays, Objects, Strings) using the function

Q.each(container, callback, options)

Q.handle is a general-purpose invoker that can take functions, Q.Event objects, URLs, etc. Since Q.each calls Q.handle for each callback, the callbacks can themselves be any of these types of things.

During the iteration, the callback receives the arguments (key, value, container), and the value is also passed as this to the callback.

Q.each also accepts options such as "ascending" and "numeric" which allow you to easily iterate in ascending or descending key order.

You can also use Q.each() to set up "for loops", by calling it in the following form:

Q.each(from, to, step, callback)
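The for-loop form above can be sketched as follows (an illustrative simplification, assuming inclusive bounds and support for negative steps; the real Q.each may handle the edge cases differently):

```javascript
// Iterate from `from` to `to` (inclusive), stepping by `step`.
function eachRange(from, to, step, callback) {
  if (step > 0) {
    for (var i = from; i <= to; i += step) callback(i);
  } else {
    for (var j = from; j >= to; j += step) callback(j);
  }
}

var seen = [];
eachRange(1, 5, 2, function (i) { seen.push(i); }); // [1, 3, 5]
```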

By combining Q.each(container, handler) with Q.queue(handler, milliseconds), you can schedule asynchronous execution of handlers. For example, milliseconds=0 would execute them "as soon as possible" asynchronously.


You'll often want to have a function execute as soon as all the objects it needs are available. For example, you may want to fetch a User and a Stream from the database, and do something when both of them become available.

To do this, simply create Q.Pipe objects and add handler functions to the pipes. Each time the pipe is run, the handlers may be called, depending on what was filled. This is best shown by example, so here is a basic one:

var p = new Q.Pipe(
  ['user', 'stream'],
  function handler(args, subjects) {
    // passed args in args.user, args.stream
    // "this" in subjects.user, subjects.stream
  }
);

The Q.Pipe objects returned are intended to be middleware — they work with regular functions that expect callbacks. For example, if you had a function (say, a hypothetical fetchFromDatabase) to fetch objects from a database, you would generate callbacks using the pipe, and pass them in:

fetchFromDatabase(
  "SELECT * FROM user WHERE user_id = 2",
  p.fill('user')
);
fetchFromDatabase(
  "SELECT * FROM stream WHERE publisher_id = 2",
  p.fill('stream')
);
Whichever callback is called first, only one of ['user', 'stream'] will be filled, and the handler in the pipe won't run. Only when both are filled will it run.
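The fill-then-run behavior can be sketched in plain JavaScript (an illustrative simplification, not the real Q.Pipe, which supports events, counts, and more):

```javascript
// Each fill(field) returns a callback that records its arguments;
// run() fires any handler whose required fields are all filled.
function Pipe() {
  this.filled = {};
  this.handlers = [];
}
Pipe.prototype.fill = function (field) {
  var pipe = this;
  return function () {
    pipe.filled[field] = Array.prototype.slice.call(arguments);
    pipe.run();
  };
};
Pipe.prototype.add = function (fields, handler) {
  this.handlers.push({fields: fields, handler: handler, done: false});
};
Pipe.prototype.run = function () {
  var pipe = this;
  this.handlers.forEach(function (h) {
    if (h.done) return;
    var ready = h.fields.every(function (f) {
      return f in pipe.filled;
    });
    if (ready) {
      h.done = true;
      h.handler(pipe.filled); // args keyed by field name
    }
  });
};
```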

You can add handlers to a pipe using pipe.add(..., handler), where the handler can be preceded by:

  • An array of strings -- as in the example above. Until all the names are filled, the handler won't run.
  • An array or hash containing objects, followed by the name of a Q.Event property to wait for on all the objects.
  • A number -- this is the maximum number of times the function would be called. Useful e.g. for getting just "the first 5" things.

Here is an example of an array or object, followed by the name of an event, followed by a number:

p.add(tools, "state.onRefresh", 1,
  function (args, subjects) { ... }
);

A simpler method called on can be used to add handlers that wait for only one field, and execute only once. You can use it like Javascript Promises:

p = new Q.Pipe();
Q.Users.get(userId, p.fill("user"));
p.on("user", function (err, user) {
  // Here, the arguments and context
  // are the same ones passed to the callback
  // closure returned by p.fill("user").
});

You can also run through the pipe at any time with pipe.run() — you will often want to do this right after you add some handlers, in case some objects in the pipe have already been filled.


Suppose you have a function in Javascript that gets some objects and then invokes a callback, passing it the results. This function is a getter. Some getters might make a request over the internet for some data. Other getters may request objects from the local file system, or the database. Qbix has a few tricks to turn these getters into really useful functions. All you have to do is wrap them like this:

function getStuff (arg1, arg2, cbGood, cbBad) {
  // This is the function you'd normally write
  // it looks at the args, and makes a request
  // to the server. When the response arrives,
  // it passes the results to one of the
  // callbacks.
}

var options = {
  cache: new Q.Cache.document("First.getStuff"),
  throttle: "First.getStuff"
};

// Now, let's expose a smarter getter:
First.getStuff = Q.getter(getStuff, options);

The options include:

  • cache - pass false here to turn off caching, or an object to do your own caching
  • throttle - you can pass a string id to throttle on, or an object to do your own throttling
  • throttleSize - defaults to 10, but you can choose your own throttle size

By wrapping getters with Q.getter, the resulting function sports a lot of new features:

  • If the object has already been obtained and cached, it is passed to the callback immediately.
  • If the object has been requested but not yet obtained, the callback is placed in a queue and invoked when the object is returned.
  • Throttling. Only a certain number of requests will be sent at one time, and others will have to wait until they return.

Inside the original callback function(s), you can check Q.getter.usingCache if you want to know whether the callback is being called with a cached result.
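The first two features — caching and waitlisting — can be sketched like this (an illustrative simplification keyed by a single argument; the real Q.getter keys on all arguments and adds throttling):

```javascript
// Wrap an (key, callback)-style getter with a cache and a waitlist.
function getter(original) {
  var cache = {}, waiting = {};
  return function (key, callback) {
    if (key in cache) {
      return callback(null, cache[key]); // cached: call back immediately
    }
    if (waiting[key]) {
      return waiting[key].push(callback); // in flight: join the waitlist
    }
    waiting[key] = [callback];
    original(key, function (err, result) {
      if (!err) cache[key] = result;
      var callbacks = waiting[key];
      delete waiting[key];
      callbacks.forEach(function (cb) { cb(err, result); });
    });
  };
}
```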


Do you really like the Promises API? Not all browsers support it, so Q doesn't require it in order to work. In fact, Q's pipe and getter patterns are designed to achieve similar things, but without the restrictions of having only one argument per callback, no "this" context, and so on. However, if you want to promisify some functions, simply call Q.promisify(f) and it will use the Promise constructor, if any, pointed to by Q.Promise. Doing this probably adds a bit of overhead to every function call, so you have to do it yourself for each function you want to use promises with:

Q.Users.get = Q.promisify(Q.Users.get);
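A sketch of what such a promisify wrapper typically does (assuming the common error-first (err, result) callback convention; the real Q.promisify may handle multiple results and other conventions differently):

```javascript
// Wrap a callback-last function so it returns a Promise instead.
function promisify(f) {
  return function () {
    var self = this, args = Array.prototype.slice.call(arguments);
    return new Promise(function (resolve, reject) {
      args.push(function (err, result) {
        if (err) reject(err); else resolve(result);
      });
      f.apply(self, args);
    });
  };
}

var doubled = promisify(function (x, cb) { cb(null, x * 2); });
doubled(21).then(function (v) { /* v is 42 */ });
```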


Your getter functions can also batch many requests for data into one "batch request". This can be very useful when, for example, you want to minimize the number of requests to a server. This requires the server side to implement batch support for that particular type of request. Here is how you'd make it happen:

function getStuff (pubId, streamName, cb) {
  // This is the function you'd normally write
  // to retrieve stuff and call a callback.
  // But this time, we replace straight requests
  // to the server with batch functions:

  var func = batchFunction(pubId, streamName);
  func.call(this, pubId, streamName, callback);

  function callback(errors, content) {
    var msg;
    if (msg = Q.firstErrorMessage(errors)) {
      return alert(msg); // or report it another way
    }
    console.log(content.data); // "data" slot
    console.log(content.foo); // "foo" slot
  }
}

function batchFunction(pubId, streamName) {
  // First, let's compute the request base URL:
  var baseUrl = Q.baseUrl({
    publisherId: pubId,
    streamName: streamName
  });

  // Now invoke the batcher factory
  // to return a batcher function
  // which hits the First/batch action
  return Q.batcher.factory(/* ... */);
}
batchFunction.functions = {};

// Now, let's expose a smarter getter:
First.getStuff = Q.getter(getStuff, options);

Now, whenever someone calls First.getStuff(pubId, streamName), the following occurs:

  1. A base URL is computed, to determine which host to send the request to
  2. If there are 10 requests outstanding, then the batch request is made
  3. Otherwise, a timer is started. If no other requests are made within 50ms, the batch request is made.
  4. When the batch response arrives, the appropriate callbacks are called one by one
  5. The response is then handled by the wrapper function produced by Q.getter, which deals with caching, waitlisting, throttling, etc.

The default throttleSize for Q.getter is 100, and the default max for Q.batcher is 10. This means that if you request 200 distinct uncached objects in quick succession, the first 100 will be requested via 10 batch requests of 10 objects each, and then the second 100 will be requested the same way. Of course, you can customize these options as you need.
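The timer-or-threshold logic in steps 2–4 can be sketched as follows (an illustrative simplification; sendBatch is a hypothetical transport function, and the real Q.batcher has more options and error handling):

```javascript
// Accumulate calls; flush when `max` are pending or `ms` elapse.
function batcher(sendBatch, max, ms) {
  var pending = [], timer = null;
  function flush() {
    clearTimeout(timer);
    timer = null;
    var batch = pending;
    pending = [];
    // one request for the whole batch; fan results back out
    sendBatch(batch.map(function (p) { return p.args; }),
      function (results) {
        batch.forEach(function (p, i) { p.callback(results[i]); });
      });
  }
  return function (args, callback) {
    pending.push({args: args, callback: callback});
    if (pending.length >= max) return flush(); // threshold reached
    if (!timer) timer = setTimeout(flush, ms); // else wait a little
  };
}
```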

Back end support

On the back end, you need to implement a handler for the batch requests. In this case, it would be a PHP handler for the action named "First/batch":


function First_batch_post()
{
  if (empty($_REQUEST['batch'])) {
    throw new Q_Exception_RequiredField(
      array('field' => 'batch')
    );
  }
  $batch = json_decode(
    $_REQUEST['batch'], true
  );

  if (empty($batch['args'])) {
    throw new Q_Exception_RequiredField(
      array('field' => 'args')
    );
  }

  // Gather object ids to fetch
  $toFetch = array();
  foreach ($batch['args'] as $args) {
    if (count($args) < 2) continue;
    $toFetch[] = $args[1];
  }

  // Fetch a bunch of objects at once

  // Now, build the result
  $result = array();
  foreach ($batch['args'] as $args) {
    try {
      // override request info
      Q_Request::$slotNames_override = $args[1];
      Q_Request::$method_override = 'GET';

      // now execute that action handler
      $action = $args[0];

      // now append the result to the output
      $slots = Q_Response::slots(true);
      $result[] = compact('slots');
    } catch (Exception $e) {
      $result[] = array(
        'errors' => Q_Exception::buildArray($e)
      );
    }

    // restore request info
    Q_Request::$slotNames_override = null;
    Q_Request::$method_override = null;
  }

  // Return the results to the client
  Q_Response::setSlot('batch', $result);
}


The Q.Cache class represents a cache in which objects can be stored. Such caches are almost always used to support getters. You can choose between three types of storage:

var name = "First", // the name of the new cache
    max = 100, // maximum number of items
    a, b, c;

a = Q.Cache.document(name, max); // page lifetime
b = Q.Cache.session(name, max); // sessionStorage
c = Q.Cache.local(name, max); // localStorage

Each Q.Cache object supports the following methods:

  • cache.get: get an item by args
  • cache.set: set an item by args
  • cache.remove: remove an item
  • cache.clear: clear the cache
  • cache.each: enumerate by first few args

If you are using getters, you would almost never need to use these methods directly. The only time you'll really need to access these methods is when you want to manually update the cache for a getter, for example if new updates have arrived.
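As an illustration of how an args-keyed cache with a maximum size can work, here is a sketch (assumptions: entries are keyed by JSON-serializing the argument list, and the oldest entry is evicted once max is exceeded; the real Q.Cache differs in details):

```javascript
// A tiny args-keyed cache with oldest-first eviction.
function Cache(name, max) {
  this.name = name;
  this.max = max;
  this.entries = {};
  this.order = [];
}
Cache.prototype.set = function (args, value) {
  var key = JSON.stringify(args);
  if (!(key in this.entries)) {
    this.order.push(key);
    if (this.order.length > this.max) {
      delete this.entries[this.order.shift()]; // evict oldest
    }
  }
  this.entries[key] = value;
};
Cache.prototype.get = function (args) {
  return this.entries[JSON.stringify(args)];
};
```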

In fact, the more high-level Streams API takes care of dealing with getters and caches underneath, so you can just focus on your data. It encourages you to just write handlers for new messages that may arrive from the server.