score:0

Since you said you have jQuery available, we can use its Deferred feature to manage the two asynchronous operations you are looking at.

We are doing this by converting D3's callback-based approach into a promise-based approach.

For that, we set up two helper functions that wrap D3's .csv and .json helpers and return jQuery promises:

d3.csvAsync = function (url, accessor) {
    var result = $.Deferred();

    // resolve the Deferred with the parsed rows; reject if D3 reports no data
    this.csv(url, accessor, function (data) {
        if (data) {
            result.resolve(data);
        } else {
            result.reject("failed to load " + url);
        }
    });
    return result.promise();
};

d3.jsonAsync = function (url) {
    var result = $.Deferred();

    // d3.json hands (error, data) to its callback; map that onto reject/resolve
    this.json(url, function (error, data) {
        if (error) {
            result.reject("failed to load " + url + ", " + error);
        } else {
            result.resolve(data);
        }
    });
    return result.promise();
};

Now we can invoke the requests in parallel and store them in variables. We can also use .then() to transform the results on the fly:

var colDataReq = d3.jsonAsync("colData.json");
var xyDataReq = d3.csvAsync("xyData.csv").then(function (data) {
    data.forEach(function (d) {
        d.x = +d.x;
        d.y = +d.y;
    });
    return data;
});

Finally, we use the $.when() utility function to wait on both resources and have them handled by a single callback.

$.when(xyDataReq, colDataReq).done(function (xyData, colData) {
    // the results arrive in the same order as the arguments to $.when()
    var combinedData = d3.zip(colData, xyData);

    // now do something with combinedData
}).fail(function (error) {
    console.warn(error);
});

This way we can avoid nesting (and therefore needlessly serializing) the two requests.
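For comparison, this is roughly what the nested, serialized version would look like (a sketch, using the same filenames and callback signatures as the helpers above):

// nested (serialized) version: the JSON request cannot even start until the
// CSV has finished downloading, so the waiting times add up instead of overlapping
d3.csv("xyData.csv", function (xyData) {
    d3.json("colData.json", function (error, colData) {
        var combinedData = d3.zip(colData, xyData);
        // now do something with combinedData
    });
});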

Also, since the requests are stored in variables, we can simply re-use them without having to change our existing functions. For example, if you wanted to log the contents of one of the requests, you could do this anywhere in your code:

xyDataReq.done(function (data) {
    console.log(data);
});

and it would run as soon as xyDataReq has resolved (or immediately, if the data has already arrived).

Another consequence of this approach is that — since we have decoupled loading a resource from using it — we can perform the loading very early, even before the rest of the page has rendered. This can save additional time.
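As a rough sketch (drawChart here stands in for whatever rendering code you already have), the requests can be started at the very top of the script, while combining and rendering wait for the DOM:

// start both downloads immediately, before the DOM is ready
var colDataReq = d3.jsonAsync("colData.json");
var xyDataReq = d3.csvAsync("xyData.csv");

// combine and render once the DOM is available; if the data already arrived
// while the page was still loading, .done() fires right away
$(function () {
    $.when(xyDataReq, colDataReq).done(function (xyData, colData) {
        drawChart(d3.zip(colData, xyData)); // drawChart: your own rendering function
    });
});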

score:0

D3.js can actually process a JavaScript object instead of a file. If you replace the file name with the name of a variable holding that object (say, a JSON array of data), as in d3.json(myData, ...), it will have access to that data.

Let's say we are using jQuery and we also include a helper library called Papa Parse (it makes life easier).

Step 1. Turn your CSV data into JSON data and store it in a variable A:

// Papa.parse returns a result object; header: true gives rows keyed by column name, in .data
var A = Papa.parse(yourCSV, { header: true }).data;

Step 2. Read your JSON data and store it in a variable called B

var B;
$(document).ready(function () {
    $.getJSON('yourJSON.json', function (json) {
        // note: this request is asynchronous, so B is only populated once it completes
        B = json;
    });
});

Step 3. Combine datasets A and B into a variable C. Important: you might need to reshape the parsed CSV rows stored in A into the structure you expect before handing the combined object to D3 later.

// shallow-merge the two datasets into a fresh object
var C = {};
$.extend(C, A, B);
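One common bit of that reshaping: Papa Parse does not convert types by default, so numeric columns come back as strings. A minimal sketch, assuming hypothetical numeric x and y columns:

// coerce the string values from the CSV into numbers before merging
A.forEach(function (d) {
    d.x = +d.x;
    d.y = +d.y;
});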

Step 4. Give C to D3

d3.json(C, function(error, jsonData) {
  // Use data here to do stuff
});

I've used the above as a workaround in my own projects.

You might also be able to call d3.json inside the d3.csv callback, but I haven't tried this before:

d3.csv("A.csv", function(errorA, dataA) {
  d3.json("B.json", function(errorB, dataB) {
    // both dataA and dataB are available here
  });
});
