Accepted answer

I'm a big proponent of putting async write operations in the action creators and async read operations in the store. The goal is to keep the store-state modification code in fully synchronous action handlers; this makes the handlers simple to reason about and simple to unit test. To prevent multiple simultaneous requests to the same endpoint (for example, double-reading), I move the actual request processing into a separate module that uses promises to deduplicate the requests; for example:

class MyResourceDAO {
  get(id) {
    if (!this.promises[id]) {
      this.promises[id] = new Promise((resolve, reject) => {
        // ajax handling here...
      });
    }
    return this.promises[id];
  }
}

While reads in the store involve asynchronous functions, there is an important caveat: the stores don't update themselves in the async handlers. Instead, they fire an action when (and only when) the response arrives, and the handlers for that action do the actual state modification.
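The deduplication idea above can be shown in plain JavaScript, independent of any Flux library. This is a minimal sketch under assumptions of my own: `fetchResource` stands in for whatever ajax call you use; the point is only that concurrent callers asking for the same id share one in-flight promise.

```javascript
// Memoize in-flight requests by id so that "simultaneous" reads
// of the same resource produce a single request.
var promises = {};

function fetchResource(id) {
  // Stand-in for an ajax request (assumed name, not a real API).
  return new Promise(function(resolve) {
    setTimeout(function() { resolve({id: id, name: "item " + id}); }, 10);
  });
}

function get(id) {
  if (!promises[id]) {
    promises[id] = fetchResource(id);
  }
  return promises[id];
}

// Two "simultaneous" reads share the same in-flight promise:
console.log(get(1) === get(1)); // true
```

Note that this simple version caches forever; in practice you may want to delete `promises[id]` on failure so the request can be retried.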

For example, a component might do:

getInitialState() {
  return { data: myStore.getSomeData(this.props.id) };
}

The store would have a method implemented something like this:

class Store {
  getSomeData(id) {
    if (!this.cache[id]) {
      this.cache[id] = LOADING_TOKEN;
      // LOADING_TOKEN is a unique value of some kind
      // that the component can use to know that the
      // value is not yet available.
      myResourceDAO.get(id).then(
        (response) => this.updateFromServer(id, response)
      );
    }
    return this.cache[id];
  }

  updateFromServer(id, response) {
    this.dispatcher.dispatch({
      type: "DATA_FROM_SERVER",
      payload: {id: id, data: response}
    });
  }

  // this handles the "DATA_FROM_SERVER" action
  handleDataFromServer(action) {
    this.cache[action.payload.id] = action.payload.data;
    this.emit("change"); // or whatever you do to re-render your app
  }
}



Hope that helps. :)


I have been using Binary Muse's Fluxxor ajax example. Here is my own very simple example using the same approach.

I have a simple product store, some product actions, and a controller-view component with sub-components that all respond to changes made to the product store: for instance, product-slider, product-list, and product-search components.

Fake product client

Here is the fake client, which you could substitute with a call to an actual endpoint returning products.

var ProductClient = {

  load: function(success, failure) {
    setTimeout(function() {
      var items = require('../data/product-data.js');
      success(items);
    }, 1000);
  }
};

module.exports = ProductClient;

Product store

Here is the product store; obviously this is a very minimal store.

var Fluxxor = require("fluxxor");
var constants = require("../constants/constants"); // action type constants

var store = Fluxxor.createStore({

  initialize: function(options) {
    this.productItems = [];
    this.bindActions(
      constants.LOAD_PRODUCTS_SUCCESS, this.onLoadSuccess,
      constants.LOAD_PRODUCTS_FAIL, this.onLoadFail
    );
  },

  onLoadSuccess: function(data) {
    for (var i = 0; i < data.products.length; i++) {
      this.productItems.push(data.products[i]);
    }
    this.emit("change");
  },

  onLoadFail: function(error) {
    console.log(error);
    this.emit("change");
  },

  getState: function() {
    return {
      productItems: this.productItems
    };
  }
});

module.exports = store;

Now the product actions, which make the ajax request and, on success, fire the LOAD_PRODUCTS_SUCCESS action returning products to the store.

Product actions

var ProductClient = require("../fake-clients/product-client");
var constants = require("../constants/constants"); // action type constants

var actions = {

  loadProducts: function() {
    ProductClient.load(function(products) {
      this.dispatch(constants.LOAD_PRODUCTS_SUCCESS, {products: products});
    }.bind(this), function(error) {
      this.dispatch(constants.LOAD_PRODUCTS_FAIL, {error: error});
    }.bind(this));
  }

};

module.exports = actions;

So calling this.getFlux().actions.productActions.loadProducts() from any component listening to this store will load the products.

You could imagine having different actions, though, that respond to user interactions, like addProduct(id), removeProduct(id), etc., following the same pattern.
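Such user-interaction actions could look like the sketch below. The `addProduct`/`removeProduct` names and constants are my own assumptions, not part of the original example; in Fluxxor, `this.dispatch` is injected by the flux instance, so here a fake dispatcher is bound purely to show the payloads that would be sent.

```javascript
// Hypothetical extension of the actions object following the same pattern.
var constants = {
  ADD_PRODUCT: "ADD_PRODUCT",
  REMOVE_PRODUCT: "REMOVE_PRODUCT"
};

var actions = {
  addProduct: function(id) {
    this.dispatch(constants.ADD_PRODUCT, {id: id});
  },
  removeProduct: function(id) {
    this.dispatch(constants.REMOVE_PRODUCT, {id: id});
  }
};

// Bind a fake dispatch so the example runs outside Fluxxor:
var dispatched = [];
var fake = Object.assign({
  dispatch: function(type, payload) {
    dispatched.push({type: type, payload: payload});
  }
}, actions);

fake.addProduct(42);
fake.removeProduct(42);
console.log(dispatched[0].type); // "ADD_PRODUCT"
```

The store would then bind handlers for these constants exactly as it does for LOAD_PRODUCTS_SUCCESS above.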

Hope that example helps a bit; I found this a little tricky to implement, but it certainly helped in keeping my stores 100% synchronous.


I answered a related question here: How to handle nested API calls in Flux.

Actions are not supposed to be things that cause a change. They are supposed to be like a newspaper that informs the application of a change in the outside world, and then the application responds to that news. The stores cause changes in themselves. Actions just inform them.

Bill Fisher, creator of Flux

What you should basically do is state, via actions, what data you need. If the store gets informed by such an action, it should decide whether it needs to fetch some data.

The store should be responsible for accumulating/fetching all the needed data. It is important to note, though, that after the store has requested the data and receives the response, it should trigger an action itself with the fetched data, as opposed to handling/saving the response directly.

A store could look something like this:

class DataStore {
  constructor() {
    this.data = [];

    this.bindListeners({
      handleDataNeeded: Action.DATA_NEEDED,
      handleNewData: Action.NEW_DATA
    });
  }

  handleDataNeeded(id) {
    if (neededDataNotThereYet) {
      api.data.fetch(id, (err, res) => {
        // handle the error or success here and create a
        // NEW_DATA action with the result, e.g. Action.newData(res)
      });
    }
  }

  handleNewData(data) {
    // code that saves data and emits change
  }
}

You can call for data in either the action creators or the stores. The important thing is not to handle the response directly, but to create an action in the error/success callback. Handling the response directly in the store leads to a more brittle design.
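The contrast between the two callback styles can be sketched in a few lines of plain JavaScript; all names here (`fetchData`, `fakeApi`, the action types) are illustrative assumptions, not part of any particular Flux library.

```javascript
// Brittle style (avoid): the store mutates itself in the xhr callback, e.g.
//   api.fetch(id, function(err, res) { this.data = res; this.emit("change"); });
//
// Preferred style: the callback only creates a new action; a synchronous
// action handler performs the actual mutation.
function fetchData(id, dispatch) {
  fakeApi(id, function(err, res) {
    if (err) dispatch({type: "NEW_DATA_FAIL", payload: err});
    else     dispatch({type: "NEW_DATA", payload: res});
  });
}

// Stand-in for a real API client.
function fakeApi(id, cb) { cb(null, {id: id, value: "hello"}); }

var seen = [];
fetchData(7, function(action) { seen.push(action); });
console.log(seen[0].type); // "NEW_DATA"
```

Because the mutation lives in the synchronous NEW_DATA handler, it stays trivial to unit test, which is the whole point of the pattern.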


Fluxxor has an example of async communication with an API.

This blog post talks about it and has been featured on React's blog.

I find this a very important and difficult question that is not clearly answered yet, since synchronizing frontend software with the backend is still a pain.

Should API requests be made in JSX components? Stores? Some other place?

Performing requests in stores means that if two stores need the same data for a given action, they will issue two similar requests (unless you introduce dependencies between stores, which I really don't like).

In my case, I have found it very handy to put Q promises in the payload of actions because:

  • My actions do not need to be serializable (I do not keep an event log and don't need the event-replay feature of event sourcing).
  • It removes the need to have different actions/events (request fired / request completed / request failed) and to match them up using correlation IDs when concurrent requests can be fired.
  • It permits multiple stores to listen for the completion of the same request, without introducing any dependency between the stores (though it may be better to introduce a caching layer?).
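The third point can be illustrated with a small sketch using native promises instead of Q; the dispatcher, store names, and action shape are all assumptions of mine, not a specific library's API.

```javascript
// Carry the promise in the action payload so several independent stores
// can react to the completion of the same request.
var listeners = [];
function dispatch(action) { listeners.forEach(function(l) { l(action); }); }

function loadUser(id) {
  // Stand-in for an ajax call; a real app would fire the request here.
  var promise = Promise.resolve({id: id, name: "Ada"});
  dispatch({type: "LOAD_USER", payload: {id: id, promise: promise}});
}

// Two stores subscribe to the same request without knowing about each other:
var userStore = {};
var statsStore = {requestsSeen: 0};

listeners.push(function(action) {
  if (action.type === "LOAD_USER") {
    action.payload.promise.then(function(user) { userStore[user.id] = user; });
  }
});
listeners.push(function(action) {
  if (action.type === "LOAD_USER") { statsStore.requestsSeen++; }
});

loadUser(1);
console.log(statsStore.requestsSeen); // 1
```

Only one request is fired, yet both stores observe it; neither store depends on the other.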

Ajax is evil

I think Ajax will be used less and less in the near future because it is very hard to reason about. The right way? Considering devices as part of the distributed system. I don't know where I first came across this idea (maybe in this inspiring Chris Granger video).

Think about it. For scalability, we now use distributed systems with eventual consistency as storage engines (because we can't beat the CAP theorem, and often we want to be available). These systems do not sync by polling each other (except maybe for consensus operations?) but rather use structures like CRDTs and event logs to make all the members of the distributed system eventually consistent (members will converge to the same data, given enough time).
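To make the CRDT idea concrete, here is a toy G-Counter (grow-only counter), one of the simplest CRDTs: each node increments only its own slot, merging takes the element-wise maximum, and every replica converges to the same value regardless of the order in which merges happen. This is a teaching sketch, not production CRDT code.

```javascript
// Toy G-Counter CRDT: state is {nodeId: count, ...}.
function increment(counter, nodeId) {
  var next = Object.assign({}, counter);
  next[nodeId] = (next[nodeId] || 0) + 1;
  return next;
}

// Merge is commutative, associative, and idempotent: element-wise max.
function merge(a, b) {
  var out = Object.assign({}, a);
  Object.keys(b).forEach(function(k) {
    out[k] = Math.max(out[k] || 0, b[k]);
  });
  return out;
}

function value(counter) {
  return Object.keys(counter).reduce(function(sum, k) { return sum + counter[k]; }, 0);
}

// Two replicas diverge while partitioned, then converge after merging:
var a = increment(increment({}, "phone"), "phone"); // {phone: 2}
var b = increment({}, "laptop");                    // {laptop: 1}
console.log(value(merge(a, b))); // 3
console.log(value(merge(b, a))); // 3 (merge order doesn't matter)
```

This is exactly the property that makes offline-first clients tractable: each device can keep writing locally and still converge once connectivity returns.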

Now think about what a mobile device or a browser is. It is just a member of the distributed system that may suffer from network latency and network partitioning (i.e., you are using your smartphone on the subway).

If we can build databases that tolerate network partitions and slow networks (meaning we can still perform write operations on an isolated node), we can probably build frontend software (mobile or desktop) inspired by these concepts, which works well with offline mode supported out of the box, without app features becoming unavailable.

I think we should really take inspiration from how databases work to architect our frontend applications. One thing to notice is that these systems do not perform POST, PUT, and GET ajax requests to send data to each other, but rather use event logs and CRDTs to ensure eventual consistency.

So why not do that on the frontend? Notice that the backend is already moving in that direction, with tools like Kafka massively adopted by big players. This is somehow related to event sourcing / CQRS / DDD too.

The awesome articles from the Kafka authors on log-based architectures make a convincing case.

Maybe we can start by sending commands to the server and receiving a stream of server events (through WebSockets, for example), instead of firing ajax requests.
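A toy version of that command/event-stream idea: the server (not shown) responds to commands with an ordered event log, and the client's state is just a fold over the events it has received so far. Event names and shapes here are illustrative assumptions.

```javascript
// Client state as a pure fold over a server event log.
function applyEvent(state, event) {
  switch (event.type) {
    case "ITEM_ADDED":   return state.concat([event.item]);
    case "ITEM_REMOVED": return state.filter(function(i) { return i.id !== event.id; });
    default:             return state;
  }
}

// Events as they might arrive over a WebSocket, in order:
var eventLog = [
  {type: "ITEM_ADDED", item: {id: 1, name: "apple"}},
  {type: "ITEM_ADDED", item: {id: 2, name: "pear"}},
  {type: "ITEM_REMOVED", id: 1}
];

// Any client replaying the same log converges to the same state.
var state = eventLog.reduce(applyEvent, []);
console.log(state); // [{id: 2, name: "pear"}]
```

Because the log, not the derived state, is what travels over the wire, a client that was offline can simply replay the events it missed.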

I have never been very comfortable with ajax requests, and we React developers tend to be functional programmers. I think it's hard to reason about local data that is supposed to be the "source of truth" of your frontend application while the real source of truth is actually the server database: your "local" source of truth may already be outdated when you receive it, and it will never converge to the real source of truth's value unless you press some lame refresh button... Is this engineering?

However, it's still a bit hard to design such a thing, for some obvious reasons:

  • Your mobile/browser client has limited resources and cannot necessarily store all the data locally (thus sometimes requiring polling with a content-heavy ajax request).
  • Your client should not see all the data in the distributed system, so the events it receives need to be filtered somehow for security reasons.
