score:0

I'd guess the two datasets are sometimes returning different types. When the items are Tensors, the default collate function stacks them, and they all have to be the same shape. If they're something like strings, it makes a tuple out of them instead. So this sounds like one of your datasets is sometimes returning something that's not a tensor. I'd put some asserts on the output of your datasets to check that they're doing what you expect, or dive in with pdb.
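
For example, a quick sanity check over the dataset can catch the offending index (a minimal sketch; it assumes each item is an (input, label) pair, and dataset stands for whichever of your two datasets you want to check):

import torch

# Hypothetical sanity check: every item should be a Tensor with one
# consistent shape, otherwise the default collate will choke on it.
expected_shape = None
for i in range(len(dataset)):
    sample, label = dataset[i]  # assumes (input, label) items
    assert isinstance(sample, torch.Tensor), f"item {i}: got {type(sample)}"
    if expected_shape is None:
        expected_shape = sample.shape
    assert sample.shape == expected_shape, f"item {i}: shape {sample.shape}"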

score:0

Adding to @Leopd's answer, you can pass a custom collate_fn to the DataLoader. The idea is that in the collate_fn you define how the individual examples should be combined into a batch. Since you are on torch 1.3.1, make sure you are looking at that version of the documentation.
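
For instance, here is a rough sketch of a custom collate_fn that pads variable-length 1-D inputs instead of relying on the default stacking (the padding logic and the (input, label) item structure are just assumptions for illustration):

import torch
from torch.utils.data import DataLoader

def pad_collate(batch):
    # batch is a list of (input, label) pairs as returned by the dataset.
    inputs, labels = zip(*batch)
    max_len = max(x.size(0) for x in inputs)
    # Pad every input with zeros up to the longest one in the batch.
    padded = torch.zeros(len(inputs), max_len, dtype=inputs[0].dtype)
    for i, x in enumerate(inputs):
        padded[i, :x.size(0)] = x
    return padded, torch.tensor(labels)

loader = DataLoader(dataset, batch_size=32, collate_fn=pad_collate)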

Let me know if this helps or if you have any follow-up questions :)

score:15

If I got your question right, you have train and dev sets (and their corresponding loaders) as follows:

from torch.utils.data import DataLoader

train_set = CustomDataset(...)   # your training split
train_loader = DataLoader(dataset=train_set, ...)
dev_set = CustomDataset(...)     # your dev/validation split
dev_loader = DataLoader(dataset=dev_set, ...)

And you want to concatenate them in order to use train+dev as the training data, right? If so, you can simply call:

from torch.utils.data import ConcatDataset

train_dev_sets = ConcatDataset([train_set, dev_set])  # one dataset spanning both
train_dev_loader = DataLoader(dataset=train_dev_sets, ...)

The train_dev_loader is the loader containing data from both sets.

Now, make sure both datasets return samples with the same shapes and types, that is, the same number of features, the same kind of labels (categories or value ranges), and so on; otherwise batching will fail once items from the two sets get mixed.
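
A quick way to verify that before training (a sketch, assuming each item is an (input, label) pair):

# Hypothetical compatibility check before concatenating the two sets.
x_train, y_train = train_set[0]
x_dev, y_dev = dev_set[0]
assert x_train.shape == x_dev.shape, "feature shapes differ between sets"
assert type(y_train) == type(y_dev), "label types differ between sets"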