QQ for anyone who does #PyTorch / #TensorFlow - how do you handle encoding words as one-hot character sequences?
Do you just pad the final tensor? Currently I can get it to work without minibatches, but the second I put it into a minibatch, it wants all the tensors to be the same shape, and obviously a word of length 6 is going to have a different shape from a word of length 9.
Like I don't _hate_ having to pad it, it's just that padding pushes the one-hot into data loading rather than the training loop, whereas in the loop I could do it in one go for the whole batch.
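For what it's worth, here's a minimal sketch of one common pattern that gets both, assuming PyTorch: store each word as a tensor of integer character indices, pad those (cheap) with `pad_sequence`, and defer the one-hot to the batch level in the loop. The `vocab` mapping and `encode` helper are hypothetical stand-ins, not anything from the original post.

```python
import torch
import torch.nn.functional as F
from torch.nn.utils.rnn import pad_sequence

# Hypothetical character vocabulary; index 0 is reserved for padding.
vocab = {ch: i + 1 for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz")}
num_classes = len(vocab) + 1  # +1 for the padding index

def encode(word):
    # Keep each word as a 1-D tensor of indices, not a one-hot matrix.
    return torch.tensor([vocab[ch] for ch in word], dtype=torch.long)

words = ["tensor", "minibatch"]  # lengths 6 and 9
seqs = [encode(w) for w in words]

# Right-pad every sequence to the longest word in this batch.
batch = pad_sequence(seqs, batch_first=True, padding_value=0)  # shape (2, 9)

# One-hot the whole padded batch in one go, inside the training loop.
one_hot = F.one_hot(batch, num_classes=num_classes).float()    # shape (2, 9, 27)
```

The nice part of this split is that the data loader only ever handles small integer tensors, and the expensive (num_classes-wide) one-hot happens once per batch rather than once per word.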