Transforms each text in `texts` to a sequence of integers.
Only the top `num_words` most frequent words are taken into account, and only words known to the tokenizer are included.
Arguments: texts: a list of texts (strings).
Returns: a generator that yields one integer sequence per input text.
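The behavior described above can be sketched in pure Python. This is a minimal illustration, not the actual library implementation: it assumes a `word_index` mapping (word to integer rank, 1 being the most frequent) and simple whitespace tokenization.

```python
def texts_to_sequences_generator(texts, word_index, num_words=None):
    """Yield one list of integers per text.

    Words missing from `word_index` are skipped, as are words whose
    rank falls outside the top `num_words` most frequent words.
    This is an illustrative sketch, not the real tokenizer.
    """
    for text in texts:
        sequence = []
        for word in text.lower().split():
            index = word_index.get(word)
            if index is None:
                continue  # word unknown to the tokenizer: skipped
            if num_words is not None and index >= num_words:
                continue  # not among the top `num_words` words: skipped
            sequence.append(index)
        yield sequence


# Usage with a hypothetical word_index:
word_index = {"the": 1, "cat": 2, "sat": 3, "mat": 4}
gen = texts_to_sequences_generator(
    ["the cat sat", "the dog sat on the mat"], word_index, num_words=4
)
sequences = list(gen)
print(sequences)  # unknown words ("dog", "on") and rank >= 4 ("mat") are dropped
```

Because the function is a generator, sequences are produced lazily, one text at a time, which keeps memory use flat on large corpora.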