This switches our sklearn.DecisionTreeClassifier serialization logic to account for multi-valued leaves in the tree.
The key difference between our inference and DecisionTreeClassifier is that we run a softmax over the leaf values, whereas sklearn simply normalizes the results.
This means the probabilities we return will differ from sklearn's.
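For illustration, here is a minimal sketch of the two behaviours; the leaf values and the `softmax` helper below are made up for the example and are not taken from the serialization code:

```python
import numpy as np


def softmax(x):
    # Numerically stable softmax over a leaf's class values.
    e = np.exp(x - np.max(x))
    return e / e.sum()


# Hypothetical leaf values (e.g. weighted class counts at a leaf node).
leaf_values = np.array([3.0, 1.0, 0.0])

# sklearn's DecisionTreeClassifier.predict_proba normalizes the counts.
sklearn_probs = leaf_values / leaf_values.sum()  # [0.75, 0.25, 0.0]

# Our serialized model runs a softmax over the same leaf instead.
our_probs = softmax(leaf_values)                 # ~[0.84, 0.11, 0.04]

print(sklearn_probs, our_probs)
```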
This improves the user-facing functions and classes for uploading PyTorch NLP models to Elasticsearch.
Previously it was difficult to wrap your own module for uploading to Elasticsearch.
This commit splits some classes out, adds new ones, and adds tests showing how to wrap some simple modules.
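As a rough, hypothetical sketch of what wrapping a simple module looks like (the wrapper class, model, and file name below are illustrative only; the actual wrapper classes added here are assumed to live under `eland.ml.pytorch`, and the new tests show the real usage):

```python
import torch


class SimpleWrapper(torch.nn.Module):
    """Wraps an arbitrary inner module behind a single forward()."""

    def __init__(self, inner: torch.nn.Module):
        super().__init__()
        self.inner = inner

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        # Delegate to the wrapped module; a real wrapper would also apply
        # any task-specific post-processing here.
        return self.inner(input_ids.float())


inner = torch.nn.Linear(4, 2)  # stand-in for a real model
wrapped = SimpleWrapper(inner)

# TorchScript-trace the wrapped module so it can be serialized and uploaded.
example_input = torch.zeros(1, 4, dtype=torch.long)
traced = torch.jit.trace(wrapped, example_input)
traced.save("simple_wrapped_model.pt")
```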
This adds more explicit types for our NLP tasks and tokenization configurations.
This is the first step toward allowing users to more easily import their own transformer models from sources other than Hugging Face.
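As a sketch of what such types might look like in practice, the dataclasses below model a task type and a tokenization configuration; the class and field names are hypothetical and are not the actual types added in this change:

```python
from dataclasses import dataclass
from enum import Enum


class TaskType(Enum):
    FILL_MASK = "fill_mask"
    NER = "ner"
    TEXT_CLASSIFICATION = "text_classification"
    TEXT_EMBEDDING = "text_embedding"


@dataclass
class TokenizationConfig:
    """Base tokenization settings shared by all tokenizers."""
    do_lower_case: bool = False
    max_sequence_length: int = 512


@dataclass
class BertTokenizationConfig(TokenizationConfig):
    """BERT WordPiece-specific tokenization settings."""
    with_special_tokens: bool = True


config = BertTokenizationConfig(do_lower_case=True)
print(TaskType.NER.value, config)
```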