AutoML targets solving the problem once the labels or targets one wants to predict are well defined and available. Feature engineering focuses on generating features given a dataset, labels, and targets. Both assume that the target a user wants to predict is already defined and computed. In most real-world scenarios, that is something a data scientist has to do: define an outcome to predict and create labeled training examples. We structured this process and called it prediction engineering (a play on the already well-defined process of feature engineering). This library provides an easy way for a user to define the target outcome and automatically generate training examples from relational, temporal, multi-entity datasets.
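To make the idea concrete, here is a minimal, self-contained sketch of prediction engineering: slide a time window over each entity's event history, apply a labeling function to each slice, and emit (entity, cutoff time, label) triples. All names here (`make_label_times`, `total_spent`) are illustrative, not the Compose API.

```python
from datetime import datetime, timedelta

# Illustrative transaction log: (customer_id, time, amount).
transactions = [
    (1, datetime(2024, 1, 1, 9), 20.0),
    (1, datetime(2024, 1, 1, 10), 35.0),
    (1, datetime(2024, 1, 2, 9), 15.0),
    (2, datetime(2024, 1, 1, 11), 50.0),
    (2, datetime(2024, 1, 3, 8), 5.0),
]

def total_spent(rows):
    """Labeling function: the outcome to predict over one data slice."""
    return sum(amount for _, _, amount in rows)

def make_label_times(records, labeling_function, window):
    """Toy stand-in for a label maker's search: one (cutoff, label)
    pair per non-empty window per entity."""
    by_entity = {}
    for rec in records:
        by_entity.setdefault(rec[0], []).append(rec)
    label_times = []
    for entity, rows in by_entity.items():
        rows.sort(key=lambda r: r[1])
        start, end = rows[0][1], rows[-1][1]
        while start <= end:
            window_rows = [r for r in rows if start <= r[1] < start + window]
            if window_rows:
                label_times.append((entity, start, labeling_function(window_rows)))
            start += window
    return label_times

labels = make_label_times(transactions, total_spent, timedelta(days=1))
```

Each triple pairs a cutoff time (features may only use data before it) with the label computed from data after it, which is exactly the "label times" representation discussed below.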
In most Kaggle competitions the target to predict is already defined. In many cases, they represent training examples the same way we do, as "label times" (see here and here). Compose is a step prior to where Kaggle starts. Indeed, it is a step that Kaggle, or the company sponsoring the competition, might have to do or would have done before publishing the competition.
In many cases, setting up a prediction problem is done independently, before the machine learning even gets started. This has resulted in a very skewed availability of datasets with already-defined prediction problems and labels. It also often results in a data scientist not knowing how the label was defined. By opening up this part of the process, we enable data scientists to define problems more flexibly, explore more problems, and solve problems to maximize the end goal -
If you already have label times, you don't need LabelMaker and search. However, you can still use the label transforms functionality of Compose to apply a lead and a threshold, as well as to balance labels.
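As a rough sketch of what these transforms do to a list of label times (this is an illustration of the concepts, not Compose's actual transform API): a lead shifts each cutoff earlier so there is a gap before the outcome window, a threshold turns a continuous label into a binary one, and balancing downsamples the majority class.

```python
import random
from datetime import datetime, timedelta

# Label times: (entity_id, cutoff_time, continuous_label); values illustrative.
label_times = [
    (1, datetime(2024, 1, 1), 55.0),
    (1, datetime(2024, 1, 2), 15.0),
    (2, datetime(2024, 1, 1), 50.0),
    (2, datetime(2024, 1, 2), 5.0),
    (3, datetime(2024, 1, 1), 8.0),
]

def apply_lead(labels, lead):
    """Shift each cutoff earlier so features are built well before the outcome."""
    return [(e, t - lead, y) for e, t, y in labels]

def threshold(labels, value):
    """Turn a continuous label into a binary one."""
    return [(e, t, y >= value) for e, t, y in labels]

def balance(labels, seed=0):
    """Downsample the majority class to match the minority class."""
    pos = [l for l in labels if l[2]]
    neg = [l for l in labels if not l[2]]
    minority, majority = sorted([pos, neg], key=len)
    random.Random(seed).shuffle(majority)
    return minority + majority[:len(minority)]

lt = apply_lead(label_times, timedelta(hours=2))
lt = threshold(lt, 20.0)
lt = balance(lt)
```

The order matters: thresholding before balancing means the classes being balanced are the final binary labels.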
Since we have automated feature engineering and AutoML, the recommended use of Compose is to closely couple its LabelMaker and search functionality with the rest of the machine learning pipeline. Certain parameters used in search, LabelMaker, and the label transforms can be tuned alongside the machine learning model.
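One way to picture that coupling is to treat labeling parameters (window size, threshold) as part of the hyperparameter search space. The sketch below is hypothetical: `build_labels` stands in for re-running labeling over the raw data, and `score` is a stub objective standing in for a real cross-validated model score.

```python
from itertools import product

# Hypothetical search space: labeling parameters tuned like hyperparameters.
window_sizes = ["1d", "7d"]
thresholds = [10.0, 50.0]

def build_labels(window_size, thresh):
    """Stand-in for re-running labeling + a threshold transform (illustrative)."""
    raw = {"1d": [55.0, 15.0, 50.0, 5.0], "7d": [120.0, 60.0]}[window_size]
    return [y >= thresh for y in raw]

def score(labels):
    """Stub objective: prefer balanced classes (a proxy for a real CV score)."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 1.0 - abs(p - 0.5) * 2

best = max(product(window_sizes, thresholds),
           key=lambda params: score(build_labels(*params)))
```

In a real pipeline, each candidate labeling would feed feature engineering and model training, and the objective would be the downstream model's validation metric.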
You can read about prediction engineering, the way we defined the search algorithm, and other technical details in a peer-reviewed paper published at the IEEE International Conference on Data Science and Advanced Analytics. If you're interested, you can also watch a video here. Please note that some of our thinking and terminology has evolved as we built this library and applied Compose to different industrial-scale problems.
Yes. As mentioned above, the value you extract from your data depends on how you set up the prediction problem. Currently, data scientists do not iterate on the setup of the prediction problem, because there is no structured way of doing it, nor algorithms and libraries to help. We believe that prediction engineering should be taken even more seriously than any other part of actually solving a problem.
We are happy to accept interesting labeling functions. To contribute a new use case and labeling function, we request that you create a representative synthetic dataset, a labeling function, and the parameters for LabelMaker. Once you have these three, write a brief explanation of the use case and open a pull request.
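A contribution might look roughly like the sketch below: a small generator for synthetic data, a labeling function over an event slice, and the labeling parameters collected in one place. The parameter names and the dict packaging are illustrative, not the exact LabelMaker signature.

```python
import random
from datetime import datetime, timedelta

def make_synthetic_log(n_customers=3, events_per_customer=5, seed=0):
    """Small synthetic event log a reviewer can run without private data."""
    rng = random.Random(seed)
    rows = []
    for cid in range(1, n_customers + 1):
        t = datetime(2024, 1, 1)
        for _ in range(events_per_customer):
            t += timedelta(hours=rng.randint(1, 12))
            rows.append({"customer_id": cid, "time": t,
                         "amount": round(rng.uniform(1, 100), 2)})
    return rows

def total_spent(events):
    """Labeling function: total amount spent in the data slice."""
    return sum(e["amount"] for e in events)

# Parameters you would hand to LabelMaker (names illustrative).
label_maker_params = {
    "target_entity": "customer_id",
    "time_index": "time",
    "labeling_function": total_spent,
    "window_size": "1d",
}

log = make_synthetic_log()
```

Keeping the dataset synthetic and seeded makes the example reproducible for reviewers.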
Your label times is the . However, when such a dataset is given, one should ask how the labels were generated. It could be one of several cases: (1) a human assigned them based on their assessment or analysis, (2) they were automatically generated by a system, or (3) they were computed from some data. If it is case (3), one should ask for the function that computed the label, or rewrite it. If it is case (1), one should note that the ref_time would be slightly after the