Frequently Asked Questions

  • Can I work with other modalities besides tables?
    • Besides tables, we have successfully used Lale for text, images, and time-series. In fact, Lale even works for multi-modal data, using the & combinator to specify a different preprocessing path per modality; a sketch appears after this list.
  • Can I work with other tasks besides classification?
  • I get an error when I instantiate an operator imported from Lale. What's wrong?
    • Lale validates hyperparameter values and their combinations when an operator is instantiated, to ensure operators are used correctly. So don't be surprised if you get an error when you initialize a Lale operator with certain hyperparameter values: chances are that those hyperparameters, or that combination of hyperparameters, are invalid. If you believe they are valid, please contact us. A sketch of this validation appears after this list.
  • The algorithm I want to use is not present in Lale. Can I still use it?
  • Can I use Lale for deep learning?
    • There are multiple facets to this question. The Lale library already includes a few DL operators, such as a BERT transformer, a ResNet classifier, and an MLP classifier, and it can perform joint algorithm selection and hyperparameter optimization over pipelines involving these operators. Furthermore, users can wrap additional DL operators as described in the documentation on wrapping new operators. On the other hand, Lale does not currently support full-fledged neural architecture search (NAS); it can only perform architecture search when the architecture is exposed through hyperparameters, such as hidden_layer_sizes for MLP (see the sketch after this list).
  • How does the search space generation work?
    • Lale includes a search space generator that takes in a planned pipeline and the schemas of the operators in that pipeline, and returns a search space for your auto-ML tool of choice; a sketch appears after this list. Our arXiv paper describes how this works in detail: https://arxiv.org/abs/1906.03957
  • Does Lale optimize for computational performance?
    • While Lale focuses mostly on types and automation, we have also done some work on computational performance. It has not been a major focus, however, so if you encounter pain points, please reach out to us.
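
For the question about modalities above, here is a minimal sketch of a multi-modal pipeline, assuming a dataset with both numeric and string columns; the column selectors passed to Project are illustrative assumptions about the data, not requirements of the API:

    from lale.lib.lale import ConcatFeatures, Project
    from lale.lib.sklearn import LogisticRegression, MinMaxScaler, OneHotEncoder

    # One preprocessing path per modality (assumed column types).
    numeric_path = Project(columns={"type": "number"}) >> MinMaxScaler()
    string_path = Project(columns={"type": "string"}) >> OneHotEncoder(handle_unknown="ignore")

    # The & combinator applies both paths to the same input;
    # ConcatFeatures joins their outputs before the classifier.
    pipeline = (numeric_path & string_path) >> ConcatFeatures >> LogisticRegression()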
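
For the question about errors at instantiation, the following sketch shows the kind of validation Lale performs; the specific solver and penalty combination is assumed to violate the operator's constraint schema:

    import jsonschema
    from lale.lib.sklearn import LogisticRegression

    # A valid configuration instantiates without complaint.
    ok = LogisticRegression(solver="liblinear", penalty="l1")

    # An assumed-invalid combination (penalty="l1" with a solver that
    # does not support it) raises a jsonschema.ValidationError.
    try:
        bad = LogisticRegression(solver="lbfgs", penalty="l1")
    except jsonschema.ValidationError as e:
        print("rejected:", e.message)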
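
For the deep-learning question, this sketch illustrates architecture search exposed through hyperparameters: leaving MLPClassifier planned lets the optimizer choose hidden_layer_sizes along with its other tunable hyperparameters. The Iris dataset is used only for illustration:

    from sklearn.datasets import load_iris
    from lale.lib.lale import Hyperopt
    from lale.lib.sklearn import MinMaxScaler, MLPClassifier

    X, y = load_iris(return_X_y=True)

    # A planned pipeline: no hyperparameters are bound yet, so the
    # optimizer searches over them, including hidden_layer_sizes.
    planned = MinMaxScaler >> MLPClassifier

    # auto_configure generates the search space, runs Hyperopt over it,
    # and returns a trained pipeline with the best configuration found.
    trained = planned.auto_configure(X, y, optimizer=Hyperopt, cv=3, max_evals=25)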
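
For the question about search space generation, this sketch shows the user-facing side: a planned pipeline with operator choices (the | combinator) is handed to the Hyperopt wrapper, and Lale generates the combined search space behind the scenes:

    from sklearn.datasets import load_iris
    from lale.lib.lale import Hyperopt, NoOp
    from lale.lib.sklearn import PCA, LogisticRegression, RandomForestClassifier

    X, y = load_iris(return_X_y=True)

    # Planned pipeline: choose whether to apply PCA, and which classifier to use.
    planned = (PCA | NoOp) >> (LogisticRegression | RandomForestClassifier)

    # The generated search space covers both the algorithm choices and
    # each operator's hyperparameters.
    optimizer = Hyperopt(estimator=planned, cv=3, max_evals=25)
    trained = optimizer.fit(X, y)
    best_found = trained.get_pipeline()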