Representation learning

An example representation from the Feat docs.

What makes a representation, i.e., a feature space, good? At a minimum, a good representation produces a model with better generalization than a model trained only on the raw data attributes. In addition, a good representation teases apart the factors of variation in the data into independent components. Finally, an ideal representation is succinct so as to promote intelligibility. This means a representation should have only as many features as there are independent factors in the underlying process, and each of those features should be digestible by the user. My research in this area centers on these three motivations.
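The first criterion, better generalization than the raw attributes, can be illustrated with a minimal sketch. This toy example uses NumPy and an ordinary least-squares fit, not the genetic programming methods from the papers below; the quadratic target and the hand-picked feature φ(x) = x² are assumptions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
# hypothetical data-generating process: one quadratic factor plus noise
y = x**2 + 0.05 * rng.normal(size=200)

def linear_predictions(features, y):
    # ordinary least squares with an intercept term
    A = np.column_stack([features, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

# raw attribute: a linear model on x alone misses the quadratic factor
mse_raw = np.mean((y - linear_predictions(x[:, None], y)) ** 2)
# learned feature: phi(x) = x^2 exposes the true factor of variation,
# so the same linear model now fits well
mse_rep = np.mean((y - linear_predictions(x[:, None] ** 2, y)) ** 2)

print(mse_raw, mse_rep)
```

Here the representation is a single feature matching the single underlying factor, so it also satisfies the succinctness criterion; methods like those below search for such feature spaces automatically rather than assuming them.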

Relevant work:

  1. La Cava, W., & Moore, J. H. (2020). Learning feature spaces for regression with genetic programming. Genetic Programming and Evolvable Machines (GPEM). link, pdf

  2. La Cava, W., & Moore, J. H. (2019). Semantic variation operators for multidimensional genetic programming. GECCO 2019. arXiv

  3. La Cava, W., & Moore, J. H. (2019). Learning concise representations for regression by evolving networks of trees. ICLR 2019. arXiv

  4. La Cava, W., & Moore, J. H. (2017). A general feature engineering wrapper for machine learning using epsilon-lexicase survival. European Conference on Genetic Programming. link, preprint

  5. La Cava, W., & Moore, J. H. (2017). Ensemble representation learning: an analysis of fitness and survival for wrapper-based genetic programming methods. GECCO ’17 (pp. 961–968). Berlin, Germany: ACM. link, arXiv

  6. La Cava, W., Silva, S., Vanneschi, L., Spector, L., & Moore, J. H. (2017). Genetic programming representations for multi-dimensional feature learning in biomedical classification. Applications of Evolutionary Computation (pp. 158–173). Springer, Cham. link, preprint

  7. La Cava, W., Silva, S., Danai, K., Spector, L., Vanneschi, L., & Moore, J. H. (2018). Multidimensional genetic programming for multiclass classification. Swarm and Evolutionary Computation. link, preprint