Carolin Benjamins
Address
Welfengarten 1
30167 Hannover

I am driven by a love for automation and for making complex algorithms more accessible. My further interests include robotics, automated machine learning (AutoML), hyperparameter optimization (HPO), especially Bayesian optimization (BO), as well as reinforcement learning and meta-learning.

I am also one of the developers of our HPO package SMAC.
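
To give a flavor of what SMAC is for, below is a minimal sketch of tuning a toy quadratic objective with SMAC3's Python facade. It assumes the 2.x-style API (Scenario, HyperparameterOptimizationFacade); exact class and argument names may differ between releases.

from ConfigSpace import Configuration, ConfigurationSpace

from smac import HyperparameterOptimizationFacade, Scenario


def train(config: Configuration, seed: int = 0) -> float:
    # Toy objective: SMAC minimizes the returned cost.
    x = config["x"]
    return (x - 2.0) ** 2


# Search space with a single continuous hyperparameter x in [-5, 5].
configspace = ConfigurationSpace({"x": (-5.0, 5.0)})

# Budget of 50 trials on a deterministic objective (assumed 2.x-style Scenario).
scenario = Scenario(configspace, deterministic=True, n_trials=50)

# The Bayesian optimization facade fits a surrogate model and proposes new configurations.
smac = HyperparameterOptimizationFacade(scenario, train)
incumbent = smac.optimize()
print(incumbent)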

Research Interests

  • Dynamic Algorithm Configuration
  • Bayesian Optimization
  • Contextual Reinforcement Learning
  • Meta-Reinforcement Learning

Curriculum Vitae

  • Education & Working Experience

    since 2020: Doctoral Researcher and PhD Student at Leibniz University Hannover

    2017 - 2020: M.Sc. Mechatronics & Robotics at Leibniz University Hannover. Thesis: Fast, Advanced and Low User Effort Object Detection for Robotic Applications. Supervisor: Prof. Dr.-Ing. Tobias Ortmaier

    2014 - 2017: B.Sc. Mechatronics & Robotics at Leibniz University Hannover. Thesis: Analysis of Neural Networks for Segmentation of Image Data. Supervisor: Prof. Dr.-Ing. Eduard Reithmeier

Publications

2024


Becktepe, J., Dierkes, J., Benjamins, C., Mohan, A., Salinas, D., Rajan, R., Hutter, F., Hoos, H., Lindauer, M., & Eimer, T. (2024). ARLBench: Flexible and Efficient Benchmarking for Hyperparameter Optimization in Reinforcement Learning. In 17th European Workshop on Reinforcement Learning (EWRL 2024). Advance online publication.
Benjamins, C., Surana, S., Bent, O., Lindauer, M., & Duckworth, P. (2024). Bayesian Optimisation for Protein Sequence Design: Gaussian Processes with Zero-Shot Protein Language Model Prior Mean. In NeurIPS Workshop on Time Series in the Age of Large Models. Advance online publication.
Benjamins, C., Surana, S., Bent, O., Lindauer, M., & Duckworth, P. (2024). Bayesian Optimization for Protein Sequence Design: Back to Simplicity with Gaussian Processes. In AI for Accelerated Materials Design - NeurIPS Workshop 2024. Advance online publication.
Benjamins, C., Cenikj, G., Nikolikj, A., Mohan, A., Eftimov, T., & Lindauer, M. (2024). Instance Selection for Dynamic Algorithm Configuration with Reinforcement Learning: Improving Generalization. In Genetic and Evolutionary Computation Conference (GECCO). Association for Computing Machinery Special Interest Group on Genetic and Evolutionary Computation (SIGEVO). Advance online publication.

2023


Benjamins, C., Eimer, T., Schubert, F. G., Mohan, A., Döhler, S., Biedenkapp, A., Rosenhahn, B., Hutter, F., & Lindauer, M. (2023). Contextualize Me – The Case for Context in Reinforcement Learning. Transactions on Machine Learning Research, 2023(6). Advance online publication. https://doi.org/10.48550/arXiv.2202.04500
Benjamins, C., Eimer, T., Schubert, F. G., Mohan, A., Döhler, S., Biedenkapp, A., Rosenhahn, B., Hutter, F., & Lindauer, M. (2023). Extended Abstract: Contextualize Me – The Case for Context in Reinforcement Learning. In The 16th European Workshop on Reinforcement Learning (EWRL 2023). Advance online publication. https://openreview.net/forum?id=DJgHzXv61b
Benjamins, C., Raponi, E., Jankovic, A., Doerr, C., & Lindauer, M. (Accepted/In press). Self-Adjusting Weighted Expected Improvement for Bayesian Optimization. In AutoML Conference 2023. PMLR.
Benjamins, C., Raponi, E., Jankovic, A., Doerr, C., & Lindauer, M. (Accepted/In press). Towards Self-Adjusting Weighted Expected Improvement for Bayesian Optimization. In GECCO '23: Proceedings of the Genetic and Evolutionary Computation Conference Companion. Association for Computing Machinery Special Interest Group on Genetic and Evolutionary Computation (SIGEVO).
Denkena, B., Dittrich, M.-A., Noske, H., Lange, D., Benjamins, C., & Lindauer, M. (2023). Application of machine learning for fleet-based condition monitoring of ball screw drives in machine tools. The International Journal of Advanced Manufacturing Technology, 127(3-4), 1143-1164. https://doi.org/10.1007/s00170-023-11524-9
Mohan, A., Benjamins, C., Wienecke, K., Dockhorn, A., & Lindauer, M. (2023). AutoRL Hyperparameter Landscapes. In Conference Proceedings - Second International Conference on Automated Machine Learning (Proceedings of Machine Learning Research; Vol. 228). PMLR. https://doi.org/10.48550/arXiv.2304.02396
Mohan, A., Benjamins, C., Wienecke, K., Dockhorn, A., & Lindauer, M. (Accepted/In press). Extended Abstract: AutoRL Hyperparameter Landscapes. In The 16th European Workshop on Reinforcement Learning (EWRL 2023). https://openreview.net/forum?id=4Zu0l5lBgc
Schubert, F., Benjamins, C., Döhler, S., Rosenhahn, B., & Lindauer, M. (2023). POLTER: Policy Trajectory Ensemble Regularization for Unsupervised Reinforcement Learning. Transactions on Machine Learning Research, 2023(4). https://doi.org/10.48550/arXiv.2205.11357

2022


Benjamins, C., Raponi, E., Jankovic, A., Blom, K. V. D., Santoni, M. L., Lindauer, M., & Doerr, C. (2022). PI is back! Switching Acquisition Functions in Bayesian Optimization. Advance online publication. https://arxiv.org/abs/2211.01455
Benjamins, C., Jankovic, A., Raponi, E., Blom, K. V. D., Lindauer, M., & Doerr, C. (2022). Towards Automated Design of Bayesian Optimization via Exploratory Landscape Analysis. Contribution to the Workshop on Meta-Learning (MetaLearn 2022). https://openreview.net/forum?id=cmxtTF_IHd
Lindauer, M., Eggensperger, K., Feurer, M., Biedenkapp, A., Deng, D., Benjamins, C., Sass, R., & Hutter, F. (2022). SMAC3: A Versatile Bayesian Optimization Package for Hyperparameter Optimization. Journal of Machine Learning Research, 2022(23). https://arxiv.org/abs/2109.09831

2021


Benjamins, C., Eimer, T., Schubert, F., Biedenkapp, A., Rosenhahn, B., Hutter, F., & Lindauer, M. (2021). CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning. In Workshop on Ecological Theory of Reinforcement Learning, NeurIPS 2021. Advance online publication. https://arxiv.org/abs/2110.02102
Eimer, T., Benjamins, C., & Lindauer, M. T. (2021). Hyperparameters in Contextual RL are Highly Situational. In International Workshop on Ecological Theory of RL (at NeurIPS). https://doi.org/10.48550/arXiv.2212.10876