- Romera-Paredes, B., Barekatain, M., Novikov, A., Balog, M., Kumar, M.P., Dupont, E., Ruiz, F.J., Ellenberg, J.S., Wang, P., Fawzi, O., et al.: Mathematical discoveries from program search with large language models. Nature, 1–3 (2023)
- Meyerson, E., Nelson, M.J., Bradley, H., Moradi, A., Hoover, A.K., Lehman, J.: Language model crossover: Variation through few-shot prompting. arXiv preprint arXiv:2302.12170 (2023)
- Liu, S., Chen, C., Qu, X., Tang, K., Ong, Y.-S.: Large language models as evolutionary optimizers. arXiv preprint arXiv:2310.19046 (2023)
- Lehman, J., Gordon, J., Jain, S., Ndousse, K., Yeh, C., Stanley, K.O.: Evolution through large models. In: Banzhaf, W., Machado, P., Zhang, M. (eds.) Handbook of Evolutionary Machine Learning, pp. 331–366. Springer, Singapore (2024)
- Ma, Y.J., Liang, W., Wang, G., Huang, D.-A., Bastani, O., Jayaraman, D., Zhu, Y., Fan, L., Anandkumar, A.: Eureka: Human-level reward design via coding large language models. arXiv preprint arXiv:2310.12931 (2023)
- Nasir, M.U., Earle, S., Togelius, J., James, S., Cleghorn, C.: LLMatic: Neural architecture search via large language models and quality-diversity optimization. arXiv preprint arXiv:2306.01102 (2023)
- Zheng, M., Su, X., You, S., Wang, F., Qian, C., Xu, C., Albanie, S.: Can GPT-4 perform neural architecture search? arXiv preprint arXiv:2304.10970 (2023)
- Wang, H., Gao, Y., Zheng, X., Zhang, P., Chen, H., Bu, J.: Graph neural architecture search with GPT-4. arXiv preprint arXiv:2310.01436 (2023)
- Zhang, M., Desai, N., Bae, J., Lorraine, J., Ba, J.: Using large language models for hyperparameter optimization. In: NeurIPS 2023 Foundation Models for Decision Making Workshop (2023)
- Zhang, S., Gong, C., Wu, L., Liu, X., Zhou, M.: AutoML-GPT: Automatic machine learning with GPT. arXiv preprint arXiv:2305.02499 (2023)
- Liu, F., Lin, X., Wang, Z., Yao, S., Tong, X., Yuan, M., Zhang, Q.: Large language model for multi-objective evolutionary optimization. arXiv preprint arXiv:2310.12541 (2023)
- Yang, C., Wang, X., Lu, Y., Liu, H., Le, Q.V., Zhou, D., Chen, X.: Large language models as optimizers. arXiv preprint arXiv:2309.03409 (2023)
- Xiao, L., Chen, X.: Enhancing LLM with evolutionary fine tuning for news summary generation. arXiv preprint arXiv:2307.02839 (2023)
- Bradley, H., Dai, A., Teufel, H., Zhang, J., Oostermeijer, K., Bellagente, M., Clune, J., Stanley, K., Schott, G., Lehman, J.: Quality-diversity through AI feedback. arXiv preprint arXiv:2310.13032 (2023)
- Lanzi, P.L., Loiacono, D.: ChatGPT and other large language models as evolutionary engines for online interactive collaborative game design. In: Proceedings of the Genetic and Evolutionary Computation Conference. GECCO '23, pp. 1383–1390. Association for Computing Machinery, New York, NY, USA (2023)
- Sudhakaran, S., González-Duque, M., Freiberger, M., Glanois, C., Najarro, E., Risi, S.: MarioGPT: Open-ended text2level generation through large language models. In: Thirty-seventh Conference on Neural Information Processing Systems (2023)
- Jablonka, K.M., Ai, Q., Al-Feghali, A., Badhwar, S., Bocarsly, J.D., Bran, A.M., Bringuier, S., Brinson, L.C., Choudhary, K., Circi, D., et al.: 14 examples of how LLMs can transform materials science and chemistry: a reflection on a large language model hackathon. Digital Discovery 2(5), 1233–1250 (2023)
- Liu, F., Tong, X., Yuan, M., Lin, X., Luo, F., Wang, Z., Lu, Z., Zhang, Q.: An example of evolutionary computation + large language model beating human: Design of efficient guided local search. arXiv preprint arXiv:2401.02051 (2024)
- Liu, F., Tong, X., Yuan, M., Zhang, Q.: Algorithm evolution using large language model. arXiv preprint arXiv:2311.15249 (2023)
- Brownlee, A.E., Callan, J., Even-Mendoza, K., Geiger, A., Hanna, C., Petke, J., Sarro, F., Sobania, D.: Enhancing genetic improvement mutations using large language models. In: International Symposium on Search Based Software Engineering, pp. 153–159 (2023). Springer
- Chen, Z., Cao, L., Madden, S., Fan, J., Tang, N., Gu, Z., Shang, Z., Liu, C., Cafarella, M., Kraska, T.: Seed: Simple, efficient, and effective data management via large language models. arXiv preprint arXiv:2310.00749 (2023)
- Xia, C.S., Paltenghi, M., Tian, J.L., Pradel, M., Zhang, L.: Fuzz4All: Universal fuzzing with large language models. In: 2024 IEEE/ACM 46th International Conference on Software Engineering (ICSE) (2024)
- Bradley, H., Fan, H., Galanos, T., et al.: The OpenELM library: Leveraging progress in language models for novel evolutionary algorithms (2023)
- Ye, Haoran, et al. "ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution." arXiv preprint arXiv:2402.01145 (2024).
- Li, Xiaoxia, et al. "Semantic Mirror Jailbreak: Genetic Algorithm Based Jailbreak Prompts Against Open-source LLMs." arXiv preprint arXiv:2402.14872 (2024).
- Lange, Robert Tjarko, Yingtao Tian, and Yujin Tang. "Large Language Models As Evolution Strategies." arXiv preprint arXiv:2402.18381 (2024).
- Mercer, Johnathan. "EvoGPT-f: An Evolutionary GPT Framework for Benchmarking Formal Math Languages." arXiv preprint arXiv:2402.16878 (2024).
- Ma, Zeyuan, et al. "LLaMoCo: Instruction Tuning of Large Language Models for Optimization Code Generation." arXiv preprint arXiv:2403.01131 (2024).
- Maddigan, Paula, Andrew Lensen, and Bing Xue. "Explaining genetic programming trees using large language models." arXiv preprint arXiv:2403.03397 (2024).
- Lange, Robert Tjarko, Yingtao Tian, and Yujin Tang. "Evolution Transformer: In-Context Evolutionary Optimization." arXiv preprint arXiv:2403.02985 (2024).
- Morris, Clint, Michael Jurado, and Jason Zutty. "LLM Guided Evolution - The Automation of Models Advancing Models." arXiv preprint arXiv:2403.11446 (2024).
- Shem-Tov, Eliad, and Achiya Elyasaf. "Deep Neural Crossover." arXiv preprint arXiv:2403.11159 (2024).
- Huang, Beichen, et al. "Exploring the True Potential: Evaluating the Black-box Optimization Capability of Large Language Models." arXiv preprint arXiv:2404.06290 (2024).
- Lim, Bryan, Manon Flageat, and Antoine Cully. "Large Language Models as In-context AI Generators for Quality-Diversity." arXiv preprint arXiv:2404.15794 (2024).
- Yao, Yiming, et al. "Evolve Cost-aware Acquisition Functions Using Large Language Models." arXiv preprint arXiv:2404.16906 (2024).
- Shojaee, Parshin, et al. "LLM-SR: Scientific Equation Discovery via Programming with Large Language Models." arXiv preprint arXiv:2404.18400 (2024).
- Song, Xingyou, et al. "Position: Leverage Foundational Models for Black-Box Optimization." arXiv preprint arXiv:2405.03547 (2024).
- Cai, Jinyu, et al. "Exploring the Improvement of Evolutionary Computation via Large Language Models." arXiv preprint arXiv:2405.02876 (2024).
- Wang, Zeyi, et al. "Large Language Model-Aided Evolutionary Search for Constrained Multiobjective Optimization." arXiv preprint arXiv:2405.05767 (2024).
- Huang, Sen, et al. "When Large Language Model Meets Optimization." arXiv preprint arXiv:2405.10098 (2024).
- Gaier, Adam, et al. "Generative Design through Quality-Diversity Data Synthesis and Language Models." arXiv preprint arXiv:2405.09997 (2024).
- Singh, Gaurav, and Kavitesh Kumar Bali. "Enhancing Decision-Making in Optimization through LLM-Assisted Inference: A Neural Networks Perspective." arXiv preprint arXiv:2405.07212 (2024).
- Arora, Akhil, et al. "Fleet of Agents: Coordinated Problem Solving with Large Language Models using Genetic Particle Filtering." arXiv preprint arXiv:2405.06691 (2024).
- Kramer, Oliver. "Large Language Models for Tuning Evolution Strategies." arXiv preprint arXiv:2405.10999 (2024).
- Nie, Allen, et al. "The Importance of Directional Feedback for LLM-based Optimizers." arXiv preprint arXiv:2405.16434 (2024).
- van Stein, Niki, and Thomas Bäck. "LLaMEA: A Large Language Model Evolutionary Algorithm for Automatically Generating Metaheuristics." arXiv preprint arXiv:2405.20132 (2024).
- Wang, Haorui, et al. "Efficient Evolutionary Search Over Chemical Space with Large Language Models." arXiv preprint arXiv:2406.16976 (2024).
- Hao, Hao, Xiaoqun Zhang, and Aimin Zhou. "Large Language Models as Surrogate Models in Evolutionary Algorithms: A Preliminary Study." arXiv preprint arXiv:2406.10675 (2024).
- Huang, Yuxiao, et al. "Towards Next Era of Multi-objective Optimization: Large Language Models as Architects of Evolutionary Operators." arXiv preprint arXiv:2406.08987 (2024).
- Reissmann, Maximilian, et al. "Accelerating evolutionary exploration through language model-based transfer learning." arXiv preprint arXiv:2406.05166 (2024).
- Aki, Fuma, et al. "LLM-POET: Evolving Complex Environments using Large Language Models." arXiv preprint arXiv:2406.04663 (2024).
- Sobania, Dominik, et al. "A Comparison of Large Language Models and Genetic Programming for Program Synthesis." IEEE Transactions on Evolutionary Computation (2024).
- Liu, Fei, et al. "Evolution of Heuristics: Towards Efficient Automatic Algorithm Design Using Large Language Model." Forty-first International Conference on Machine Learning (2024).
- Sun, T., Shao, Y., Qian, H., Huang, X., Qiu, X.: Black-box tuning for language-model-as-a-service. In: International Conference on Machine Learning, pp. 20841–20855 (2022). PMLR
- Fernando, C., Banarse, D., Michalewski, H., Osindero, S., Rocktäschel, T.: Promptbreeder: Self-referential self-improvement via prompt evolution. arXiv preprint arXiv:2309.16797 (2023)
- Li, Y.B., Wu, K.: SPELL: Semantic prompt evolution based on a LLM. arXiv preprint arXiv:2310.01260 (2023)
- Chen, A., Dohan, D., So, D.: EvoPrompting: Language models for code-level neural architecture search. In: Thirty-seventh Conference on Neural Information Processing Systems (2023)
- Zhang, Z., Wang, S., Yu, W., Xu, Y., Iter, D., Zeng, Q., Liu, Y., Zhu, C., Jiang, M.: Auto-instruct: Automatic instruction generation and ranking for black-box language models. In: Bouamor, H., Pino, J., Bali, K. (eds.) Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 9850–9867. Association for Computational Linguistics, Singapore (2023)
- Choong, H.X., Ong, Y.-S., Gupta, A., Chen, C., Lim, R.: Jack and masters of all trades: One-pass learning sets of model sets from large pre-trained models. IEEE Computational Intelligence Magazine 18(3), 29–40 (2023)
- Klein, A., Golebiowski, J., Ma, X., Perrone, V., Archambeau, C.: Structural pruning of large language models via neural architecture search. In: AutoML Conference 2023 (Workshop) (2023)
- Anonymous: Knowledge fusion by evolving language models. In: Submitted to The Twelfth International Conference on Learning Representations (2023), under review
- Sun, T., He, Z., Qian, H., Zhou, Y., Huang, X.-J., Qiu, X.: BBTv2: Towards a gradient-free future with large language models. In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 3916–3930 (2022)
- Fei, Z., Fan, M., Huang, J.: Gradient-free textual inversion. In: Proceedings of the 31st ACM International Conference on Multimedia. MM '23, pp. 1364–1373. Association for Computing Machinery, New York, NY, USA (2023)
- Shen, M., Ghosh, S., Sattigeri, P., Das, S., Bu, Y., Wornell, G.: Reliable gradient-free and likelihood-free prompt tuning. In: Vlachos, A., Augenstein, I. (eds.) Findings of the Association for Computational Linguistics: EACL 2023, pp. 2416–2429. Association for Computational Linguistics, Dubrovnik, Croatia (2023)
- Qi, S., Zhang, Y.: Prompt-calibrated tuning: Improving black-box optimization for few-shot scenarios. In: 2023 4th International Seminar on Artificial Intelligence, Networking and Information Technology (AINIT), pp. 402–407 (2023). IEEE
- Sun, Q., Han, C., Chen, N., Zhu, R., Gong, J., Li, X., Gao, M.: Make prompt-based black-box tuning colorful: Boosting model generalization from three orthogonal perspectives. arXiv preprint arXiv:2305.08088 (2023)
- Zheng, Y., Tan, Z., Li, P., Liu, Y.: Black-box prompt tuning with subspace learning. arXiv preprint arXiv:2305.03518 (2023)
- Han, C., Cui, L., Zhu, R., Wang, J., Chen, N., Sun, Q., Li, X., Gao, M.: When gradient descent meets derivative-free optimization: A match made in black-box scenario. In: Rogers, A., Boyd-Graber, J., Okazaki, N. (eds.) Findings of the Association for Computational Linguistics: ACL 2023, pp. 868–880. Association for Computational Linguistics, Toronto, Canada (2023)
- Sun, J., Xu, Z., Yin, H., Yang, D., Xu, D., Chen, Y., Roth, H.R.: FedBPT: Efficient federated black-box prompt tuning for large language models. arXiv preprint arXiv:2310.01467 (2023)
- Zhao, J., Wang, Z., Yang, F.: Genetic prompt search via exploiting language model probabilities. In: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, pp. 5296–5305 (2023)
- Prasad, A., Hase, P., Zhou, X., Bansal, M.: GrIPS: Gradient-free, edit-based instruction search for prompting large language models. In: Vlachos, A., Augenstein, I. (eds.) Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pp. 3845–3864. Association for Computational Linguistics, Dubrovnik, Croatia (2023)
- Zhou, H., Wan, X., Vulić, I., Korhonen, A.: Survival of the most influential prompts: Efficient black-box prompt search via clustering and pruning. In: Bouamor, H., Pino, J., Bali, K. (eds.) Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 13064–13077. Association for Computational Linguistics, Singapore (2023)
- Lapid, R., Langberg, R., Sipper, M.: Open sesame! Universal black box jailbreaking of large language models. arXiv preprint arXiv:2309.01446 (2023)
- Yu, L., Chen, Q., Lin, J., He, L.: Black-box prompt tuning for vision-language model as a service. In: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, pp. 1686–1694 (2023)
- Akiba, Takuya, et al. "Evolutionary optimization of model merging recipes." arXiv preprint arXiv:2403.13187 (2024).
- Du, Guodong, et al. "Knowledge Fusion By Evolving Weights of Language Models." arXiv preprint arXiv:2406.12208 (2024).
- Wong, Melvin, et al. "Generative AI-based Prompt Evolution Engineering Design Optimization With Vision-Language Model." arXiv preprint arXiv:2406.09143 (2024).
- Petruzzellis, Flavio, Alberto Testolin, and Alessandro Sperduti. "Assessing the Emergent Symbolic Reasoning Abilities of Llama Large Language Models." arXiv preprint arXiv:2406.06588 (2024).
- Hazra, Rishi, et al. "REvolve: Reward Evolution with Large Language Models for Autonomous Driving." arXiv preprint arXiv:2406.01309 (2024).
Disclaimer
If you have any questions, please feel free to contact us. Email: xiaofengxd@126.com
Authors of scientific papers are encouraged to cite the following paper:
- Wang, Chao, et al. "When large language models meet evolutionary algorithms." arXiv preprint arXiv:2401.10510 (2024).