# Official Repo of Tree of Thoughts (ToT)

![teaser](teaser.png)

Official implementation of the paper [Tree of Thoughts: Deliberate Problem Solving with Large Language Models](https://arxiv.org/abs/2305.10601), with code, prompts, and model outputs.
Also check out [the tweet thread](https://twitter.com/ShunyuYao12/status/1659357547474681857) for a one-minute summary.

**Note: https://github.com/kyegomez/tree-of-thoughts is not the official/correct implementation of the results in the paper. Please check https://github.com/ysymyth/tree-of-thought-llm/issues/17.**

Please cite the paper and star this repo if you use ToT and find it interesting/useful. Thanks!
```bibtex
@misc{yao2023tree,
      title={{Tree of Thoughts}: Deliberate Problem Solving with Large Language Models},
      author={Shunyu Yao and Dian Yu and Jeffrey Zhao and Izhak Shafran and Thomas L. Griffiths and Yuan Cao and Karthik Narasimhan},
      year={2023},
      eprint={2305.10601},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
## Setup
You first need an OpenAI API key, stored in the environment variable ``OPENAI_API_KEY`` (see [here](https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety)). If you use a custom base URL, set it via the environment variable ``OPENAI_API_BASE`` (e.g. https://api.openai.com/v1).
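For example, in your shell (a minimal sketch; the key below is a placeholder):

```bash
# Store the API key in the environment (replace the placeholder with your real key).
export OPENAI_API_KEY="sk-your-key-here"
# Optional: only needed if you route requests through a custom endpoint.
export OPENAI_API_BASE="https://api.openai.com/v1"
```
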
Package requirements: ``pip install openai backoff sympy numpy``
## Experiments
Run experiments via ``sh scripts/{game24, text, crosswords}/{standard_sampling, cot_sampling, bfs}.sh``. The exception is crosswords, where ToT uses a DFS algorithm instead of BFS; run it via ``scripts/crosswords/search_crosswords-dfs.ipynb``.

The very simple ``run.py`` implements the ToT + BFS algorithm, as well as the naive IO/CoT sampling. Some key arguments (an example invocation follows the list):
- ``--naive_run``: if True, run naive IO/CoT sampling instead of ToT + BFS.
- ``--prompt_sample`` (choices=[``standard``, ``cot``]): the type of prompt used for sampling (standard IO or chain of thought)
- ``--method_generate`` (choices=[``sample``, ``propose``]): thought generator, whether to sample independent thoughts (used in Creative Writing) or propose sequential thoughts (used in Game of 24)
- ``--method_evaluate`` (choices=[``value``, ``vote``]): state evaluator, whether to value states independently (used in Game of 24) or vote on states together (used in Creative Writing)
- ``--n_generate_sample``: number of times to prompt for thought generation
- ``--n_evaluate_sample``: number of times to prompt for state evaluation
- ``--n_select_sample``: number of states to keep from each step (i.e. ``b`` in the paper's ToT + BFS algorithm)
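
For example, a ToT + BFS run on Game of 24 might look like the sketch below (the ``--task`` flag and its value are assumptions for illustration; see the scripts under ``scripts/game24/`` for the exact invocations used in the paper):

```bash
# Hypothetical ToT + BFS invocation; check scripts/game24/bfs.sh for the exact flags.
python run.py \
    --task game24 \
    --method_generate propose \
    --method_evaluate value \
    --n_generate_sample 1 \
    --n_evaluate_sample 3 \
    --n_select_sample 5
```
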
## Trajectories
``logs/`` contains all the trajectories from the paper's experiments, except for ``logs/game24/gpt-4_0.7_propose1_value3_greedy5_start900_end1000.json``, which was reproduced after the paper (the original experiment was run in a notebook) and scored 69% instead of the original 74% due to randomness in GPT decoding. We hope to aggregate multiple runs in the future to account for sampling randomness and update the paper accordingly, but this should not affect the paper's main conclusions.
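
To take a quick look at one of these trajectory files, you can pretty-print it (a generic sketch using Python's standard-library JSON tool):

```bash
# Pretty-print the first lines of the reproduced Game of 24 trajectory log.
python -m json.tool logs/game24/gpt-4_0.7_propose1_value3_greedy5_start900_end1000.json | head -n 40
```
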
## Questions
Feel free to contact shunyuyao.cs@gmail.com or open an issue if you have any questions.