Mirror of https://github.com/stackblitz/bolt.new (synced 2025-02-05 20:46:43 +00:00)
chore(doc): add prompt caching to README
parent 394d1439c1
commit 62bdbc416c
@@ -31,6 +31,7 @@ https://thinktank.ottomator.ai
 - ✅ Ability to revert code to earlier version (@wonderwhy-er)
 - ✅ Cohere Integration (@hasanraiyan)
 - ✅ Dynamic model max token length (@hasanraiyan)
+- ✅ Prompt caching (@SujalXplores)
 - ⬜ **HIGH PRIORITY** - Prevent Bolt from rewriting files as often (file locking and diffs)
 - ⬜ **HIGH PRIORITY** - Better prompting for smaller LLMs (code window sometimes doesn't start)
 - ⬜ **HIGH PRIORITY** - Load local projects into the app
@@ -42,7 +43,6 @@ https://thinktank.ottomator.ai
 - ⬜ Perplexity Integration
 - ⬜ Vertex AI Integration
 - ⬜ Deploy directly to Vercel/Netlify/other similar platforms
-- ⬜ Prompt caching
 - ⬜ Better prompt enhancing
 - ⬜ Have LLM plan the project in a MD file for better results/transparency
 - ⬜ VSCode Integration with git-like confirmations
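For context on the newly checked item: with Anthropic's Messages API, prompt caching is typically enabled by tagging a large, static prompt block with `cache_control` so subsequent requests reuse the cached prefix instead of re-processing it. The sketch below is a minimal, hypothetical illustration using the official `@anthropic-ai/sdk`; it is not the implementation referenced in this commit, and `LONG_SYSTEM_PROMPT` is a placeholder for whatever static system prompt an app ships.

```ts
import Anthropic from '@anthropic-ai/sdk';

// Placeholder for a large, unchanging system prompt (the part worth caching).
const LONG_SYSTEM_PROMPT = '...large, static system prompt...';

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function main() {
  const response = await anthropic.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 1024,
    // Marking the system block as cacheable lets repeated requests hit the cache.
    system: [
      {
        type: 'text',
        text: LONG_SYSTEM_PROMPT,
        cache_control: { type: 'ephemeral' },
      },
    ],
    messages: [{ role: 'user', content: 'Scaffold a Vite + React app.' }],
  });

  // usage reports cache_creation_input_tokens / cache_read_input_tokens,
  // which is how you can verify the cache is actually being reused.
  console.log(response.usage);
}

main().catch(console.error);
```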