chore(doc): add prompt caching to README

This commit is contained in:
SujalXplores 2024-11-27 00:21:57 +05:30
parent 394d1439c1
commit 62bdbc416c


@@ -31,6 +31,7 @@ https://thinktank.ottomator.ai
- ✅ Ability to revert code to earlier version (@wonderwhy-er)
- ✅ Cohere Integration (@hasanraiyan)
- ✅ Dynamic model max token length (@hasanraiyan)
- ✅ Prompt caching (@SujalXplores)
- ⬜ **HIGH PRIORITY** - Prevent Bolt from rewriting files as often (file locking and diffs)
- ⬜ **HIGH PRIORITY** - Better prompting for smaller LLMs (code window sometimes doesn't start)
- ⬜ **HIGH PRIORITY** - Load local projects into the app
@@ -42,7 +43,6 @@ https://thinktank.ottomator.ai
- ⬜ Perplexity Integration
- ⬜ Vertex AI Integration
- ⬜ Deploy directly to Vercel/Netlify/other similar platforms
- ⬜ Prompt caching
- ⬜ Better prompt enhancing
- ⬜ Have LLM plan the project in a MD file for better results/transparency
- ⬜ VSCode Integration with git-like confirmations
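For context on the feature this commit checks off: one simple form of prompt caching is memoizing completions keyed by the prompt, so identical requests skip the LLM round-trip. The sketch below is hypothetical (`cachedCompletion` and `callLLM` are illustrative names, not from the bolt.diy codebase) and is not necessarily how @SujalXplores implemented it; provider-side prefix caching is another common approach.

```typescript
// Hypothetical sketch of prompt caching via response memoization.
// Identical prompts return the cached completion instead of
// triggering a second LLM call.
const cache = new Map<string, string>();

async function cachedCompletion(
  prompt: string,
  callLLM: (p: string) => Promise<string>, // the real LLM client goes here
): Promise<string> {
  const hit = cache.get(prompt);
  if (hit !== undefined) {
    return hit; // cache hit: no network round-trip, no token cost
  }
  const result = await callLLM(prompt);
  cache.set(prompt, result);
  return result;
}
```

In practice the cache key would also include the model name and sampling parameters, and entries would need an eviction policy (TTL or LRU) so stale completions don't accumulate.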