From 50de8d0e1b26e3198427aab6b1fe1302d904a99f Mon Sep 17 00:00:00 2001
From: Cole Medin
Date: Wed, 20 Nov 2024 08:27:21 -0600
Subject: [PATCH] Updating README with new features and a link to our community

---
 README.md | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index 52889233..6bcacd4e 100644
--- a/README.md
+++ b/README.md
@@ -1,8 +1,12 @@
 [![Bolt.new: AI-Powered Full-Stack Web Development in the Browser](./public/social_preview_index.jpg)](https://bolt.new)
 
-# Bolt.new Fork by Cole Medin
+# Bolt.new Fork by Cole Medin - oTToDev
 
-This fork of Bolt.new allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, or Groq models - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.
+This fork of Bolt.new (oTToDev) allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.
+
+Join the community for oTToDev!
+
+https://thinktank.ottomator.ai
 
 # Requested Additions to this Fork - Feel Free to Contribute!!
 
@@ -20,21 +24,23 @@ This fork of Bolt.new allows you to choose the LLM that you use for each prompt!
 - ✅ Publish projects directly to GitHub (@goncaloalves)
 - ✅ Ability to enter API keys in the UI (@ali00209)
 - ✅ xAI Grok Beta Integration (@milutinke)
+- ✅ LM Studio Integration (@karrot0)
+- ✅ HuggingFace Integration (@ahsan3219)
+- ✅ Bolt terminal to see the output of LLM run commands (@thecodacus)
+- ✅ Streaming of code output (@thecodacus)
+- ✅ Ability to revert code to earlier version (@wonderwhy-er)
 - ⬜ **HIGH PRIORITY** - Prevent Bolt from rewriting files as often (file locking and diffs)
 - ⬜ **HIGH PRIORITY** - Better prompting for smaller LLMs (code window sometimes doesn't start)
-- ⬜ **HIGH PRIORITY** Load local projects into the app
+- ⬜ **HIGH PRIORITY** - Load local projects into the app
 - ⬜ **HIGH PRIORITY** - Attach images to prompts
 - ⬜ **HIGH PRIORITY** - Run agents in the backend as opposed to a single model call
 - ⬜ Mobile friendly
-- ⬜ LM Studio Integration
 - ⬜ Together Integration
 - ⬜ Azure Open AI API Integration
-- ⬜ HuggingFace Integration
 - ⬜ Perplexity Integration
 - ⬜ Vertex AI Integration
 - ⬜ Cohere Integration
 - ⬜ Deploy directly to Vercel/Netlify/other similar platforms
-- ⬜ Ability to revert code to earlier version
 - ⬜ Prompt caching
 - ⬜ Better prompt enhancing
 - ⬜ Have LLM plan the project in a MD file for better results/transparency
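
Note for contributors: the README text in this patch says the fork "is easily extended to use any other model supported by the Vercel AI SDK." As a rough illustration only (not the fork's actual provider registry or file layout), the sketch below shows how an OpenAI-compatible local backend such as LM Studio could be wired up through the Vercel AI SDK in TypeScript. The base URL, environment variable name, API key placeholder, and model id are all assumptions.

```typescript
// Hypothetical sketch: registering an OpenAI-compatible local endpoint
// (e.g. LM Studio) via the Vercel AI SDK. Names below are illustrative.
import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';

// LM Studio commonly serves an OpenAI-compatible API on localhost:1234;
// adjust the URL and env var to match your setup (assumed defaults).
const lmstudio = createOpenAI({
  baseURL: process.env.LMSTUDIO_API_BASE_URL ?? 'http://localhost:1234/v1',
  apiKey: 'lm-studio', // local servers typically ignore the key, but the SDK requires one
});

export async function runPrompt(prompt: string) {
  const result = await streamText({
    model: lmstudio('local-model'), // placeholder model id exposed by the local server
    prompt,
  });

  // Stream the generated tokens to stdout as they arrive.
  for await (const chunk of result.textStream) {
    process.stdout.write(chunk);
  }
}
```

Because most local backends (Ollama, LM Studio) expose an OpenAI-compatible API, pointing an OpenAI-style provider at a custom `baseURL` is usually the simplest way to plug them into the AI SDK; a provider with first-class SDK support would use its own `@ai-sdk/*` package instead.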