Merge branch 'main' into env-file-fix

Anirban Kar 2024-12-18 21:57:02 +05:30 committed by GitHub
commit 3fba4f0b61
4 changed files with 58 additions and 45 deletions

FAQ.md

@@ -2,6 +2,18 @@
# bolt.diy
## Recommended Models for bolt.diy
For the best experience with bolt.diy, we recommend using the following models:
- **Claude 3.5 Sonnet (old)**: Best overall coder, providing excellent results across all use cases
- **Gemini 2.0 Flash**: Exceptional speed while maintaining good performance
- **GPT-4o**: Strong alternative to Claude 3.5 Sonnet with comparable capabilities
- **DeepSeekCoder V2 236b**: Best open source model (available through OpenRouter, DeepSeek API, or self-hosted)
- **Qwen 2.5 Coder 32b**: Best model for self-hosting with reasonable hardware requirements
**Note**: Models with fewer than 7b parameters typically lack the capability to properly interact with bolt!
## FAQ
### How do I get the best results with bolt.diy?
@@ -34,14 +46,18 @@ We have seen this error a couple times and for some reason just restarting the D
We promise you that we are constantly testing new PRs coming into bolt.diy, and the preview is core functionality, so the application is not broken! When you get a blank preview or don't get a preview at all, it is generally because the LLM hallucinated bad code or incorrect commands. We are working on making this more transparent so it is obvious. Sometimes the error will appear in the developer console too, so check there as well.
-### How to add a LLM:
-To make new LLMs available to use in this version of bolt.new, head on over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object that has the model ID for the name (get this from the provider's API documentation), a label for the frontend model dropdown, and the provider.
-By default, Anthropic, OpenAI, Groq, and Ollama are implemented as providers, but the YouTube video for this repo covers how to extend this to work with more providers if you wish!
-When you add a new model to the MODEL_LIST array, it will immediately be available to use when you run the app locally or reload it. For Ollama models, make sure you have the model installed already before trying to use it here!
### Everything works but the results are bad
This goes back to the point above: local LLMs are getting very powerful, but you will still see better (sometimes much better) results with the largest LLMs like GPT-4o, Claude 3.5 Sonnet, and DeepSeek Coder V2 236b. If you are using smaller LLMs like Qwen-2.5-Coder, consider them more experimental and educational at this point. They can build smaller applications really well, which is super impressive for a local LLM, but for larger-scale applications you will still want to use the larger LLMs!
### Received structured exception #0xc0000005: access violation
If you are getting this, you are probably on Windows. The fix is generally to update the [Visual C++ Redistributable](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170).
+### How to add an LLM:
+To make new LLMs available to use in this version of bolt.diy, head on over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object that has the model ID for the name (get this from the provider's API documentation), a label for the frontend model dropdown, and the provider.
+By default, many providers are already implemented, but the YouTube video for this repo covers how to extend this to work with more providers if you wish!
+When you add a new model to the MODEL_LIST array, it will immediately be available to use when you run the app locally or reload it.
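
For illustration, a MODEL_LIST entry might look like the sketch below. This is a minimal sketch based only on the description above; the field names (`name`, `label`, `provider`) and the example model ID are assumptions, so verify the exact shape against `app/utils/constants.ts` in your checkout.

```typescript
// Minimal sketch of a MODEL_LIST entry (assumed shape; verify against
// app/utils/constants.ts in your version of the repo).
export const MODEL_LIST = [
  {
    name: 'claude-3-5-sonnet-20240620', // model ID from the provider's API documentation
    label: 'Claude 3.5 Sonnet',         // text shown in the frontend model dropdown
    provider: 'Anthropic',              // must match an implemented provider
  },
  // ...append new model objects here; they appear in the dropdown on reload
];
```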

README.md

@@ -4,7 +4,9 @@
Welcome to bolt.diy, the official open source version of Bolt.new (previously known as oTToDev and bolt.new ANY LLM), which allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models, and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.
-Check the [bolt.diy Docs](https://stackblitz-labs.github.io/bolt.diy/) for more information. This documentation is still being updated after the transfer.
+Check the [bolt.diy Docs](https://stackblitz-labs.github.io/bolt.diy/) for more information.
We have also launched an experimental agent called the "bolt.diy Expert" that can answer common questions about bolt.diy. Find it here on the [oTTomator Live Agent Studio](https://studio.ottomator.ai/).
bolt.diy was originally started by [Cole Medin](https://www.youtube.com/@ColeMedin) but has quickly grown into a massive community effort to build the BEST open source AI coding assistant!
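
The intro above notes that bolt.diy can be extended to any model supported by the Vercel AI SDK. As a rough sketch of what that means in practice (this is not the repo's actual wiring; it assumes the public `ai` and `@ai-sdk/anthropic` packages):

```typescript
// Illustrative only: how a Vercel AI SDK model generates text.
// Any SDK-supported provider/model can be slotted in the same way.
import { generateText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const { text } = await generateText({
  model: anthropic('claude-3-5-sonnet-20240620'), // swap in any supported model
  prompt: 'Say hello from bolt.diy!',
});
console.log(text);
```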
@@ -95,34 +97,6 @@ Clone the repository using Git:
```bash
git clone -b stable https://github.com/stackblitz-labs/bolt.diy
```
-### (Optional) Configure Environment Variables
-Most environment variables can be configured directly through the settings menu of the application. However, if you need to manually configure them:
-1. Rename `.env.example` to `.env.local`.
-2. Add your LLM API keys. For example:
-```env
-GROQ_API_KEY=YOUR_GROQ_API_KEY
-OPENAI_API_KEY=YOUR_OPENAI_API_KEY
-ANTHROPIC_API_KEY=YOUR_ANTHROPIC_API_KEY
-```
-**Note**: Ollama does not require an API key as it runs locally.
-3. Optionally, set additional configurations:
-```env
-# Debugging
-VITE_LOG_LEVEL=debug
-# Ollama settings (example: 8K context, localhost port 11434)
-OLLAMA_API_BASE_URL=http://localhost:11434
-DEFAULT_NUM_CTX=8192
-```
-**Important**: Do not commit your `.env.local` file to version control. This file is already included in `.gitignore`.
---
## Run the Application
@@ -155,27 +129,30 @@ DEFAULT_NUM_CTX=8192
Use the provided NPM scripts:
```bash
-npm run dockerbuild # Development build
-npm run dockerbuild:prod # Production build
+npm run dockerbuild
```
Alternatively, use Docker commands directly:
```bash
-docker build . --target bolt-ai-development # Development build
-docker build . --target bolt-ai-production # Production build
+docker build . --target bolt-ai-development
```
2. **Run the Container**:
Use Docker Compose profiles to manage environments:
```bash
-docker-compose --profile development up # Development
-docker-compose --profile production up # Production
+docker-compose --profile development up
```
- With the development profile, changes to your code will automatically be reflected in the running container (hot reloading).
---
+### Entering API Keys
+All of your API keys can be configured directly in the application. Just select the provider you want from the dropdown and click the pencil icon to enter your API key.
---
### Update Your Local Version to the Latest
To keep your local version of bolt.diy up to date with the latest changes, follow these steps for your operating system:
@@ -236,4 +213,4 @@ Explore upcoming features and priorities on our [Roadmap](https://roadmap.sh/r/o
## FAQ
-For answers to common questions, visit our [FAQ Page](FAQ.md).
+For answers to common questions, issues, and to see a list of recommended models, visit our [FAQ Page](FAQ.md).


@@ -1 +1 @@
{ "commit": "d37c3736d5e73b0305f19d1bbc7c47a6dfbf7656" }
{ "commit": "6458211bed379396e797e6da2944f6627a428c40", "version": "0.0.3" }


@@ -1,5 +1,19 @@
# Frequently Asked Questions (FAQ)
## What are the best models for bolt.diy?
For the best experience with bolt.diy, we recommend using the following models:
- **Claude 3.5 Sonnet (old)**: Best overall coder, providing excellent results across all use cases
- **Gemini 2.0 Flash**: Exceptional speed while maintaining good performance
- **GPT-4o**: Strong alternative to Claude 3.5 Sonnet with comparable capabilities
- **DeepSeekCoder V2 236b**: Best open source model (available through OpenRouter, DeepSeek API, or self-hosted)
- **Qwen 2.5 Coder 32b**: Best model for self-hosting with reasonable hardware requirements
**Note**: Models with fewer than 7b parameters typically lack the capability to properly interact with bolt!
---
## How do I get the best results with bolt.diy?
- **Be specific about your stack**:
@@ -72,6 +86,12 @@ Local LLMs like Qwen-2.5-Coder are powerful for small applications but still exp
---
### **"Received structured exception #0xc0000005: access violation"**
If you are getting this, you are probably on Windows. The fix is generally to update the [Visual C++ Redistributable](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170).
---
### **"Miniflare or Wrangler errors in Windows"**
You will need to make sure you have the latest version of Visual Studio C++ installed (14.40.33816); more information is available at https://github.com/stackblitz-labs/bolt.diy/issues/19.