diff --git a/FAQ.md b/FAQ.md
index ecd4158..dcf250d 100644
--- a/FAQ.md
+++ b/FAQ.md
@@ -2,6 +2,18 @@
 # bolt.diy
 
+## Recommended Models for bolt.diy
+
+For the best experience with bolt.diy, we recommend using the following models:
+
+- **Claude 3.5 Sonnet (old)**: Best overall coder, providing excellent results across all use cases
+- **Gemini 2.0 Flash**: Exceptional speed while maintaining good performance
+- **GPT-4o**: Strong alternative to Claude 3.5 Sonnet with comparable capabilities
+- **DeepSeek Coder V2 236b**: Best open source model (available through OpenRouter, DeepSeek API, or self-hosted)
+- **Qwen 2.5 Coder 32b**: Best model for self-hosting with reasonable hardware requirements
+
+**Note**: Models with fewer than 7B parameters typically lack the capability to interact properly with bolt!
+
 ## FAQ
 
 ### How do I get the best results with bolt.diy?
@@ -34,14 +46,18 @@ We have seen this error a couple times and for some reason just restarting the D
 We promise you that we are constantly testing new PRs coming into bolt.diy and the preview is core functionality, so the application is not broken! When you get a blank preview or don’t get a preview, this is generally because the LLM hallucinated bad code or incorrect commands. We are working on making this more transparent so it is obvious. Sometimes the error will appear in developer console too so check that as well.
 
-### How to add a LLM:
-
-To make new LLMs available to use in this version of bolt.new, head on over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object that has the model ID for the name (get this from the provider's API documentation), a label for the frontend model dropdown, and the provider.
-
-By default, Anthropic, OpenAI, Groq, and Ollama are implemented as providers, but the YouTube video for this repo covers how to extend this to work with more providers if you wish!
-
-When you add a new model to the MODEL_LIST array, it will immediately be available to use when you run the app locally or reload it. For Ollama models, make sure you have the model installed already before trying to use it here!
-
 ### Everything works but the results are bad
 
 This goes to the point above about how local LLMs are getting very powerful but you still are going to see better (sometimes much better) results with the largest LLMs like GPT-4o, Claude 3.5 Sonnet, and DeepSeek Coder V2 236b. If you are using smaller LLMs like Qwen-2.5-Coder, consider it more experimental and educational at this point. It can build smaller applications really well, which is super impressive for a local LLM, but for larger scale applications you want to use the larger LLMs still!
+
+### Received structured exception #0xc0000005: access violation
+
+If you are getting this, you are probably on Windows. The fix is generally to update the [Visual C++ Redistributable](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170).
+
+### How do I add an LLM?
+
+To make new LLMs available to use in this version of bolt.diy, head on over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object that has the model ID for the name (get this from the provider's API documentation), a label for the frontend model dropdown, and the provider.
+
+By default, many providers are already implemented, but the YouTube video for this repo covers how to extend this to work with more providers if you wish!
+
+When you add a new model to the MODEL_LIST array, it will immediately be available to use when you run the app locally or reload it.
diff --git a/README.md b/README.md
index b55fda8..b0bd406 100644
--- a/README.md
+++ b/README.md
@@ -1,19 +1,32 @@
-[![bolt.diy: AI-Powered Full-Stack Web Development in the Browser](./public/social_preview_index.jpg)](https://bolt.diy)
-
 # bolt.diy (Previously oTToDev)
+[![bolt.diy: AI-Powered Full-Stack Web Development in the Browser](./public/social_preview_index.jpg)](https://bolt.diy)
 
 Welcome to bolt.diy, the official open source version of Bolt.new (previously known as oTToDev and bolt.new ANY LLM), which allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.
 
-Check the [bolt.diy Docs](https://stackblitz-labs.github.io/bolt.diy/) for more information. This documentation is still being updated after the transfer.
+Check the [bolt.diy Docs](https://stackblitz-labs.github.io/bolt.diy/) for more information.
+
+We have also launched an experimental agent called the "bolt.diy Expert" that can answer common questions about bolt.diy. Find it here on the [oTTomator Live Agent Studio](https://studio.ottomator.ai/).
 
 bolt.diy was originally started by [Cole Medin](https://www.youtube.com/@ColeMedin) but has quickly grown into a massive community effort to build the BEST open source AI coding assistant!
 
-## Join the community for bolt.diy!
+## Table of Contents
 
-https://thinktank.ottomator.ai
+- [Join the Community](#join-the-community)
+- [Requested Additions](#requested-additions)
+- [Features](#features)
+- [Setup](#setup)
+- [Run the Application](#run-the-application)
+- [Available Scripts](#available-scripts)
+- [Contributing](#contributing)
+- [Roadmap](#roadmap)
+- [FAQ](#faq)
+
+## Join the community
+
+[Join the bolt.diy community here, in the thinktank on ottomator.ai!](https://thinktank.ottomator.ai)
 
-## Requested Additions - Feel Free to Contribute!
+## Requested Additions
 
 - ✅ OpenRouter Integration (@coleam00)
 - ✅ Gemini Integration (@jonathands)
@@ -60,7 +73,7 @@
 - ⬜ Perplexity Integration
 - ⬜ Vertex AI Integration
 
-## bolt.diy Features
+## Features
 
 - **AI-powered full-stack web development** directly in your browser.
 - **Support for multiple LLMs** with an extensible architecture to integrate additional models.
@@ -70,7 +83,7 @@ https://thinktank.ottomator.ai
 - **Download projects as ZIP** for easy portability.
 - **Integration-ready Docker support** for a hassle-free setup.
 
-## Setup bolt.diy
+## Setup
 
 If you're new to installing software from GitHub, don't worry! If you encounter any issues, feel free to submit an "issue" using the provided links or improve this documentation by forking the repository, editing the instructions, and submitting a pull request. The following instruction will help you get the stable branch up and running on your local machine in no time.
 
@@ -305,4 +318,4 @@ Explore upcoming features and priorities on our [Roadmap](https://roadmap.sh/r/o
 
 ## FAQ
 
-For answers to common questions, visit our [FAQ Page](FAQ.md).
+For answers to common questions, issues, and to see a list of recommended models, visit our [FAQ Page](FAQ.md).
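To make the FAQ's description of adding an LLM concrete, here is what one static model entry might look like. This is a minimal sketch, not part of the patch: the model ID, label, and token limit are illustrative placeholders, while the field names match the `ModelInfo` entries visible in the `app/utils/constants.ts` changes later in this diff.

```ts
// Sketch: a hypothetical static model entry, shaped like the ModelInfo
// objects found in each provider's staticModels array in PROVIDER_LIST.
import type { ModelInfo } from '~/utils/types';

const exampleModel: ModelInfo = {
  name: 'gpt-4o-mini',       // model ID exactly as the provider's API expects it
  label: 'GPT-4o Mini',      // text shown in the frontend model dropdown
  provider: 'OpenAI',        // must match an existing provider name
  maxTokenAllowed: 8000,     // completion token budget used by the app
};
```

Appending such an object to the matching provider's `staticModels` array is all the FAQ's instructions require; the app picks it up on the next reload.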
diff --git a/app/components/chat/BaseChat.tsx b/app/components/chat/BaseChat.tsx index 2084cbb..5db6653 100644 --- a/app/components/chat/BaseChat.tsx +++ b/app/components/chat/BaseChat.tsx @@ -119,6 +119,9 @@ export const BaseChat = React.forwardRef( useEffect(() => { // Load API keys from cookies on component mount + + let parsedApiKeys: Record | undefined = {}; + try { const storedApiKeys = Cookies.get('apiKeys'); @@ -127,6 +130,7 @@ export const BaseChat = React.forwardRef( if (typeof parsedKeys === 'object' && parsedKeys !== null) { setApiKeys(parsedKeys); + parsedApiKeys = parsedKeys; } } } catch (error) { @@ -155,7 +159,7 @@ export const BaseChat = React.forwardRef( Cookies.remove('providers'); } - initializeModelList(providerSettings).then((modelList) => { + initializeModelList({ apiKeys: parsedApiKeys, providerSettings }).then((modelList) => { setModelList(modelList); }); diff --git a/app/components/settings/SettingsWindow.tsx b/app/components/settings/SettingsWindow.tsx index 541323f..1fffcf4 100644 --- a/app/components/settings/SettingsWindow.tsx +++ b/app/components/settings/SettingsWindow.tsx @@ -63,7 +63,7 @@ export const SettingsWindow = ({ open, onClose }: SettingsProps) => { variants={dialogBackdropVariants} /> - + LOCAL_PROVIDERS.includes(provider.name)) .map(async ([, provider]) => { const envVarName = - provider.name.toLowerCase() === 'ollama' - ? 'OLLAMA_API_BASE_URL' - : provider.name.toLowerCase() === 'lmstudio' - ? 'LMSTUDIO_API_BASE_URL' - : `REACT_APP_${provider.name.toUpperCase()}_URL`; + providerBaseUrlEnvKeys[provider.name].baseUrlKey || `REACT_APP_${provider.name.toUpperCase()}_URL`; // Access environment variables through import.meta.env - const url = import.meta.env[envVarName] || provider.settings.baseUrl || null; // Ensure baseUrl is used + let settingsUrl = provider.settings.baseUrl; + + if (settingsUrl && settingsUrl.trim().length === 0) { + settingsUrl = undefined; + } + + const url = settingsUrl || import.meta.env[envVarName] || null; // Ensure baseUrl is used console.log(`[Debug] Using URL for ${provider.name}:`, url, `(from ${envVarName})`); const status = await checkProviderStatus(url, provider.name); diff --git a/app/components/settings/providers/ProvidersTab.tsx b/app/components/settings/providers/ProvidersTab.tsx index 281b4c8..58c8dac 100644 --- a/app/components/settings/providers/ProvidersTab.tsx +++ b/app/components/settings/providers/ProvidersTab.tsx @@ -7,6 +7,7 @@ import { logStore } from '~/lib/stores/logs'; // Import a default fallback icon import DefaultIcon from '/icons/Default.svg'; // Adjust the path as necessary +import { providerBaseUrlEnvKeys } from '~/utils/constants'; export default function ProvidersTab() { const { providers, updateProviderSettings, isLocalModel } = useSettings(); @@ -33,9 +34,87 @@ export default function ProvidersTab() { newFilteredProviders.sort((a, b) => a.name.localeCompare(b.name)); - setFilteredProviders(newFilteredProviders); + // Split providers into regular and URL-configurable + const regular = newFilteredProviders.filter(p => !URL_CONFIGURABLE_PROVIDERS.includes(p.name)); + const urlConfigurable = newFilteredProviders.filter(p => URL_CONFIGURABLE_PROVIDERS.includes(p.name)); + + setFilteredProviders([...regular, ...urlConfigurable]); }, [providers, searchTerm, isLocalModel]); + const renderProviderCard = (provider: IProviderConfig) => { + const envBaseUrlKey = providerBaseUrlEnvKeys[provider.name].baseUrlKey; + const envBaseUrl = envBaseUrlKey ? 
import.meta.env[envBaseUrlKey] : undefined; + const isUrlConfigurable = URL_CONFIGURABLE_PROVIDERS.includes(provider.name); + + return ( +
+
+
+ { + e.currentTarget.src = DefaultIcon; + }} + alt={`${provider.name} icon`} + className="w-6 h-6 dark:invert" + /> + {provider.name} +
+ { + updateProviderSettings(provider.name, { ...provider.settings, enabled }); + + if (enabled) { + logStore.logProvider(`Provider ${provider.name} enabled`, { provider: provider.name }); + } else { + logStore.logProvider(`Provider ${provider.name} disabled`, { provider: provider.name }); + } + }} + /> +
+ {isUrlConfigurable && provider.settings.enabled && ( +
+ {envBaseUrl && ( + + )} + + { + let newBaseUrl: string | undefined = e.target.value; + + if (newBaseUrl && newBaseUrl.trim().length === 0) { + newBaseUrl = undefined; + } + + updateProviderSettings(provider.name, { ...provider.settings, baseUrl: newBaseUrl }); + logStore.logProvider(`Base URL updated for ${provider.name}`, { + provider: provider.name, + baseUrl: newBaseUrl, + }); + }} + placeholder={`Enter ${provider.name} base URL`} + className="w-full bg-white dark:bg-bolt-elements-background-depth-4 relative px-2 py-1.5 rounded-md focus:outline-none placeholder-bolt-elements-textTertiary text-bolt-elements-textPrimary dark:text-bolt-elements-textPrimary border border-bolt-elements-borderColor" + /> +
+ )} +
+ ); + }; + + const regularProviders = filteredProviders.filter(p => !URL_CONFIGURABLE_PROVIDERS.includes(p.name)); + const urlConfigurableProviders = filteredProviders.filter(p => URL_CONFIGURABLE_PROVIDERS.includes(p.name)); + return (
@@ -47,60 +126,24 @@ export default function ProvidersTab() { className="w-full bg-white dark:bg-bolt-elements-background-depth-4 relative px-2 py-1.5 rounded-md focus:outline-none placeholder-bolt-elements-textTertiary text-bolt-elements-textPrimary dark:text-bolt-elements-textPrimary border border-bolt-elements-borderColor" />
- {filteredProviders.map((provider) => ( -
-
-
- { - // Fallback to default icon on error - e.currentTarget.src = DefaultIcon; - }} - alt={`${provider.name} icon`} - className="w-6 h-6 dark:invert" - /> - {provider.name} -
- { - updateProviderSettings(provider.name, { ...provider.settings, enabled }); - if (enabled) { - logStore.logProvider(`Provider ${provider.name} enabled`, { provider: provider.name }); - } else { - logStore.logProvider(`Provider ${provider.name} disabled`, { provider: provider.name }); - } - }} - /> + {/* Regular Providers Grid */} +
+ {regularProviders.map(renderProviderCard)} +
+ + {/* URL Configurable Providers Section */} + {urlConfigurableProviders.length > 0 && ( +
+

Experimental Providers

+

+ These providers are experimental and allow you to run AI models locally or connect to your own infrastructure. They require additional setup but offer more flexibility. +

+
+ {urlConfigurableProviders.map(renderProviderCard)}
- {/* Base URL input for configurable providers */} - {URL_CONFIGURABLE_PROVIDERS.includes(provider.name) && provider.settings.enabled && ( -
- - { - const newBaseUrl = e.target.value; - updateProviderSettings(provider.name, { ...provider.settings, baseUrl: newBaseUrl }); - logStore.logProvider(`Base URL updated for ${provider.name}`, { - provider: provider.name, - baseUrl: newBaseUrl, - }); - }} - placeholder={`Enter ${provider.name} base URL`} - className="w-full bg-white dark:bg-bolt-elements-background-depth-4 relative px-2 py-1.5 rounded-md focus:outline-none placeholder-bolt-elements-textTertiary text-bolt-elements-textPrimary dark:text-bolt-elements-textPrimary border border-bolt-elements-borderColor" - /> -
- )}
- ))} + )}
   );
-}
+}
\ No newline at end of file
diff --git a/app/entry.server.tsx b/app/entry.server.tsx
index a44917f..5e92d21 100644
--- a/app/entry.server.tsx
+++ b/app/entry.server.tsx
@@ -14,7 +14,7 @@ export default async function handleRequest(
   remixContext: EntryContext,
   _loadContext: AppLoadContext,
 ) {
-  await initializeModelList();
+  await initializeModelList({});
 
   const readable = await renderToReadableStream(<RemixServer context={remixContext} url={request.url} />, {
     signal: request.signal,
diff --git a/app/lib/.server/llm/api-key.ts b/app/lib/.server/llm/api-key.ts
index e82d08e..4b0fc53 100644
--- a/app/lib/.server/llm/api-key.ts
+++ b/app/lib/.server/llm/api-key.ts
@@ -1,8 +1,6 @@
-/*
- * @ts-nocheck
- * Preventing TS checks with files presented in the video for a better presentation.
- */
 import { env } from 'node:process';
+import type { IProviderSetting } from '~/types/model';
+import { getProviderBaseUrlAndKey } from '~/utils/constants';
 
 export function getAPIKey(cloudflareEnv: Env, provider: string, userApiKeys?: Record<string, string>) {
   /**
@@ -15,7 +13,20 @@ export function getAPIKey(cloudflareEnv: Env, provider: string, userApiKeys?: Re
     return userApiKeys[provider];
   }
 
-  // Fall back to environment variables
+  const { apiKey } = getProviderBaseUrlAndKey({
+    provider,
+    apiKeys: userApiKeys,
+    providerSettings: undefined,
+    serverEnv: cloudflareEnv as any,
+    defaultBaseUrlKey: '',
+    defaultApiTokenKey: '',
+  });
+
+  if (apiKey) {
+    return apiKey;
+  }
+
+  // Fall back to hardcoded environment variable names
   switch (provider) {
     case 'Anthropic':
       return env.ANTHROPIC_API_KEY || cloudflareEnv.ANTHROPIC_API_KEY;
@@ -50,16 +61,43 @@ export function getAPIKey(cloudflareEnv: Env, provider: string, userApiKeys?: Re
   }
 }
 
-export function getBaseURL(cloudflareEnv: Env, provider: string) {
+export function getBaseURL(cloudflareEnv: Env, provider: string, providerSettings?: Record<string, IProviderSetting>) {
+  const { baseUrl } = getProviderBaseUrlAndKey({
+    provider,
+    apiKeys: {},
+    providerSettings,
+    serverEnv: cloudflareEnv as any,
+    defaultBaseUrlKey: '',
+    defaultApiTokenKey: '',
+  });
+
+  if (baseUrl) {
+    return baseUrl;
+  }
+
+  let settingBaseUrl = providerSettings?.[provider]?.baseUrl;
+
+  if (settingBaseUrl && settingBaseUrl.trim().length === 0) {
+    settingBaseUrl = undefined;
+  }
+
   switch (provider) {
     case 'Together':
-      return env.TOGETHER_API_BASE_URL || cloudflareEnv.TOGETHER_API_BASE_URL || 'https://api.together.xyz/v1';
+      return (
+        settingBaseUrl ||
+        env.TOGETHER_API_BASE_URL ||
+        cloudflareEnv.TOGETHER_API_BASE_URL ||
+        'https://api.together.xyz/v1'
+      );
     case 'OpenAILike':
-      return env.OPENAI_LIKE_API_BASE_URL || cloudflareEnv.OPENAI_LIKE_API_BASE_URL;
+      return settingBaseUrl || env.OPENAI_LIKE_API_BASE_URL || cloudflareEnv.OPENAI_LIKE_API_BASE_URL;
     case 'LMStudio':
-      return env.LMSTUDIO_API_BASE_URL || cloudflareEnv.LMSTUDIO_API_BASE_URL || 'http://localhost:1234';
+      return (
+        settingBaseUrl || env.LMSTUDIO_API_BASE_URL || cloudflareEnv.LMSTUDIO_API_BASE_URL || 'http://localhost:1234'
+      );
     case 'Ollama': {
-      let baseUrl = env.OLLAMA_API_BASE_URL || cloudflareEnv.OLLAMA_API_BASE_URL || 'http://localhost:11434';
+      let baseUrl =
+        settingBaseUrl || env.OLLAMA_API_BASE_URL || cloudflareEnv.OLLAMA_API_BASE_URL || 'http://localhost:11434';
 
       if (env.RUNNING_IN_DOCKER === 'true') {
         baseUrl = baseUrl.replace('localhost', 'host.docker.internal');
diff --git a/app/lib/.server/llm/model.ts b/app/lib/.server/llm/model.ts
index 1a5aab7..308e27d 100644
--- a/app/lib/.server/llm/model.ts
+++ b/app/lib/.server/llm/model.ts
@@ -140,7 +140,7 @@ export function getPerplexityModel(apiKey:
OptionalApiKey, model: string) { export function getModel( provider: string, model: string, - env: Env, + serverEnv: Env, apiKeys?: Record, providerSettings?: Record, ) { @@ -148,9 +148,12 @@ export function getModel( * let apiKey; // Declare first * let baseURL; */ + // console.log({provider,model}); - const apiKey = getAPIKey(env, provider, apiKeys); // Then assign - const baseURL = providerSettings?.[provider].baseUrl || getBaseURL(env, provider); + const apiKey = getAPIKey(serverEnv, provider, apiKeys); // Then assign + const baseURL = getBaseURL(serverEnv, provider, providerSettings); + + // console.log({apiKey,baseURL}); switch (provider) { case 'Anthropic': diff --git a/app/lib/.server/llm/stream-text.ts b/app/lib/.server/llm/stream-text.ts index 74cdd9d..6bbf568 100644 --- a/app/lib/.server/llm/stream-text.ts +++ b/app/lib/.server/llm/stream-text.ts @@ -151,10 +151,13 @@ export async function streamText(props: { providerSettings?: Record; promptId?: string; }) { - const { messages, env, options, apiKeys, files, providerSettings, promptId } = props; + const { messages, env: serverEnv, options, apiKeys, files, providerSettings, promptId } = props; + + // console.log({serverEnv}); + let currentModel = DEFAULT_MODEL; let currentProvider = DEFAULT_PROVIDER.name; - const MODEL_LIST = await getModelList(apiKeys || {}, providerSettings); + const MODEL_LIST = await getModelList({ apiKeys, providerSettings, serverEnv: serverEnv as any }); const processedMessages = messages.map((message) => { if (message.role === 'user') { const { model, provider, content } = extractPropertiesFromMessage(message); @@ -196,7 +199,7 @@ export async function streamText(props: { } return _streamText({ - model: getModel(currentProvider, currentModel, env, apiKeys, providerSettings) as any, + model: getModel(currentProvider, currentModel, serverEnv, apiKeys, providerSettings) as any, system: systemPrompt, maxTokens: dynamicMaxTokens, messages: convertToCoreMessages(processedMessages as any), diff --git a/app/lib/hooks/useEditChatDescription.ts b/app/lib/hooks/useEditChatDescription.ts index 5230d6c..25147a0 100644 --- a/app/lib/hooks/useEditChatDescription.ts +++ b/app/lib/hooks/useEditChatDescription.ts @@ -92,6 +92,7 @@ export function useEditChatDescription({ } const lengthValid = trimmedDesc.length > 0 && trimmedDesc.length <= 100; + // Allow letters, numbers, spaces, and common punctuation but exclude characters that could cause issues const characterValid = /^[a-zA-Z0-9\s\-_.,!?()[\]{}'"]+$/.test(trimmedDesc); diff --git a/app/types/model.ts b/app/types/model.ts index 3bfbfde..b449363 100644 --- a/app/types/model.ts +++ b/app/types/model.ts @@ -3,7 +3,12 @@ import type { ModelInfo } from '~/utils/types'; export type ProviderInfo = { staticModels: ModelInfo[]; name: string; - getDynamicModels?: (apiKeys?: Record, providerSettings?: IProviderSetting) => Promise; + getDynamicModels?: ( + providerName: string, + apiKeys?: Record, + providerSettings?: IProviderSetting, + serverEnv?: Record, + ) => Promise; getApiKeyLink?: string; labelForGetApiKey?: string; icon?: string; diff --git a/app/utils/constants.ts b/app/utils/constants.ts index 6425995..dca3320 100644 --- a/app/utils/constants.ts +++ b/app/utils/constants.ts @@ -220,7 +220,6 @@ const PROVIDER_LIST: ProviderInfo[] = [ ], getApiKeyLink: 'https://huggingface.co/settings/tokens', }, - { name: 'OpenAI', staticModels: [ @@ -233,7 +232,10 @@ const PROVIDER_LIST: ProviderInfo[] = [ }, { name: 'xAI', - staticModels: [{ name: 'grok-beta', label: 'xAI Grok Beta', 
provider: 'xAI', maxTokenAllowed: 8000 }], + staticModels: [ + { name: 'grok-beta', label: 'xAI Grok Beta', provider: 'xAI', maxTokenAllowed: 8000 }, + { name: 'grok-2-1212', label: 'xAI Grok2 1212', provider: 'xAI', maxTokenAllowed: 8000 }, + ], getApiKeyLink: 'https://docs.x.ai/docs/quickstart#creating-an-api-key', }, { @@ -319,44 +321,130 @@ const PROVIDER_LIST: ProviderInfo[] = [ }, ]; +export const providerBaseUrlEnvKeys: Record = { + Anthropic: { + apiTokenKey: 'ANTHROPIC_API_KEY', + }, + OpenAI: { + apiTokenKey: 'OPENAI_API_KEY', + }, + Groq: { + apiTokenKey: 'GROQ_API_KEY', + }, + HuggingFace: { + apiTokenKey: 'HuggingFace_API_KEY', + }, + OpenRouter: { + apiTokenKey: 'OPEN_ROUTER_API_KEY', + }, + Google: { + apiTokenKey: 'GOOGLE_GENERATIVE_AI_API_KEY', + }, + OpenAILike: { + baseUrlKey: 'OPENAI_LIKE_API_BASE_URL', + apiTokenKey: 'OPENAI_LIKE_API_KEY', + }, + Together: { + baseUrlKey: 'TOGETHER_API_BASE_URL', + apiTokenKey: 'TOGETHER_API_KEY', + }, + Deepseek: { + apiTokenKey: 'DEEPSEEK_API_KEY', + }, + Mistral: { + apiTokenKey: 'MISTRAL_API_KEY', + }, + LMStudio: { + baseUrlKey: 'LMSTUDIO_API_BASE_URL', + }, + xAI: { + apiTokenKey: 'XAI_API_KEY', + }, + Cohere: { + apiTokenKey: 'COHERE_API_KEY', + }, + Perplexity: { + apiTokenKey: 'PERPLEXITY_API_KEY', + }, + Ollama: { + baseUrlKey: 'OLLAMA_API_BASE_URL', + }, +}; + +export const getProviderBaseUrlAndKey = (options: { + provider: string; + apiKeys?: Record; + providerSettings?: IProviderSetting; + serverEnv?: Record; + defaultBaseUrlKey: string; + defaultApiTokenKey: string; +}) => { + const { provider, apiKeys, providerSettings, serverEnv, defaultBaseUrlKey, defaultApiTokenKey } = options; + let settingsBaseUrl = providerSettings?.baseUrl; + + if (settingsBaseUrl && settingsBaseUrl.length == 0) { + settingsBaseUrl = undefined; + } + + const baseUrlKey = providerBaseUrlEnvKeys[provider]?.baseUrlKey || defaultBaseUrlKey; + const baseUrl = settingsBaseUrl || serverEnv?.[baseUrlKey] || process.env[baseUrlKey] || import.meta.env[baseUrlKey]; + + const apiTokenKey = providerBaseUrlEnvKeys[provider]?.apiTokenKey || defaultApiTokenKey; + const apiKey = + apiKeys?.[provider] || serverEnv?.[apiTokenKey] || process.env[apiTokenKey] || import.meta.env[apiTokenKey]; + + return { + baseUrl, + apiKey, + }; +}; export const DEFAULT_PROVIDER = PROVIDER_LIST[0]; const staticModels: ModelInfo[] = PROVIDER_LIST.map((p) => p.staticModels).flat(); export let MODEL_LIST: ModelInfo[] = [...staticModels]; -export async function getModelList( - apiKeys: Record, - providerSettings?: Record, -) { +export async function getModelList(options: { + apiKeys?: Record; + providerSettings?: Record; + serverEnv?: Record; +}) { + const { apiKeys, providerSettings, serverEnv } = options; + MODEL_LIST = [ ...( await Promise.all( PROVIDER_LIST.filter( (p): p is ProviderInfo & { getDynamicModels: () => Promise } => !!p.getDynamicModels, - ).map((p) => p.getDynamicModels(apiKeys, providerSettings?.[p.name])), + ).map((p) => p.getDynamicModels(p.name, apiKeys, providerSettings?.[p.name], serverEnv)), ) ).flat(), ...staticModels, ]; + return MODEL_LIST; } -async function getTogetherModels(apiKeys?: Record, settings?: IProviderSetting): Promise { +async function getTogetherModels( + name: string, + apiKeys?: Record, + settings?: IProviderSetting, + serverEnv: Record = {}, +): Promise { try { - const baseUrl = settings?.baseUrl || import.meta.env.TOGETHER_API_BASE_URL || ''; - const provider = 'Together'; + const { baseUrl, apiKey } = getProviderBaseUrlAndKey({ + provider: 
name, + apiKeys, + providerSettings: settings, + serverEnv, + defaultBaseUrlKey: 'TOGETHER_API_BASE_URL', + defaultApiTokenKey: 'TOGETHER_API_KEY', + }); if (!baseUrl) { return []; } - let apiKey = import.meta.env.OPENAI_LIKE_API_KEY ?? ''; - - if (apiKeys && apiKeys[provider]) { - apiKey = apiKeys[provider]; - } - if (!apiKey) { return []; } @@ -374,7 +462,7 @@ async function getTogetherModels(apiKeys?: Record, settings?: IP label: `${m.display_name} - in:$${m.pricing.input.toFixed( 2, )} out:$${m.pricing.output.toFixed(2)} - context ${Math.floor(m.context_length / 1000)}k`, - provider, + provider: name, maxTokenAllowed: 8000, })); } catch (e) { @@ -383,24 +471,40 @@ async function getTogetherModels(apiKeys?: Record, settings?: IP } } -const getOllamaBaseUrl = (settings?: IProviderSetting) => { - const defaultBaseUrl = settings?.baseUrl || import.meta.env.OLLAMA_API_BASE_URL || 'http://localhost:11434'; +const getOllamaBaseUrl = (name: string, settings?: IProviderSetting, serverEnv: Record = {}) => { + const { baseUrl } = getProviderBaseUrlAndKey({ + provider: name, + providerSettings: settings, + serverEnv, + defaultBaseUrlKey: 'OLLAMA_API_BASE_URL', + defaultApiTokenKey: '', + }); // Check if we're in the browser if (typeof window !== 'undefined') { // Frontend always uses localhost - return defaultBaseUrl; + return baseUrl; } // Backend: Check if we're running in Docker const isDocker = process.env.RUNNING_IN_DOCKER === 'true'; - return isDocker ? defaultBaseUrl.replace('localhost', 'host.docker.internal') : defaultBaseUrl; + return isDocker ? baseUrl.replace('localhost', 'host.docker.internal') : baseUrl; }; -async function getOllamaModels(apiKeys?: Record, settings?: IProviderSetting): Promise { +async function getOllamaModels( + name: string, + _apiKeys?: Record, + settings?: IProviderSetting, + serverEnv: Record = {}, +): Promise { try { - const baseUrl = getOllamaBaseUrl(settings); + const baseUrl = getOllamaBaseUrl(name, settings, serverEnv); + + if (!baseUrl) { + return []; + } + const response = await fetch(`${baseUrl}/api/tags`); const data = (await response.json()) as OllamaApiResponse; @@ -419,22 +523,25 @@ async function getOllamaModels(apiKeys?: Record, settings?: IPro } async function getOpenAILikeModels( + name: string, apiKeys?: Record, settings?: IProviderSetting, + serverEnv: Record = {}, ): Promise { try { - const baseUrl = settings?.baseUrl || import.meta.env.OPENAI_LIKE_API_BASE_URL || ''; + const { baseUrl, apiKey } = getProviderBaseUrlAndKey({ + provider: name, + apiKeys, + providerSettings: settings, + serverEnv, + defaultBaseUrlKey: 'OPENAI_LIKE_API_BASE_URL', + defaultApiTokenKey: 'OPENAI_LIKE_API_KEY', + }); if (!baseUrl) { return []; } - let apiKey = ''; - - if (apiKeys && apiKeys.OpenAILike) { - apiKey = apiKeys.OpenAILike; - } - const response = await fetch(`${baseUrl}/models`, { headers: { Authorization: `Bearer ${apiKey}`, @@ -445,7 +552,7 @@ async function getOpenAILikeModels( return res.data.map((model: any) => ({ name: model.id, label: model.id, - provider: 'OpenAILike', + provider: name, })); } catch (e) { console.error('Error getting OpenAILike models:', e); @@ -486,9 +593,26 @@ async function getOpenRouterModels(): Promise { })); } -async function getLMStudioModels(_apiKeys?: Record, settings?: IProviderSetting): Promise { +async function getLMStudioModels( + name: string, + apiKeys?: Record, + settings?: IProviderSetting, + serverEnv: Record = {}, +): Promise { try { - const baseUrl = settings?.baseUrl || import.meta.env.LMSTUDIO_API_BASE_URL || 
'http://localhost:1234';
+    const { baseUrl } = getProviderBaseUrlAndKey({
+      provider: name,
+      apiKeys,
+      providerSettings: settings,
+      serverEnv,
+      defaultBaseUrlKey: 'LMSTUDIO_API_BASE_URL',
+      defaultApiTokenKey: '',
+    });
+
+    if (!baseUrl) {
+      return [];
+    }
+
     const response = await fetch(`${baseUrl}/v1/models`);
     const data = (await response.json()) as any;
@@ -503,29 +627,37 @@
   }
 }
 
-async function initializeModelList(providerSettings?: Record<string, IProviderSetting>): Promise<ModelInfo[]> {
-  let apiKeys: Record<string, string> = {};
+async function initializeModelList(options: {
+  env?: Record<string, string>;
+  providerSettings?: Record<string, IProviderSetting>;
+  apiKeys?: Record<string, string>;
+}): Promise<ModelInfo[]> {
+  const { providerSettings, apiKeys: providedApiKeys, env } = options;
+  let apiKeys: Record<string, string> = providedApiKeys || {};
 
-  try {
-    const storedApiKeys = Cookies.get('apiKeys');
+  if (!providedApiKeys) {
+    try {
+      const storedApiKeys = Cookies.get('apiKeys');
 
-    if (storedApiKeys) {
-      const parsedKeys = JSON.parse(storedApiKeys);
+      if (storedApiKeys) {
+        const parsedKeys = JSON.parse(storedApiKeys);
 
-      if (typeof parsedKeys === 'object' && parsedKeys !== null) {
-        apiKeys = parsedKeys;
+        if (typeof parsedKeys === 'object' && parsedKeys !== null) {
+          apiKeys = parsedKeys;
+        }
       }
+    } catch (error: any) {
+      logStore.logError('Failed to fetch API keys from cookies', error);
+      logger.warn(`Failed to fetch apikeys from cookies: ${error?.message}`);
     }
-  } catch (error: any) {
-    logStore.logError('Failed to fetch API keys from cookies', error);
-    logger.warn(`Failed to fetch apikeys from cookies: ${error?.message}`);
   }
 
+
   MODEL_LIST = [
     ...(
       await Promise.all(
         PROVIDER_LIST.filter(
           (p): p is ProviderInfo & { getDynamicModels: () => Promise<ModelInfo[]> } => !!p.getDynamicModels,
-        ).map((p) => p.getDynamicModels(apiKeys, providerSettings?.[p.name])),
+        ).map((p) => p.getDynamicModels(p.name, apiKeys, providerSettings?.[p.name], env)),
       )
     ).flat(),
     ...staticModels,
@@ -534,6 +666,7 @@
diff --git a/docs/docs/FAQ.md b/docs/docs/FAQ.md
--- a/docs/docs/FAQ.md
+++ b/docs/docs/FAQ.md
+<details>
+<summary>What are the best models for bolt.diy?</summary>
+
+For the best experience with bolt.diy, we recommend using the following models:
+
+- **Claude 3.5 Sonnet (old)**: Best overall coder, providing excellent results across all use cases
+- **Gemini 2.0 Flash**: Exceptional speed while maintaining good performance
+- **GPT-4o**: Strong alternative to Claude 3.5 Sonnet with comparable capabilities
+- **DeepSeek Coder V2 236b**: Best open source model (available through OpenRouter, DeepSeek API, or self-hosted)
+- **Qwen 2.5 Coder 32b**: Best model for self-hosting with reasonable hardware requirements
+
+**Note**: Models with fewer than 7B parameters typically lack the capability to interact properly with bolt!
+
+</details>
+
+<details>
+<summary>How do I get the best results with bolt.diy?</summary>
+
 - **Be specific about your stack**: Mention the frameworks or libraries you want to use (e.g., Astro, Tailwind, ShadCN) in your initial prompt. This ensures that bolt.diy scaffolds the project according to your preferences.
@@ -14,66 +29,62 @@
 - **Batch simple instructions**: Combine simple tasks into a single prompt to save time and reduce API credit consumption. For example: *"Change the color scheme, add mobile responsiveness, and restart the dev server."*
+
+</details>
 
----
-
-## How do I contribute to bolt.diy?
+<details>
+<summary>How do I contribute to bolt.diy?</summary>
 
 Check out our [Contribution Guide](CONTRIBUTING.md) for more details on how to get involved!
+
+</details>
 
----
-
-## What are the future plans for bolt.diy?
+<details>
+<summary>What are the future plans for bolt.diy?</summary>
 
 Visit our [Roadmap](https://roadmap.sh/r/ottodev-roadmap-2ovzo) for the latest updates. New features and improvements are on the way!
+
+</details>
 
----
-
-## Why are there so many open issues/pull requests?
+<details>
+<summary>Why are there so many open issues/pull requests?</summary>
 
 bolt.diy began as a small showcase project on @ColeMedin's YouTube channel to explore editing open-source projects with local LLMs. However, it quickly grew into a massive community effort!
 
-We’re forming a team of maintainers to manage demand and streamline issue resolution. The maintainers are rockstars, and we’re also exploring partnerships to help the project thrive.
+We're forming a team of maintainers to manage demand and streamline issue resolution. The maintainers are rockstars, and we're also exploring partnerships to help the project thrive.
+
+</details>
 
----
-
-## How do local LLMs compare to larger models like Claude 3.5 Sonnet for bolt.diy?
+<details>
+<summary>How do local LLMs compare to larger models like Claude 3.5 Sonnet for bolt.diy?</summary>
 
 While local LLMs are improving rapidly, larger models like GPT-4o, Claude 3.5 Sonnet, and DeepSeek Coder V2 236b still offer the best results for complex applications. Our ongoing focus is to improve prompts, agents, and the platform to better support smaller local LLMs.
+
+</details>
 
----
-
-## Common Errors and Troubleshooting
+<details>
+<summary>Common Errors and Troubleshooting</summary>
 
 ### **"There was an error processing this request"**
 
 This generic error message means something went wrong. Check both:
 
 - The terminal (if you started the app with Docker or `pnpm`).
 - The developer console in your browser (press `F12` or right-click > *Inspect*, then go to the *Console* tab).
 
----
-
 ### **"x-api-key header missing"**
 
 This error is sometimes resolved by restarting the Docker container.
-If that doesn’t work, try switching from Docker to `pnpm` or vice versa. We’re actively investigating this issue.
-
----
+If that doesn't work, try switching from Docker to `pnpm` or vice versa. We're actively investigating this issue.
 
 ### **Blank preview when running the app**
 
 A blank preview often occurs due to hallucinated bad code or incorrect commands. To troubleshoot:
 
 - Check the developer console for errors.
-- Remember, previews are core functionality, so the app isn’t broken! We’re working on making these errors more transparent.
-
----
+- Remember, previews are core functionality, so the app isn't broken! We're working on making these errors more transparent.
 
 ### **"Everything works, but the results are bad"**
 
 Local LLMs like Qwen-2.5-Coder are powerful for small applications but still experimental for larger projects. For better results, consider using larger models like GPT-4o, Claude 3.5 Sonnet, or DeepSeek Coder V2 236b.
 
----
+### **"Received structured exception #0xc0000005: access violation"**
+
+If you are getting this, you are probably on Windows. The fix is generally to update the [Visual C++ Redistributable](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170).
 
 ### **"Miniflare or Wrangler errors in Windows"**
 
 You will need to make sure you have the latest version of Visual Studio C++ installed (14.40.33816), more information here https://github.com/stackblitz-labs/bolt.diy/issues/19.
+
+</details>
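One note on the provider plumbing added in `app/utils/constants.ts` earlier in this diff: `getProviderBaseUrlAndKey` resolves a provider's base URL by precedence, which is useful to keep in mind when debugging connection issues. Below is a minimal sketch of that lookup order under the same assumptions as the diff; the function name `resolveBaseUrl` is illustrative only, and the real helper also falls back to `import.meta.env` and resolves the API key analogously.

```ts
// Sketch of the base-URL precedence used by getProviderBaseUrlAndKey:
// user settings -> server env -> process env. Empty or whitespace-only
// settings values are treated as unset so the env fallbacks still apply.
function resolveBaseUrl(
  settingsBaseUrl: string | undefined,
  serverEnv: Record<string, string>,
  envKey: string, // e.g. 'OLLAMA_API_BASE_URL', looked up via providerBaseUrlEnvKeys
): string | undefined {
  const fromSettings = settingsBaseUrl?.trim() ? settingsBaseUrl : undefined;

  return fromSettings || serverEnv[envKey] || process.env[envKey];
}
```

This ordering means a user-entered base URL in the settings UI always wins over environment configuration, which matches the behavior the ProvidersTab changes above rely on.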
---
diff --git a/docs/docs/index.md b/docs/docs/index.md
index 389e74f..ef08de6 100644
--- a/docs/docs/index.md
+++ b/docs/docs/index.md
@@ -1,6 +1,21 @@
 # Welcome to bolt diy
 
 bolt.diy allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.
 
+## Table of Contents
+
+- [Join the community!](#join-the-community)
+- [What's bolt.diy](#whats-boltdiy)
+- [What Makes bolt.diy Different](#what-makes-boltdiy-different)
+- [Setup](#setup)
+- [Run with Docker](#run-with-docker)
+  - [Using Helper Scripts](#1a-using-helper-scripts)
+  - [Direct Docker Build Commands](#1b-direct-docker-build-commands-alternative-to-using-npm-scripts)
+  - [Docker Compose with Profiles](#2-docker-compose-with-profiles-to-run-the-container)
+- [Run Without Docker](#run-without-docker)
+- [Adding New LLMs](#adding-new-llms)
+- [Available Scripts](#available-scripts)
+- [Development](#development)
+- [Tips and Tricks](#tips-and-tricks)
+
 ---
 
 ## Join the community!
diff --git a/pre-start.cjs b/pre-start.cjs
index e6b7001..841e3eb 100644
--- a/pre-start.cjs
+++ b/pre-start.cjs
@@ -7,4 +7,5 @@ console.log(`
 ★═══════════════════════════════════════★
 `);
 console.log('📍 Current Commit Version:', commit);
+console.log('  Please wait until the URL appears here');
 console.log('★═══════════════════════════════════════★');
\ No newline at end of file
diff --git a/vite.config.ts b/vite.config.ts
index f18b8b9..b2f795d 100644
--- a/vite.config.ts
+++ b/vite.config.ts
@@ -28,7 +28,7 @@ export default defineConfig((config) => {
     chrome129IssuePlugin(),
     config.mode === 'production' && optimizeCssModules({ apply: 'build' }),
   ],
-  envPrefix: ["VITE_", "OPENAI_LIKE_API_", "OLLAMA_API_BASE_URL", "LMSTUDIO_API_BASE_URL","TOGETHER_API_BASE_URL"],
+  envPrefix: ["VITE_","OPENAI_LIKE_API_BASE_URL", "OLLAMA_API_BASE_URL", "LMSTUDIO_API_BASE_URL","TOGETHER_API_BASE_URL"],
   css: {
     preprocessorOptions: {
       scss: {