Merge branch 'main' into bolt-shell-race-condition

Anirban Kar 2024-12-19 14:50:24 +05:30 committed by GitHub
commit 69c58c1410
18 changed files with 407 additions and 444 deletions

FAQ.md

@@ -2,6 +2,18 @@
# bolt.diy
## Recommended Models for bolt.diy
For the best experience with bolt.diy, we recommend using the following models:
- **Claude 3.5 Sonnet (old)**: Best overall coder, providing excellent results across all use cases
- **Gemini 2.0 Flash**: Exceptional speed while maintaining good performance
- **GPT-4o**: Strong alternative to Claude 3.5 Sonnet with comparable capabilities
- **DeepSeekCoder V2 236b**: Best open source model (available through OpenRouter, DeepSeek API, or self-hosted)
- **Qwen 2.5 Coder 32b**: Best model for self-hosting with reasonable hardware requirements
**Note**: Models with fewer than 7b parameters typically lack the capability to properly interact with bolt!
## FAQ
### How do I get the best results with bolt.diy?
@@ -34,14 +46,18 @@ We have seen this error a couple times and for some reason just restarting the D
We promise that we are constantly testing new PRs coming into bolt.diy, and the preview is core functionality, so the application is not broken! When you get a blank preview or no preview at all, this is generally because the LLM hallucinated bad code or incorrect commands. We are working on making this more transparent so it is obvious. Sometimes the error will appear in the developer console too, so check that as well.
### How to add a LLM:
To make new LLMs available to use in this version of bolt.new, head on over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object that has the model ID for the name (get this from the provider's API documentation), a label for the frontend model dropdown, and the provider.
By default, Anthropic, OpenAI, Groq, and Ollama are implemented as providers, but the YouTube video for this repo covers how to extend this to work with more providers if you wish!
When you add a new model to the MODEL_LIST array, it will immediately be available to use when you run the app locally or reload it. For Ollama models, make sure you have the model installed already before trying to use it here!
### Everything works but the results are bad
This goes to the point above about how local LLMs are getting very powerful, but you are still going to see better (sometimes much better) results with the largest LLMs like GPT-4o, Claude 3.5 Sonnet, and DeepSeek Coder V2 236b. If you are using smaller LLMs like Qwen-2.5-Coder, consider them more experimental and educational at this point. They can build smaller applications really well, which is super impressive for a local LLM, but for larger-scale applications you still want to use the larger LLMs!
### Received structured exception #0xc0000005: access violation
If you are getting this, you are probably on Windows. The fix is generally to update the [Visual C++ Redistributable](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170)
### How to add an LLM:
To make new LLMs available to use in this version of bolt.diy, head on over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object whose name is the model ID (get this from the provider's API documentation), plus a label for the frontend model dropdown and the provider (see the sketch below).
By default, many providers are already implemented, but the YouTube video for this repo covers how to extend this to work with more providers if you wish!
When you add a new model to the MODEL_LIST array, it will immediately be available to use when you run the app locally or reload it.
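For illustration, here is a minimal sketch of one such entry. The field names follow the `ModelInfo` shape used in `app/utils/constants.ts`; the specific model ID is only an example, not a recommendation:

```ts
import type { ModelInfo } from '~/utils/types';

// Hypothetical MODEL_LIST entry: `name` is the model ID from the provider's
// API docs, `label` is what the frontend dropdown displays, and `provider`
// must match an implemented provider name.
const exampleModel: ModelInfo = {
  name: 'claude-3-5-sonnet-20241022', // model ID per the provider's API documentation
  label: 'Claude 3.5 Sonnet', // shown in the model dropdown
  provider: 'Anthropic', // must match an implemented provider
  maxTokenAllowed: 8000, // token cap, as used elsewhere in constants.ts
};
```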

README.md

@@ -4,7 +4,9 @@
Welcome to bolt.diy, the official open source version of Bolt.new (previously known as oTToDev and bolt.new ANY LLM), which allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.
Check the [bolt.diy Docs](https://stackblitz-labs.github.io/bolt.diy/) for more information. This documentation is still being updated after the transfer.
Check the [bolt.diy Docs](https://stackblitz-labs.github.io/bolt.diy/) for more information.
We have also launched an experimental agent called the "bolt.diy Expert" that can answer common questions about bolt.diy. Find it here on the [oTTomator Live Agent Studio](https://studio.ottomator.ai/).
bolt.diy was originally started by [Cole Medin](https://www.youtube.com/@ColeMedin) but has quickly grown into a massive community effort to build the BEST open source AI coding assistant!
@@ -95,34 +97,6 @@ Clone the repository using Git:
git clone -b stable https://github.com/stackblitz-labs/bolt.diy
```
### (Optional) Configure Environment Variables
Most environment variables can be configured directly through the settings menu of the application. However, if you need to manually configure them:
1. Rename `.env.example` to `.env.local`.
2. Add your LLM API keys. For example:
```env
GROQ_API_KEY=YOUR_GROQ_API_KEY
OPENAI_API_KEY=YOUR_OPENAI_API_KEY
ANTHROPIC_API_KEY=YOUR_ANTHROPIC_API_KEY
```
**Note**: Ollama does not require an API key as it runs locally.
3. Optionally, set additional configurations:
```env
# Debugging
VITE_LOG_LEVEL=debug
# Ollama settings (example: 8K context, localhost port 11434)
OLLAMA_API_BASE_URL=http://localhost:11434
DEFAULT_NUM_CTX=8192
```
**Important**: Do not commit your `.env.local` file to version control. This file is already included in `.gitignore`.
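At runtime these values surface through Vite's `import.meta.env` on the client and `process.env` on the server, as the `constants.ts` changes later in this diff show. A minimal sketch, assuming the variable names above (the `'info'` fallback here is an assumption, not necessarily the app's actual default):

```ts
// Sketch: reading the variables configured above. Vite only exposes keys
// matched by envPrefix (see the vite.config.ts change at the end of this diff).
const logLevel = import.meta.env.VITE_LOG_LEVEL ?? 'info'; // assumed fallback
const ollamaBaseUrl = import.meta.env.OLLAMA_API_BASE_URL ?? 'http://localhost:11434';
const numCtx = Number(process.env.DEFAULT_NUM_CTX ?? 8192); // server-side only
```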
---
## Run the Application
@@ -155,27 +129,30 @@ DEFAULT_NUM_CTX=8192
Use the provided NPM scripts:
```bash
npm run dockerbuild # Development build
npm run dockerbuild:prod # Production build
npm run dockerbuild
```
Alternatively, use Docker commands directly:
```bash
docker build . --target bolt-ai-development # Development build
docker build . --target bolt-ai-production # Production build
docker build . --target bolt-ai-development
```
2. **Run the Container**:
Use Docker Compose profiles to manage environments:
```bash
docker-compose --profile development up # Development
docker-compose --profile production up # Production
docker-compose --profile development up
```
- With the development profile, changes to your code are automatically reflected in the running container (hot reloading).
---
### Entering API Keys
All of your API keys can be configured directly in the application. Just select the provider you want from the dropdown and click the pencil icon to enter your API key.
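Under the hood, the UI keeps these keys in an `apiKeys` cookie as a JSON object mapping provider name to key, as the `BaseChat.tsx` changes in this diff show. A minimal sketch of reading it back, assuming the same `js-cookie` package the app imports as `Cookies`:

```ts
import Cookies from 'js-cookie';

// Shape inferred from the BaseChat.tsx diff, e.g. { "Anthropic": "sk-ant-..." }.
const stored = Cookies.get('apiKeys');
const apiKeys: Record<string, string> = stored ? JSON.parse(stored) : {};
```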
---
### Update Your Local Version to the Latest
To keep your local version of bolt.diy up to date with the latest changes, follow these steps for your operating system:
@@ -236,4 +213,4 @@ Explore upcoming features and priorities on our [Roadmap](https://roadmap.sh/r/o
## FAQ
For answers to common questions, visit our [FAQ Page](FAQ.md).
For answers to common questions, issues, and to see a list of recommended models, visit our [FAQ Page](FAQ.md).

app/commit.json

@@ -1 +1 @@
{ "commit": "69c0bf5873334c25d691b8db4a995b86125a6799" }
{ "commit": "50e677878446f622531123b19912f38e8246afbd", "version": "0.0.3" }

app/components/chat/BaseChat.tsx

@@ -119,6 +119,9 @@ export const BaseChat = React.forwardRef<HTMLDivElement, BaseChatProps>(
useEffect(() => {
// Load API keys from cookies on component mount
let parsedApiKeys: Record<string, string> | undefined = {};
try {
const storedApiKeys = Cookies.get('apiKeys');
@@ -127,6 +130,7 @@ export const BaseChat = React.forwardRef<HTMLDivElement, BaseChatProps>(
if (typeof parsedKeys === 'object' && parsedKeys !== null) {
setApiKeys(parsedKeys);
parsedApiKeys = parsedKeys;
}
}
} catch (error) {
@@ -155,7 +159,7 @@ export const BaseChat = React.forwardRef<HTMLDivElement, BaseChatProps>(
Cookies.remove('providers');
}
initializeModelList(providerSettings).then((modelList) => {
initializeModelList({ apiKeys: parsedApiKeys, providerSettings }).then((modelList) => {
setModelList(modelList);
});

app/components/settings/debug/DebugTab.tsx

@@ -2,6 +2,7 @@ import React, { useCallback, useEffect, useState } from 'react';
import { useSettings } from '~/lib/hooks/useSettings';
import commit from '~/commit.json';
import { toast } from 'react-toastify';
import { providerBaseUrlEnvKeys } from '~/utils/constants';
interface ProviderStatus {
name: string;
@@ -236,7 +237,7 @@ const checkProviderStatus = async (url: string | null, providerName: string): Pr
}
// Try different endpoints based on provider
const checkUrls = [`${url}/api/health`, `${url}/v1/models`];
const checkUrls = [`${url}/api/health`, url.endsWith('v1') ? `${url}/models` : `${url}/v1/models`];
console.log(`[Debug] Checking additional endpoints:`, checkUrls);
const results = await Promise.all(
@@ -321,14 +322,16 @@ export default function DebugTab() {
.filter(([, provider]) => LOCAL_PROVIDERS.includes(provider.name))
.map(async ([, provider]) => {
const envVarName =
provider.name.toLowerCase() === 'ollama'
? 'OLLAMA_API_BASE_URL'
: provider.name.toLowerCase() === 'lmstudio'
? 'LMSTUDIO_API_BASE_URL'
: `REACT_APP_${provider.name.toUpperCase()}_URL`;
providerBaseUrlEnvKeys[provider.name]?.baseUrlKey || `REACT_APP_${provider.name.toUpperCase()}_URL`;
// Access environment variables through import.meta.env
const url = import.meta.env[envVarName] || provider.settings.baseUrl || null; // Ensure baseUrl is used
let settingsUrl = provider.settings.baseUrl;
if (settingsUrl && settingsUrl.trim().length === 0) {
settingsUrl = undefined;
}
const url = settingsUrl || import.meta.env[envVarName] || null; // Settings base URL takes precedence over the env var
console.log(`[Debug] Using URL for ${provider.name}:`, url, `(from ${envVarName})`);
const status = await checkProviderStatus(url, provider.name);

app/components/settings/providers/ProvidersTab.tsx

@@ -7,6 +7,7 @@ import { logStore } from '~/lib/stores/logs';
// Import a default fallback icon
import DefaultIcon from '/icons/Default.svg'; // Adjust the path as necessary
import { providerBaseUrlEnvKeys } from '~/utils/constants';
export default function ProvidersTab() {
const { providers, updateProviderSettings, isLocalModel } = useSettings();
@@ -47,60 +48,77 @@ export default function ProvidersTab() {
className="w-full bg-white dark:bg-bolt-elements-background-depth-4 relative px-2 py-1.5 rounded-md focus:outline-none placeholder-bolt-elements-textTertiary text-bolt-elements-textPrimary dark:text-bolt-elements-textPrimary border border-bolt-elements-borderColor"
/>
</div>
{filteredProviders.map((provider) => (
<div
key={provider.name}
className="flex flex-col mb-2 provider-item hover:bg-bolt-elements-bg-depth-3 p-4 rounded-lg border border-bolt-elements-borderColor "
>
<div className="flex items-center justify-between mb-2">
<div className="flex items-center gap-2">
<img
src={`/icons/${provider.name}.svg`} // Attempt to load the specific icon
onError={(e) => {
// Fallback to default icon on error
e.currentTarget.src = DefaultIcon;
}}
alt={`${provider.name} icon`}
className="w-6 h-6 dark:invert"
/>
<span className="text-bolt-elements-textPrimary">{provider.name}</span>
</div>
<Switch
className="ml-auto"
checked={provider.settings.enabled}
onCheckedChange={(enabled) => {
updateProviderSettings(provider.name, { ...provider.settings, enabled });
{filteredProviders.map((provider) => {
const envBaseUrlKey = providerBaseUrlEnvKeys[provider.name]?.baseUrlKey;
const envBaseUrl = envBaseUrlKey ? import.meta.env[envBaseUrlKey] : undefined;
if (enabled) {
logStore.logProvider(`Provider ${provider.name} enabled`, { provider: provider.name });
} else {
logStore.logProvider(`Provider ${provider.name} disabled`, { provider: provider.name });
}
}}
/>
</div>
{/* Base URL input for configurable providers */}
{URL_CONFIGURABLE_PROVIDERS.includes(provider.name) && provider.settings.enabled && (
<div className="mt-2">
<label className="block text-sm text-bolt-elements-textSecondary mb-1">Base URL:</label>
<input
type="text"
value={provider.settings.baseUrl || ''}
onChange={(e) => {
const newBaseUrl = e.target.value;
updateProviderSettings(provider.name, { ...provider.settings, baseUrl: newBaseUrl });
logStore.logProvider(`Base URL updated for ${provider.name}`, {
provider: provider.name,
baseUrl: newBaseUrl,
});
return (
<div
key={provider.name}
className="flex flex-col mb-2 provider-item hover:bg-bolt-elements-bg-depth-3 p-4 rounded-lg border border-bolt-elements-borderColor "
>
<div className="flex items-center justify-between mb-2">
<div className="flex items-center gap-2">
<img
src={`/icons/${provider.name}.svg`} // Attempt to load the specific icon
onError={(e) => {
// Fallback to default icon on error
e.currentTarget.src = DefaultIcon;
}}
alt={`${provider.name} icon`}
className="w-6 h-6 dark:invert"
/>
<span className="text-bolt-elements-textPrimary">{provider.name}</span>
</div>
<Switch
className="ml-auto"
checked={provider.settings.enabled}
onCheckedChange={(enabled) => {
updateProviderSettings(provider.name, { ...provider.settings, enabled });
if (enabled) {
logStore.logProvider(`Provider ${provider.name} enabled`, { provider: provider.name });
} else {
logStore.logProvider(`Provider ${provider.name} disabled`, { provider: provider.name });
}
}}
placeholder={`Enter ${provider.name} base URL`}
className="w-full bg-white dark:bg-bolt-elements-background-depth-4 relative px-2 py-1.5 rounded-md focus:outline-none placeholder-bolt-elements-textTertiary text-bolt-elements-textPrimary dark:text-bolt-elements-textPrimary border border-bolt-elements-borderColor"
/>
</div>
)}
</div>
))}
{/* Base URL input for configurable providers */}
{URL_CONFIGURABLE_PROVIDERS.includes(provider.name) && provider.settings.enabled && (
<div className="mt-2">
{envBaseUrl && (
<label className="block text-xs text-bolt-elements-textSecondary text-green-300 mb-2">
Set in .env: {envBaseUrl}
</label>
)}
<label className="block text-sm text-bolt-elements-textSecondary mb-2">
{envBaseUrl ? 'Override Base URL' : 'Base URL'}:{' '}
</label>
<input
type="text"
value={provider.settings.baseUrl || ''}
onChange={(e) => {
let newBaseUrl: string | undefined = e.target.value;
if (newBaseUrl && newBaseUrl.trim().length === 0) {
newBaseUrl = undefined;
}
updateProviderSettings(provider.name, { ...provider.settings, baseUrl: newBaseUrl });
logStore.logProvider(`Base URL updated for ${provider.name}`, {
provider: provider.name,
baseUrl: newBaseUrl,
});
}}
placeholder={`Enter ${provider.name} base URL`}
className="w-full bg-white dark:bg-bolt-elements-background-depth-4 relative px-2 py-1.5 rounded-md focus:outline-none placeholder-bolt-elements-textTertiary text-bolt-elements-textPrimary dark:text-bolt-elements-textPrimary border border-bolt-elements-borderColor"
/>
</div>
)}
</div>
);
})}
</div>
);
}

app/entry.server.tsx

@@ -14,7 +14,7 @@ export default async function handleRequest(
remixContext: EntryContext,
_loadContext: AppLoadContext,
) {
await initializeModelList();
await initializeModelList({});
const readable = await renderToReadableStream(<RemixServer context={remixContext} url={request.url} />, {
signal: request.signal,

app/lib/.server/llm/api.ts

@@ -3,6 +3,8 @@
* Preventing TS checks with files presented in the video for a better presentation.
*/
import { env } from 'node:process';
import type { IProviderSetting } from '~/types/model';
import { getProviderBaseUrlAndKey } from '~/utils/constants';
export function getAPIKey(cloudflareEnv: Env, provider: string, userApiKeys?: Record<string, string>) {
/**
@@ -15,7 +17,20 @@ export function getAPIKey(cloudflareEnv: Env, provider: string, userApiKeys?: Re
return userApiKeys[provider];
}
// Fall back to environment variables
const { apiKey } = getProviderBaseUrlAndKey({
provider,
apiKeys: userApiKeys,
providerSettings: undefined,
serverEnv: cloudflareEnv as any,
defaultBaseUrlKey: '',
defaultApiTokenKey: '',
});
if (apiKey) {
return apiKey;
}
// Fall back to hardcoded environment variable names
switch (provider) {
case 'Anthropic':
return env.ANTHROPIC_API_KEY || cloudflareEnv.ANTHROPIC_API_KEY;
@@ -50,16 +65,43 @@ export function getAPIKey(cloudflareEnv: Env, provider: string, userApiKeys?: Re
}
}
export function getBaseURL(cloudflareEnv: Env, provider: string) {
export function getBaseURL(cloudflareEnv: Env, provider: string, providerSettings?: Record<string, IProviderSetting>) {
const { baseUrl } = getProviderBaseUrlAndKey({
provider,
apiKeys: {},
providerSettings,
serverEnv: cloudflareEnv as any,
defaultBaseUrlKey: '',
defaultApiTokenKey: '',
});
if (baseUrl) {
return baseUrl;
}
let settingBaseUrl = providerSettings?.[provider]?.baseUrl;
if (settingBaseUrl && settingBaseUrl.trim().length === 0) {
settingBaseUrl = undefined;
}
switch (provider) {
case 'Together':
return env.TOGETHER_API_BASE_URL || cloudflareEnv.TOGETHER_API_BASE_URL || 'https://api.together.xyz/v1';
return (
settingBaseUrl ||
env.TOGETHER_API_BASE_URL ||
cloudflareEnv.TOGETHER_API_BASE_URL ||
'https://api.together.xyz/v1'
);
case 'OpenAILike':
return env.OPENAI_LIKE_API_BASE_URL || cloudflareEnv.OPENAI_LIKE_API_BASE_URL;
return settingBaseUrl || env.OPENAI_LIKE_API_BASE_URL || cloudflareEnv.OPENAI_LIKE_API_BASE_URL;
case 'LMStudio':
return env.LMSTUDIO_API_BASE_URL || cloudflareEnv.LMSTUDIO_API_BASE_URL || 'http://localhost:1234';
return (
settingBaseUrl || env.LMSTUDIO_API_BASE_URL || cloudflareEnv.LMSTUDIO_API_BASE_URL || 'http://localhost:1234'
);
case 'Ollama': {
let baseUrl = env.OLLAMA_API_BASE_URL || cloudflareEnv.OLLAMA_API_BASE_URL || 'http://localhost:11434';
let baseUrl =
settingBaseUrl || env.OLLAMA_API_BASE_URL || cloudflareEnv.OLLAMA_API_BASE_URL || 'http://localhost:11434';
if (env.RUNNING_IN_DOCKER === 'true') {
baseUrl = baseUrl.replace('localhost', 'host.docker.internal');

app/lib/.server/llm/model.ts

@@ -140,7 +140,7 @@ export function getPerplexityModel(apiKey: OptionalApiKey, model: string) {
export function getModel(
provider: string,
model: string,
env: Env,
serverEnv: Env,
apiKeys?: Record<string, string>,
providerSettings?: Record<string, IProviderSetting>,
) {
@@ -148,9 +148,12 @@ export function getModel(
* let apiKey; // Declare first
* let baseURL;
*/
// console.log({provider,model});
const apiKey = getAPIKey(env, provider, apiKeys); // Then assign
const baseURL = providerSettings?.[provider].baseUrl || getBaseURL(env, provider);
const apiKey = getAPIKey(serverEnv, provider, apiKeys); // Then assign
const baseURL = getBaseURL(serverEnv, provider, providerSettings);
// console.log({apiKey,baseURL});
switch (provider) {
case 'Anthropic':

app/lib/.server/llm/stream-text.ts

@@ -151,10 +151,13 @@ export async function streamText(props: {
providerSettings?: Record<string, IProviderSetting>;
promptId?: string;
}) {
const { messages, env, options, apiKeys, files, providerSettings, promptId } = props;
const { messages, env: serverEnv, options, apiKeys, files, providerSettings, promptId } = props;
// console.log({serverEnv});
let currentModel = DEFAULT_MODEL;
let currentProvider = DEFAULT_PROVIDER.name;
const MODEL_LIST = await getModelList(apiKeys || {}, providerSettings);
const MODEL_LIST = await getModelList({ apiKeys, providerSettings, serverEnv: serverEnv as any });
const processedMessages = messages.map((message) => {
if (message.role === 'user') {
const { model, provider, content } = extractPropertiesFromMessage(message);
@@ -196,7 +199,7 @@ export async function streamText(props: {
}
return _streamText({
model: getModel(currentProvider, currentModel, env, apiKeys, providerSettings) as any,
model: getModel(currentProvider, currentModel, serverEnv, apiKeys, providerSettings) as any,
system: systemPrompt,
maxTokens: dynamicMaxTokens,
messages: convertToCoreMessages(processedMessages as any),

app/lib/hooks/useEditChatDescription.ts

@@ -92,7 +92,9 @@ export function useEditChatDescription({
}
const lengthValid = trimmedDesc.length > 0 && trimmedDesc.length <= 100;
const characterValid = /^[a-zA-Z0-9\s]+$/.test(trimmedDesc);
// Allow letters, numbers, spaces, and common punctuation but exclude characters that could cause issues
const characterValid = /^[a-zA-Z0-9\s\-_.,!?()[\]{}'"]+$/.test(trimmedDesc);
if (!lengthValid) {
toast.error('Description must be between 1 and 100 characters.');
@@ -100,7 +102,7 @@ export function useEditChatDescription({
}
if (!characterValid) {
toast.error('Description can only contain alphanumeric characters and spaces.');
toast.error('Description can only contain letters, numbers, spaces, and basic punctuation.');
return false;
}

app/routes/api.enhancer.ts

@@ -5,9 +5,6 @@ import { streamText } from '~/lib/.server/llm/stream-text';
import { stripIndents } from '~/utils/stripIndent';
import type { IProviderSetting, ProviderInfo } from '~/types/model';
const encoder = new TextEncoder();
const decoder = new TextDecoder();
export async function action(args: ActionFunctionArgs) {
return enhancerAction(args);
}
@@ -107,29 +104,7 @@ async function enhancerAction({ context, request }: ActionFunctionArgs) {
providerSettings,
});
const transformStream = new TransformStream({
transform(chunk, controller) {
const text = decoder.decode(chunk);
const lines = text.split('\n').filter((line) => line.trim() !== '');
for (const line of lines) {
try {
const parsed = JSON.parse(line);
if (parsed.type === 'text') {
controller.enqueue(encoder.encode(parsed.value));
}
} catch (e) {
// skip invalid JSON lines
console.warn('Failed to parse stream part:', line, e);
}
}
},
});
const transformedStream = result.toDataStream().pipeThrough(transformStream);
return new Response(transformedStream, {
return new Response(result.textStream, {
status: 200,
headers: {
'Content-Type': 'text/plain; charset=utf-8',

app/types/model.ts

@@ -3,7 +3,12 @@ import type { ModelInfo } from '~/utils/types';
export type ProviderInfo = {
staticModels: ModelInfo[];
name: string;
getDynamicModels?: (apiKeys?: Record<string, string>, providerSettings?: IProviderSetting) => Promise<ModelInfo[]>;
getDynamicModels?: (
providerName: string,
apiKeys?: Record<string, string>,
providerSettings?: IProviderSetting,
serverEnv?: Record<string, string>,
) => Promise<ModelInfo[]>;
getApiKeyLink?: string;
labelForGetApiKey?: string;
icon?: string;

app/utils/constants.ts

@@ -220,7 +220,6 @@ const PROVIDER_LIST: ProviderInfo[] = [
],
getApiKeyLink: 'https://huggingface.co/settings/tokens',
},
{
name: 'OpenAI',
staticModels: [
@@ -319,44 +318,130 @@ const PROVIDER_LIST: ProviderInfo[] = [
},
];
export const providerBaseUrlEnvKeys: Record<string, { baseUrlKey?: string; apiTokenKey?: string }> = {
Anthropic: {
apiTokenKey: 'ANTHROPIC_API_KEY',
},
OpenAI: {
apiTokenKey: 'OPENAI_API_KEY',
},
Groq: {
apiTokenKey: 'GROQ_API_KEY',
},
HuggingFace: {
apiTokenKey: 'HuggingFace_API_KEY',
},
OpenRouter: {
apiTokenKey: 'OPEN_ROUTER_API_KEY',
},
Google: {
apiTokenKey: 'GOOGLE_GENERATIVE_AI_API_KEY',
},
OpenAILike: {
baseUrlKey: 'OPENAI_LIKE_API_BASE_URL',
apiTokenKey: 'OPENAI_LIKE_API_KEY',
},
Together: {
baseUrlKey: 'TOGETHER_API_BASE_URL',
apiTokenKey: 'TOGETHER_API_KEY',
},
Deepseek: {
apiTokenKey: 'DEEPSEEK_API_KEY',
},
Mistral: {
apiTokenKey: 'MISTRAL_API_KEY',
},
LMStudio: {
baseUrlKey: 'LMSTUDIO_API_BASE_URL',
},
xAI: {
apiTokenKey: 'XAI_API_KEY',
},
Cohere: {
apiTokenKey: 'COHERE_API_KEY',
},
Perplexity: {
apiTokenKey: 'PERPLEXITY_API_KEY',
},
Ollama: {
baseUrlKey: 'OLLAMA_API_BASE_URL',
},
};
export const getProviderBaseUrlAndKey = (options: {
provider: string;
apiKeys?: Record<string, string>;
providerSettings?: IProviderSetting;
serverEnv?: Record<string, string>;
defaultBaseUrlKey: string;
defaultApiTokenKey: string;
}) => {
const { provider, apiKeys, providerSettings, serverEnv, defaultBaseUrlKey, defaultApiTokenKey } = options;
let settingsBaseUrl = providerSettings?.baseUrl;
if (settingsBaseUrl && settingsBaseUrl.trim().length === 0) {
settingsBaseUrl = undefined;
}
const baseUrlKey = providerBaseUrlEnvKeys[provider]?.baseUrlKey || defaultBaseUrlKey;
const baseUrl = settingsBaseUrl || serverEnv?.[baseUrlKey] || process.env[baseUrlKey] || import.meta.env[baseUrlKey];
const apiTokenKey = providerBaseUrlEnvKeys[provider]?.apiTokenKey || defaultApiTokenKey;
const apiKey =
apiKeys?.[provider] || serverEnv?.[apiTokenKey] || process.env[apiTokenKey] || import.meta.env[apiTokenKey];
return {
baseUrl,
apiKey,
};
};
export const DEFAULT_PROVIDER = PROVIDER_LIST[0];
const staticModels: ModelInfo[] = PROVIDER_LIST.map((p) => p.staticModels).flat();
export let MODEL_LIST: ModelInfo[] = [...staticModels];
export async function getModelList(
apiKeys: Record<string, string>,
providerSettings?: Record<string, IProviderSetting>,
) {
export async function getModelList(options: {
apiKeys?: Record<string, string>;
providerSettings?: Record<string, IProviderSetting>;
serverEnv?: Record<string, string>;
}) {
const { apiKeys, providerSettings, serverEnv } = options;
MODEL_LIST = [
...(
await Promise.all(
PROVIDER_LIST.filter(
(p): p is ProviderInfo & { getDynamicModels: () => Promise<ModelInfo[]> } => !!p.getDynamicModels,
).map((p) => p.getDynamicModels(apiKeys, providerSettings?.[p.name])),
).map((p) => p.getDynamicModels(p.name, apiKeys, providerSettings?.[p.name], serverEnv)),
)
).flat(),
...staticModels,
];
return MODEL_LIST;
}
async function getTogetherModels(apiKeys?: Record<string, string>, settings?: IProviderSetting): Promise<ModelInfo[]> {
async function getTogetherModels(
name: string,
apiKeys?: Record<string, string>,
settings?: IProviderSetting,
serverEnv: Record<string, string> = {},
): Promise<ModelInfo[]> {
try {
const baseUrl = settings?.baseUrl || import.meta.env.TOGETHER_API_BASE_URL || '';
const provider = 'Together';
const { baseUrl, apiKey } = getProviderBaseUrlAndKey({
provider: name,
apiKeys,
providerSettings: settings,
serverEnv,
defaultBaseUrlKey: 'TOGETHER_API_BASE_URL',
defaultApiTokenKey: 'TOGETHER_API_KEY',
});
if (!baseUrl) {
return [];
}
let apiKey = import.meta.env.OPENAI_LIKE_API_KEY ?? '';
if (apiKeys && apiKeys[provider]) {
apiKey = apiKeys[provider];
}
if (!apiKey) {
return [];
}
@@ -374,7 +459,7 @@ async function getTogetherModels(apiKeys?: Record<string, string>, settings?: IP
label: `${m.display_name} - in:$${m.pricing.input.toFixed(
2,
)} out:$${m.pricing.output.toFixed(2)} - context ${Math.floor(m.context_length / 1000)}k`,
provider,
provider: name,
maxTokenAllowed: 8000,
}));
} catch (e) {
@@ -383,24 +468,40 @@ async function getTogetherModels(apiKeys?: Record<string, string>, settings?: IP
}
}
const getOllamaBaseUrl = (settings?: IProviderSetting) => {
const defaultBaseUrl = settings?.baseUrl || import.meta.env.OLLAMA_API_BASE_URL || 'http://localhost:11434';
const getOllamaBaseUrl = (name: string, settings?: IProviderSetting, serverEnv: Record<string, string> = {}) => {
const { baseUrl } = getProviderBaseUrlAndKey({
provider: name,
providerSettings: settings,
serverEnv,
defaultBaseUrlKey: 'OLLAMA_API_BASE_URL',
defaultApiTokenKey: '',
});
// Check if we're in the browser
if (typeof window !== 'undefined') {
// Frontend always uses localhost
return defaultBaseUrl;
return baseUrl;
}
// Backend: Check if we're running in Docker
const isDocker = process.env.RUNNING_IN_DOCKER === 'true';
return isDocker ? defaultBaseUrl.replace('localhost', 'host.docker.internal') : defaultBaseUrl;
return isDocker && baseUrl ? baseUrl.replace('localhost', 'host.docker.internal') : baseUrl;
};
async function getOllamaModels(apiKeys?: Record<string, string>, settings?: IProviderSetting): Promise<ModelInfo[]> {
async function getOllamaModels(
name: string,
_apiKeys?: Record<string, string>,
settings?: IProviderSetting,
serverEnv: Record<string, string> = {},
): Promise<ModelInfo[]> {
try {
const baseUrl = getOllamaBaseUrl(settings);
const baseUrl = getOllamaBaseUrl(name, settings, serverEnv);
if (!baseUrl) {
return [];
}
const response = await fetch(`${baseUrl}/api/tags`);
const data = (await response.json()) as OllamaApiResponse;
@@ -419,22 +520,25 @@ async function getOllamaModels(apiKeys?: Record<string, string>, settings?: IPro
}
async function getOpenAILikeModels(
name: string,
apiKeys?: Record<string, string>,
settings?: IProviderSetting,
serverEnv: Record<string, string> = {},
): Promise<ModelInfo[]> {
try {
const baseUrl = settings?.baseUrl || import.meta.env.OPENAI_LIKE_API_BASE_URL || '';
const { baseUrl, apiKey } = getProviderBaseUrlAndKey({
provider: name,
apiKeys,
providerSettings: settings,
serverEnv,
defaultBaseUrlKey: 'OPENAI_LIKE_API_BASE_URL',
defaultApiTokenKey: 'OPENAI_LIKE_API_KEY',
});
if (!baseUrl) {
return [];
}
let apiKey = '';
if (apiKeys && apiKeys.OpenAILike) {
apiKey = apiKeys.OpenAILike;
}
const response = await fetch(`${baseUrl}/models`, {
headers: {
Authorization: `Bearer ${apiKey}`,
@@ -445,7 +549,7 @@ async function getOpenAILikeModels(
return res.data.map((model: any) => ({
name: model.id,
label: model.id,
provider: 'OpenAILike',
provider: name,
}));
} catch (e) {
console.error('Error getting OpenAILike models:', e);
@@ -486,9 +590,26 @@ async function getOpenRouterModels(): Promise<ModelInfo[]> {
}));
}
async function getLMStudioModels(_apiKeys?: Record<string, string>, settings?: IProviderSetting): Promise<ModelInfo[]> {
async function getLMStudioModels(
name: string,
apiKeys?: Record<string, string>,
settings?: IProviderSetting,
serverEnv: Record<string, string> = {},
): Promise<ModelInfo[]> {
try {
const baseUrl = settings?.baseUrl || import.meta.env.LMSTUDIO_API_BASE_URL || 'http://localhost:1234';
const { baseUrl } = getProviderBaseUrlAndKey({
provider: name,
apiKeys,
providerSettings: settings,
serverEnv,
defaultBaseUrlKey: 'LMSTUDIO_API_BASE_URL',
defaultApiTokenKey: '',
});
if (!baseUrl) {
return [];
}
const response = await fetch(`${baseUrl}/v1/models`);
const data = (await response.json()) as any;
@@ -503,29 +624,37 @@ async function getLMStudioModels(_apiKeys?: Record<string, string>, settings?: I
}
}
async function initializeModelList(providerSettings?: Record<string, IProviderSetting>): Promise<ModelInfo[]> {
let apiKeys: Record<string, string> = {};
async function initializeModelList(options: {
env?: Record<string, string>;
providerSettings?: Record<string, IProviderSetting>;
apiKeys?: Record<string, string>;
}): Promise<ModelInfo[]> {
const { providerSettings, apiKeys: providedApiKeys, env } = options;
let apiKeys: Record<string, string> = providedApiKeys || {};
try {
const storedApiKeys = Cookies.get('apiKeys');
if (!providedApiKeys) {
try {
const storedApiKeys = Cookies.get('apiKeys');
if (storedApiKeys) {
const parsedKeys = JSON.parse(storedApiKeys);
if (storedApiKeys) {
const parsedKeys = JSON.parse(storedApiKeys);
if (typeof parsedKeys === 'object' && parsedKeys !== null) {
apiKeys = parsedKeys;
if (typeof parsedKeys === 'object' && parsedKeys !== null) {
apiKeys = parsedKeys;
}
}
} catch (error: any) {
logStore.logError('Failed to fetch API keys from cookies', error);
logger.warn(`Failed to fetch apikeys from cookies: ${error?.message}`);
}
} catch (error: any) {
logStore.logError('Failed to fetch API keys from cookies', error);
logger.warn(`Failed to fetch apikeys from cookies: ${error?.message}`);
}
MODEL_LIST = [
...(
await Promise.all(
PROVIDER_LIST.filter(
(p): p is ProviderInfo & { getDynamicModels: () => Promise<ModelInfo[]> } => !!p.getDynamicModels,
).map((p) => p.getDynamicModels(apiKeys, providerSettings?.[p.name])),
).map((p) => p.getDynamicModels(p.name, apiKeys, providerSettings?.[p.name], env)),
)
).flat(),
...staticModels,
@@ -534,6 +663,7 @@ async function initializeModelList(providerSettings?: Record<string, IProviderSe
return MODEL_LIST;
}
// initializeModelList({})
export {
getOllamaModels,
getOpenAILikeModels,

changelog.md

@@ -1,271 +1,31 @@
# Release v0.0.2
### 🔄 Changes since v0.0.1
#### ✨ Features
- add unit configuration to uno.config.ts
- added perplexity model
- Experimental Prompt Library Added
- start update by branch
# Release v0.0.3
### 🔄 Changes since v0.0.2
#### 🐛 Bug Fixes
- added more controlled rate for code streaming
- handle conflicts between input method engine and enter key
- LM Studio sending message
- adjust intro section margin and textarea outline style in BaseChat component
- commit-file-ignore
- lm studio fix
- start new chat icon
- removed context optimization temporarily; to be made optional from the menu
- Prompt Enhance
#### ♻️ Code Refactoring
#### 📚 Documentation
- remove unused React import in ImportButtons component
- simplify GitCloneButton component by removing unused tooltip and streamlining button structure
- miniflare error knowledge
#### 🔧 Chores
- update commit hash to 7bafd2a5d67dce70d15b77201ef8de9745efab61
- update commit hash to e5ecb0b7d5e0fb53f13654689cebd8eb99b10578
- update commit hash to 8f15c81f37f36667fe796b1f75d0003a7c0f395b
- update commit hash to d13da30bda2d10eb2da42113493625cd55e0d34d
- update commit hash to dd296ab00d4d51ea0bc30ebe9aed0e6632feb37a
- update commit hash to eeafc12522b184dcbded28c5c6606e4a23e6849f
- update commit hash to d479daa5781a533c68a6f9ffdb3b919914c9305e
- update commit hash to 5773b1e271c8effec20ff1c10a759d9a654a2a94
- update commit hash to 5f3405151043b3c32da7acc6353247a5508969b3
- update commit hash to 5f3405151043b3c32da7acc6353247a5508969b3
- update commit hash to 0c899e430a4d33e78e3e44ebf7100b5da14eda3f
- update commit hash to 1d64a15ed0110fc62091b1dca90139de9fb9fdb4
- update commit hash to d1fa70fc97dc7839ea8cd005feb03266f201cf4f
- update commit hash to 1e04ab38b07e82852626b164890f4a6df1f98cef
- update commit hash to 8c4397a19f3eab2382082a39526d66385e9d2a49
- update commit hash to 55094392cf4c5bc607aff796680ad50236a4cf20
- update commit hash to 9666b2ab67d25345542722ab9d870b36ad06252e
- update commit hash to 6a5ed21c0fed92a8c842b683bf9a430901e6bb05
- update commit hash to 4af18c069f2429ffaf410d92702a1e1294af2628
- update commit hash to a71cfba660f04a8440960ab772670b192e2d066f
- update commit hash to 4f02887565e13eeaabbfb6f699cbe089e802338f
- update commit hash to f27f7bba5132346db18e70e514a6a6202d6ab634
- update commit hash to eb53146937cbe49a84aaaaa59882df6db4a1e459
- update commit hash to 4f10fb1294e11cf8f5a68b30a1e85acdf65ffcbc
- update commit hash to 43370f515df1184be2fb54db637a73bb683d6d86
- update commit hash to ece0213500a94a6b29e29512c5040baf57884014
- update commit hash to b06f6e3a3e7e5b2b5f8d9b13a761422993559f3e
- update commit hash to 25fe15232fcd6cee83f179adbd1d3e7d6a90acca
- update commit hash to a87cfd79503a62db2be00656f4874ec747d76a09
- update commit hash to 7c3a3bbde6c61f374a6d37c888c6900a335e3d33
- update commit hash to d936c012bdeb210ee876be1941ef8e370ea0b2e3
- update commit hash to b3f7a5c3785060c7937dcd681b38f17b5396fc84
- update commit hash to 23346f6271bf2f438489660357e6ffee803befb1
- update commit hash to 9cd9ee9088467882e1e4efdf491959619307cc9d
- update commit hash to 87a90718d31bd8ec501cb32f863efd26156fb1e2
- update commit hash to e223e9b6af1f6f31300fd7ed9ce498236cedd5dc
- update commit hash to 4016f54933102bf67336b8ae58e14673dfad72ee
- update commit hash to 1e7c3a4ff8f3153f53e0b0ed7cb13434825e41d9
- update commit hash to d75899d737243cd7303704adef16d77290de5a0b
- update commit hash to b5867835f5da5c93bd9a8376df9e9d32b97acff5
- update commit hash to d22b32ae636b9f134cdb5f96a10e4398aa2171b7
- update commit hash to d9b2801434011b60dca700c19cabd0652f31f8e4
- update commit hash to 0157fddc76fd5eebc545085e2c3c4ab37d9ca925
- update commit hash to 810cc81a16955eebec943f7d504749dbcbb85b25
- update commit hash to d3727459aa594505efd0cef58c4218eaf48d5baf
- update commit hash to 6ba93974a02a98c83badf2f0002ff4812b8f75a9
- update commit hash to 960f532f8234663d0b3630d18033c959fac6882c
- update commit hash to 77073a5e7f759ae8e5752628131d0c56df6b5c34
- update commit hash to 78505ed2f347dd3a7778b4c1c7c38c89ecacedd3
- update commit hash to f752bf7da532ec6196dafff1c388250d44db4de5
- update commit hash to 995fb81ac7a03eb1a6d1c56cf2fc92a60028c024
- update commit hash to 8aee6ebf477c08d896b4419fbdeb670cc2bb8f29
- update commit hash to 6987ceae9e1e91bec301f9e25ed9e8e03449d806
- update commit hash to eb1d5417e77e699e0489f09814e87fb5afed9dd5
- update commit hash to de2cb43d170033c43a6cf436af02e033f66a7e4d
- update commit hash to 49b02dd885919e24a201f07b1a7b0fd0371b4f85
- update commit hash to 43e1f436f57fc4adb43b5481b403967803d4786d
- update commit hash to 0a4ef117ae5d3687b04415e64a22794ea55841d1
- update commit hash to 25b80ab267541b6ea290985dde09863f1a29c85c
- update commit hash to c257129a61e258650b321c19323ddebaf03b0a54
- adding back semantic pull pr check for better changelog system
- update commit hash to 1e72d52278730f7d22448be9d5cf2daf12559486
- update commit hash to 282beb96e2ee92ba8b1174aaaf9f270e03a288e8
#### 🔍 Other Changes
- Check the render method of SlotClone. #432
- Initial commit for screen cap feature
- Second commit for screen cap feature
- Add 90b llama-3.2 model for better performance
- More selection tool changes
- feat(context optimization): improved context management and reduced chat overhead
- added backdrop and loading screen
- basic context menu for folders
- copyPath and copyRelativePath for files and folders
- pnpm lock file
- Refactor to use newer v4 version of Vercel AI package
- removed console logs
- Update README.md
- Update README.md
- Update README.md
- Update README.md
- Merge branch 'main' into context-optimization
- Merge branch 'main' into context-optimization
- added prompt url params
- added support for private github repo through github connections
- Add Logo icons LLM's
- Settings UI enhancement
- Event logs bug fix
- Merge branch 'stackblitz-labs:main' into main
- auto select model on provider disabled
- Update debug tab to check against fork
- debug fixes
- minor bug fixes
- Merge branch 'main' of https://github.com/stackblitz-labs/bolt.diy
- Update commit.json
- Merge branch 'main' of https://github.com/Stijnus/bolt.new-any-llm
- Update commit.json
- Merge pull request #684 from thecodacus/fix-auto-select-model
- ui styles fixed
- Update README.md
- some clean up and added a all log option
- Merge remote-tracking branch 'github-desktop-stijnus/main' into pr/676
- update README.md
- Merge branch 'main' into main
- Merge pull request #676 from Stijnus/main
- Update .gitignore
- Update commit.json
- Merge branch 'main' into fix/start-new-chat-icon
- Merge branch 'main' into fix/ui-enhancements
- Merge pull request #708 from SujalXplores/fix/ui-enhancements
- Update constants.ts
- Merge pull request #578 from thecodacus/context-optimization
- Merge pull request #713 from thecodacus/context-optimization-fix
- merged main
- Merge branch 'main' into feat/image-select-merge
- merge main into image
- Merge pull request #670 from thecodacus/private-github-repo
- Merge branch 'main' into streaming-fixed
- Merge pull request #655 from thecodacus/streaming-fixed
- Update BaseChat.tsx
- Merge pull request #679 from Dlouxgit/main
- Merge branch 'main' into feat/image-select
- merge main
- groq-llama3.3-70b
- Merge branch 'main' into feat/image-select
- Merge pull request #582 from emcconnell/feat/image-select
- update readme
- update readme
- Merge branch 'main' into update-readme
- Merge pull request #722 from emcconnell/update-readme
- Groq Llama 3.2 90B Vision Preview
- Merge
- Setting Modal Changes
- Renamed feature
- combined optional features
- Update DebugTab.tsx
- Update DebugTab.tsx
- Branding updates
- Update DebugTab.tsx
- prompt enhanced toast notification
- Merge branch 'main' into perplexity-models
- Merge pull request #715 from meetpateltech/perplexity-models
- Merge pull request #602 from mark-when/contextMenu2
- Merge pull request #728 from dustinwloring1988/branding/Change-Bolt-to-bolt
- Setting-Menu
- prompt-enhanced-toast
- Merge pull request #726 from dustinwloring1988/ui-ux/features-tab
- fallback icon for provider
- fix-perplexity-icon
- Update README.md
- updated readme
- updated readme
- Perplexity Provider Icon
- perplexity-provider-icon
- README-formatting
- Merge branch 'main' into system-prompt-variations
- update by branch
- Merge branch 'main' into ui-ux/debug-tab
- updated the examples and added strict rules
- Merge branch 'main' into system-prompt-variations
- Update commit.yaml
- Update commit.yaml
- branding update
- updated to use settings for branch selection
- Update useSettings.tsx
- quick fix
- Update FAQ.md
- Update CONTRIBUTING.md
- quick fix
- update-Bolt-to-bolt
- debug-tab
- Update mkdocs.yml
- Update vite.config.ts
- added auto detect branch name and version tag
- Update constants.ts
- Update DebugTab.tsx
- a fav.ico
- favicon-ico
- fix
- Merge pull request #753 from dustinwloring1988/fix/lm-studio-fetch-warning
- Merge pull request #751 from dustinwloring1988/fix/v3_lazyRouteDiscovery-warn
- mkdoc-update-names
- mkdoc consistent style
- Merge branch 'main' into system-prompt-variations-local
- Update ConnectionsTab.tsx
- quick fix
- mkdoc-docs-styled
- new section heading
- new section heading
- Make links clickable in docs
- Update CONTRIBUTING.md
- fix clickable links docs
- default provider icon
- default-provider-image
- Another attempt to add token usage info
- merge
- Lint fix
- updated implementation
- Merge branch 'main' into fix-variable-name
- Merge pull request #755 from thecodacus/fix-variable-name
- Merge branch 'main' into token-usage
- Merge pull request #769 from thecodacus/token-usage
- Merge remote-tracking branch 'upstream/main'
- Merge remote-tracking branch 'origin/main' into system-prompt-variations-local
- Merge branch 'main' into main
- added missing icons for safari
- Merge pull request #760 from Stijnus/main
- Merge branch 'main' into app-fail-safari-fix
- Merge pull request #771 from thecodacus/app-fail-safari-fix
- Merge pull request #433 from DiegoSouzaPW/feature/SlotCloneError
- Merge remote-tracking branch 'upstream/main'
- commit workflow fix
- Merge pull request #772 from thecodacus/commit-workflow-fix
- Merge remote-tracking branch 'upstream/main'
- Merge branch 'main' into system-prompt-variations-local
- Merge pull request #744 from thecodacus/system-prompt-variations
- Merge remote-tracking branch 'upstream/main'
- updated workflow for commit and stable release
- Merge pull request #773 from thecodacus/workflowfix
- Fixed theming of Copy Code button
- Merge branch 'main' into copyMyFix
- Merge remote-tracking branch 'upstream/main'
- minor bugfix
- Merge branch 'minor-bugfix' into bugfix-for-stable
- Merge branch 'main' into prompt-url-params
- Merge pull request #669 from thecodacus/prompt-url-params
- Merge branch 'main' into add-loading-on-git-import-from-url
- added UI fix for loading screen
- Merge branch 'main' into add-loading-on-git-import-from-url
- Merge pull request #597 from thecodacus/add-loading-on-git-import-from-url
- Merge branch 'main' into copyMyFix
- Merge pull request #774 from D-Byte/copyMyFix
- Merge remote-tracking branch 'upstream/main'
- Merge branch 'main' into bugfix-for-stable
- Merge pull request #757 from dustinwloring1988/feat/enhanced-github-connection
- Merge remote-tracking branch 'upstream/main'
- Merge branch 'main' into bugfix-for-stable
- Merge pull request #781 from thecodacus/semantic-pull-pr
- miniflare and wrangler error
- simplified the fix
- Merge branch 'main' into fix/prompt-enhance
**Full Changelog**: [`v0.0.1..v0.0.2`](https://github.com/stackblitz-labs/bolt.diy/compare/v0.0.1...v0.0.2)
**Full Changelog**: [`v0.0.2..v0.0.3`](https://github.com/stackblitz-labs/bolt.diy/compare/v0.0.2...v0.0.3)

docs/docs/FAQ.md

@@ -1,5 +1,19 @@
# Frequently Asked Questions (FAQ)
## What are the best models for bolt.diy?
For the best experience with bolt.diy, we recommend using the following models:
- **Claude 3.5 Sonnet (old)**: Best overall coder, providing excellent results across all use cases
- **Gemini 2.0 Flash**: Exceptional speed while maintaining good performance
- **GPT-4o**: Strong alternative to Claude 3.5 Sonnet with comparable capabilities
- **DeepSeekCoder V2 236b**: Best open source model (available through OpenRouter, DeepSeek API, or self-hosted)
- **Qwen 2.5 Coder 32b**: Best model for self-hosting with reasonable hardware requirements
**Note**: Models with fewer than 7b parameters typically lack the capability to properly interact with bolt!
---
## How do I get the best results with bolt.diy?
- **Be specific about your stack**:
@@ -72,4 +86,15 @@ Local LLMs like Qwen-2.5-Coder are powerful for small applications but still exp
---
### **"Received structured exception #0xc0000005: access violation"**
If you are getting this, you are probably on Windows. The fix is generally to update the [Visual C++ Redistributable](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170)
---
### **"Miniflare or Wrangler errors in Windows"**
You will need to make sure you have the latest version of Visual Studio C++ installed (14.40.33816); more information is available at https://github.com/stackblitz-labs/bolt.diy/issues/19.
---
Got more questions? Feel free to reach out or open an issue in our GitHub repo!

package.json

@@ -5,7 +5,7 @@
"license": "MIT",
"sideEffects": false,
"type": "module",
"version": "0.0.2",
"version": "0.0.3",
"scripts": {
"deploy": "npm run build && wrangler pages deploy",
"build": "remix vite:build",

vite.config.ts

@@ -28,7 +28,7 @@ export default defineConfig((config) => {
chrome129IssuePlugin(),
config.mode === 'production' && optimizeCssModules({ apply: 'build' }),
],
envPrefix: ["VITE_", "OPENAI_LIKE_API_", "OLLAMA_API_BASE_URL", "LMSTUDIO_API_BASE_URL","TOGETHER_API_BASE_URL"],
envPrefix: ["VITE_","OPENAI_LIKE_API_BASE_URL", "OLLAMA_API_BASE_URL", "LMSTUDIO_API_BASE_URL","TOGETHER_API_BASE_URL"],
css: {
preprocessorOptions: {
scss: {