mirror of
https://github.com/stackblitz-labs/bolt.diy
synced 2025-06-26 18:26:38 +00:00
This commit introduces integrations for three new LLM providers:

- Azure OpenAI: leverages the @ai-sdk/openai package for Azure deployments. Configuration includes API Key, Endpoint, Deployment Name, and API Version.
- Vertex AI: utilizes the @ai-sdk/google/vertex package for Google Cloud's Vertex AI models (e.g., Gemini). Configuration includes Project ID and Region, relying on Application Default Credentials for authentication.
- Granite AI: provides a custom implementation using direct fetch calls. Configuration includes API Key and Base URL. The api.llmcall.ts route has been updated to handle this provider's custom generate method.

Key changes include:

- New provider implementation files in app/lib/modules/llm/providers/.
- Updates to app/lib/modules/llm/registry.ts and manager.ts to include the new providers.
- Enhancements to app/components/@settings/tabs/providers/cloud/CloudProvidersTab.tsx to support configuration UI for the new providers, including specific fields like Azure Deployment Name and Vertex Project ID/Region.
- Adjustments in app/routes/api.llmcall.ts to accommodate the Granite AI provider's direct fetch implementation alongside SDK-based providers.
- Addition of placeholder icons for the new providers.

Additionally, this commit includes initial scaffolding for a document upload feature:

- A new FileUpload.tsx UI component for selecting files.
- A new /api/document-upload API route that acknowledges file uploads but does not yet process or store them. This is a placeholder for future knowledge base integration.
142 lines
5.3 KiB
Plaintext
# Rename this file to .env once you have filled in the below environment variables!

# Get your Groq API Key here -
# https://console.groq.com/keys
# You only need this environment variable set if you want to use Groq models
GROQ_API_KEY=

# Get your HuggingFace API Key here -
# https://huggingface.co/settings/tokens
# You only need this environment variable set if you want to use HuggingFace models
HuggingFace_API_KEY=

# Get your OpenAI API Key by following these instructions -
# https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key
# You only need this environment variable set if you want to use GPT models
OPENAI_API_KEY=

# Get your Anthropic API Key in your account settings -
# https://console.anthropic.com/settings/keys
# You only need this environment variable set if you want to use Claude models
ANTHROPIC_API_KEY=

# Get your OpenRouter API Key in your account settings -
# https://openrouter.ai/settings/keys
# You only need this environment variable set if you want to use OpenRouter models
OPEN_ROUTER_API_KEY=

# Get your Google Generative AI API Key by following these instructions -
# https://console.cloud.google.com/apis/credentials
# You only need this environment variable set if you want to use Google Generative AI models
GOOGLE_GENERATIVE_AI_API_KEY=

# You only need this environment variable set if you want to use Ollama models
# Do NOT use http://localhost:11434 due to IPv6 issues
# Use http://127.0.0.1:11434 instead
OLLAMA_API_BASE_URL=

# You only need this environment variable set if you want to use OpenAI-like models
OPENAI_LIKE_API_BASE_URL=

# You only need this environment variable set if you want to use Together AI models
TOGETHER_API_BASE_URL=

# You only need this environment variable set if you want to use DeepSeek models through their API
DEEPSEEK_API_KEY=

# Get your OpenAI-like API Key
OPENAI_LIKE_API_KEY=

# Get your Together API Key
TOGETHER_API_KEY=

# You only need this environment variable set if you want to use Hyperbolic models
# Get your Hyperbolic API Key at https://app.hyperbolic.xyz/settings
# baseURL="https://api.hyperbolic.xyz/v1/chat/completions"
HYPERBOLIC_API_KEY=
HYPERBOLIC_API_BASE_URL=

# Get your Mistral API Key by following these instructions -
# https://console.mistral.ai/api-keys/
# You only need this environment variable set if you want to use Mistral models
MISTRAL_API_KEY=

# Get your Cohere API Key by following these instructions -
# https://dashboard.cohere.com/api-keys
# You only need this environment variable set if you want to use Cohere models
COHERE_API_KEY=

# Get the LM Studio Base URL from the LM Studio Developer Console
# Make sure to enable CORS
# Do NOT use http://localhost:1234 due to IPv6 issues
# Example: http://127.0.0.1:1234
LMSTUDIO_API_BASE_URL=

# Get your xAI API Key here -
# https://x.ai/api
# You only need this environment variable set if you want to use xAI models
XAI_API_KEY=

# Get your Perplexity API Key here -
# https://www.perplexity.ai/settings/api
# You only need this environment variable set if you want to use Perplexity models
PERPLEXITY_API_KEY=

# Get your AWS configuration here -
# https://console.aws.amazon.com/iam/home
# The JSON should include the following keys:
# - region: The AWS region where Bedrock is available.
# - accessKeyId: Your AWS access key ID.
# - secretAccessKey: Your AWS secret access key.
# - sessionToken (optional): Temporary session token if using an IAM role or temporary credentials.
# Example JSON:
# {"region": "us-east-1", "accessKeyId": "yourAccessKeyId", "secretAccessKey": "yourSecretAccessKey", "sessionToken": "yourSessionToken"}
AWS_BEDROCK_CONFIG=
# Azure OpenAI Credentials
# Find your API Key and Endpoint in the Azure Portal: Portal > Azure OpenAI > Your Resource > Keys and Endpoint
# Deployment Name is the name you give your model deployment in Azure OpenAI Studio.
AZURE_OPENAI_API_KEY=
AZURE_OPENAI_ENDPOINT=
AZURE_OPENAI_DEPLOYMENT_NAME=
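# Example with hypothetical values (the resource name, key, and deployment name
# below are made-up placeholders -- substitute your own resource details;
# the endpoint is the base URL of your Azure OpenAI resource, not a full completions URL):
# AZURE_OPENAI_API_KEY=0123456789abcdef0123456789abcdef
# AZURE_OPENAI_ENDPOINT=https://my-resource.openai.azure.com
# AZURE_OPENAI_DEPLOYMENT_NAME=my-gpt4o-deployment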
# Vertex AI (Google Cloud) Credentials
# Project ID and Region can be found in the Google Cloud Console.
# Assumes Application Default Credentials (ADC) for authentication.
# For service account keys, you might need to set GOOGLE_APPLICATION_CREDENTIALS to the path of your JSON key file.
VERTEX_AI_PROJECT_ID=
VERTEX_AI_REGION=
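# Example with hypothetical values (project ID and key file path below are
# made-up placeholders). When using a service account key instead of ADC,
# export GOOGLE_APPLICATION_CREDENTIALS in the shell that launches the app, e.g.:
#   export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
# VERTEX_AI_PROJECT_ID=my-gcp-project
# VERTEX_AI_REGION=us-central1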
# Granite AI Credentials
# Obtain your API Key and Base URL from your Granite AI provider.
GRANITE_AI_API_KEY=
GRANITE_AI_BASE_URL=

# Include this environment variable if you want more logging for debugging locally
VITE_LOG_LEVEL=debug

# Get your GitHub Personal Access Token here -
# https://github.com/settings/tokens
# This token is used for:
# 1. Importing/cloning GitHub repositories without rate limiting
# 2. Accessing private repositories
# 3. Automatic GitHub authentication (no need to manually connect in the UI)
#
# For classic tokens, ensure it has these scopes: repo, read:org, read:user
# For fine-grained tokens, ensure it has Repository and Organization access
VITE_GITHUB_ACCESS_TOKEN=

# Specify the type of GitHub token you're using
# Can be 'classic' or 'fine-grained'
# Classic tokens are recommended for broader access
VITE_GITHUB_TOKEN_TYPE=classic

# Example Context Values for qwen2.5-coder:32b
#
# DEFAULT_NUM_CTX=32768 # Consumes 36GB of VRAM
# DEFAULT_NUM_CTX=24576 # Consumes 32GB of VRAM
# DEFAULT_NUM_CTX=12288 # Consumes 26GB of VRAM
# DEFAULT_NUM_CTX=6144 # Consumes 24GB of VRAM
DEFAULT_NUM_CTX=