diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 68215a2..304b140 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,6 +1,6 @@
 # Contributing to oTToDev
-First off, thank you for considering contributing to oTToDev! This fork aims to expand the capabilities of the original project by integrating multiple LLM providers and enhancing functionality. Every contribution helps make oTToDev a better tool for developers worldwide.
+First off, thank you for considering contributing to Bolt.diy! This fork aims to expand the capabilities of the original project by integrating multiple LLM providers and enhancing functionality. Every contribution helps make Bolt.diy a better tool for developers worldwide.

 ## 📋 Table of Contents
 - [Code of Conduct](#code-of-conduct)
diff --git a/FAQ.md b/FAQ.md
index 3e26705..c9467bb 100644
--- a/FAQ.md
+++ b/FAQ.md
@@ -1,45 +1,45 @@
 [![Bolt.new: AI-Powered Full-Stack Web Development in the Browser](./public/social_preview_index.jpg)](https://bolt.new)

-# Bolt.new Fork by Cole Medin - oTToDev
+# Bolt.new Fork by Cole Medin - Bolt.diy

 ## FAQ

-### How do I get the best results with oTToDev?
+### How do I get the best results with Bolt.diy?

 - **Be specific about your stack**: If you want to use specific frameworks or libraries (like Astro, Tailwind, ShadCN, or any other popular JavaScript framework), mention them in your initial prompt to ensure Bolt scaffolds the project accordingly.

 - **Use the enhance prompt icon**: Before sending your prompt, try clicking the 'enhance' icon to have the AI model help you refine your prompt, then edit the results before submitting.

-- **Scaffold the basics first, then add features**: Make sure the basic structure of your application is in place before diving into more advanced functionality. This helps oTToDev understand the foundation of your project and ensure everything is wired up right before building out more advanced functionality.
+- **Scaffold the basics first, then add features**: Make sure the basic structure of your application is in place before diving into more advanced functionality. This helps Bolt.diy understand the foundation of your project and ensures everything is wired up correctly before you build out more advanced features.

-- **Batch simple instructions**: Save time by combining simple instructions into one message. For example, you can ask oTToDev to change the color scheme, add mobile responsiveness, and restart the dev server, all in one go saving you time and reducing API credit consumption significantly.
+- **Batch simple instructions**: Save time by combining simple instructions into one message. For example, you can ask Bolt.diy to change the color scheme, add mobile responsiveness, and restart the dev server, all in one go, saving you time and reducing API credit consumption significantly.

-### Do you plan on merging oTToDev back into the official Bolt.new repo?
+### Do you plan on merging Bolt.diy back into the official Bolt.new repo?

 More news on this coming early next month - stay tuned!

 ### Why are there so many open issues/pull requests?

-oTToDev was started simply to showcase how to edit an open source project and to do something cool with local LLMs on my (@ColeMedin) YouTube channel! However, it quickly
+Bolt.diy was started simply to showcase how to edit an open source project and to do something cool with local LLMs on my (@ColeMedin) YouTube channel! However, it quickly
 grew into a massive community project that I am working hard to keep up with by forming a team of maintainers and getting as many people involved as I can. That effort is going well and all of our maintainers are ABSOLUTE rockstars, but it still takes time to organize everything so we can efficiently get through all the issues and PRs. But rest assured, we are working hard and even working on some partnerships behind the scenes to really help this project take off!

-### How do local LLMs fair compared to larger models like Claude 3.5 Sonnet for oTToDev/Bolt.new?
+### How do local LLMs fare compared to larger models like Claude 3.5 Sonnet for Bolt.diy/Bolt.new?

 As quickly as the gap is closing between open-source and massive closed-source models, you’re still going to get the best results with the very large models like GPT-4o, Claude 3.5 Sonnet, and DeepSeek Coder V2 236b. This is one of the big tasks we have at hand - figuring out how to prompt better, use agents, and improve the platform as a whole to make it work better for even the smaller local LLMs!

 ### I'm getting the error: "There was an error processing this request"

-If you see this error within oTToDev, that is just the application telling you there is a problem at a high level, and this could mean a number of different things. To find the actual error, please check BOTH the terminal where you started the application (with Docker or pnpm) and the developer console in the browser. For most browsers, you can access the developer console by pressing F12 or right clicking anywhere in the browser and selecting “Inspect”. Then go to the “console” tab in the top right.
+If you see this error within Bolt.diy, that is just the application telling you there is a problem at a high level, and this could mean a number of different things. To find the actual error, please check BOTH the terminal where you started the application (with Docker or pnpm) and the developer console in the browser. For most browsers, you can access the developer console by pressing F12 or right-clicking anywhere in the browser and selecting “Inspect”. Then go to the “Console” tab in the top right.

 ### I'm getting the error: "x-api-key header missing"

-We have seen this error a couple times and for some reason just restarting the Docker container has fixed it. This seems to be Ollama specific. Another thing to try is try to run oTToDev with Docker or pnpm, whichever you didn’t run first. We are still on the hunt for why this happens once and a while!
+We have seen this error a couple of times, and for some reason just restarting the Docker container has fixed it. This seems to be Ollama-specific. Another thing to try is running Bolt.diy with Docker or pnpm, whichever you didn’t run first. We are still on the hunt for why this happens once in a while!

-### I'm getting a blank preview when oTToDev runs my app!
+### I'm getting a blank preview when Bolt.diy runs my app!

-We promise you that we are constantly testing new PRs coming into oTToDev and the preview is core functionality, so the application is not broken! When you get a blank preview or don’t get a preview, this is generally because the LLM hallucinated bad code or incorrect commands. We are working on making this more transparent so it is obvious. Sometimes the error will appear in developer console too so check that as well.
+We promise you that we are constantly testing new PRs coming into Bolt.diy and the preview is core functionality, so the application is not broken!
When you get a blank preview or don’t get a preview, this is generally because the LLM hallucinated bad code or incorrect commands. We are working on making this more transparent so it is obvious. Sometimes the error will appear in developer console too so check that as well. ### How to add a LLM: diff --git a/app/commit.json b/app/commit.json index 208eeff..8d73867 100644 --- a/app/commit.json +++ b/app/commit.json @@ -1 +1 @@ -{ "commit": "154935cdeb054d2cc22dfb0c7e6cf084f02b95d0" } +{ "commit": "8f3b4cd08249d26b14397e66241b9d099d3eb205" } diff --git a/app/components/chat/BaseChat.tsx b/app/components/chat/BaseChat.tsx index 0d8933b..a77932c 100644 --- a/app/components/chat/BaseChat.tsx +++ b/app/components/chat/BaseChat.tsx @@ -17,7 +17,6 @@ import Cookies from 'js-cookie'; import * as Tooltip from '@radix-ui/react-tooltip'; import styles from './BaseChat.module.scss'; -import type { ProviderInfo } from '~/utils/types'; import { ExportChatButton } from '~/components/chat/chatExportAndImport/ExportChatButton'; import { ImportButtons } from '~/components/chat/chatExportAndImport/ImportButtons'; import { ExamplePrompts } from '~/components/chat/ExamplePrompts'; @@ -26,6 +25,7 @@ import GitCloneButton from './GitCloneButton'; import FilePreview from './FilePreview'; import { ModelSelector } from '~/components/chat/ModelSelector'; import { SpeechRecognitionButton } from '~/components/chat/SpeechRecognition'; +import type { IProviderSetting, ProviderInfo } from '~/types/model'; const TEXTAREA_MIN_HEIGHT = 76; @@ -45,6 +45,7 @@ interface BaseChatProps { setModel?: (model: string) => void; provider?: ProviderInfo; setProvider?: (provider: ProviderInfo) => void; + providerList?: ProviderInfo[]; handleStop?: () => void; sendMessage?: (event: React.UIEvent, messageInput?: string) => void; handleInputChange?: (event: React.ChangeEvent) => void; @@ -70,6 +71,7 @@ export const BaseChat = React.forwardRef( setModel, provider, setProvider, + providerList, input = '', enhancingPrompt, handleInputChange, @@ -108,48 +110,10 @@ export const BaseChat = React.forwardRef( const [recognition, setRecognition] = useState(null); const [transcript, setTranscript] = useState(''); - // Load enabled providers from cookies - const [enabledProviders, setEnabledProviders] = useState(() => { - const savedProviders = Cookies.get('providers'); - - if (savedProviders) { - try { - const parsedProviders = JSON.parse(savedProviders); - return PROVIDER_LIST.filter((p) => parsedProviders[p.name]); - } catch (error) { - console.error('Failed to parse providers from cookies:', error); - return PROVIDER_LIST; - } - } - - return PROVIDER_LIST; - }); - - // Update enabled providers when cookies change - useEffect(() => { - const updateProvidersFromCookies = () => { - const savedProviders = Cookies.get('providers'); - - if (savedProviders) { - try { - const parsedProviders = JSON.parse(savedProviders); - setEnabledProviders(PROVIDER_LIST.filter((p) => parsedProviders[p.name])); - } catch (error) { - console.error('Failed to parse providers from cookies:', error); - } - } - }; - - updateProvidersFromCookies(); - - const interval = setInterval(updateProvidersFromCookies, 1000); - - return () => clearInterval(interval); - }, [PROVIDER_LIST]); - useEffect(() => { console.log(transcript); }, [transcript]); + useEffect(() => { // Load API keys from cookies on component mount try { @@ -169,7 +133,26 @@ export const BaseChat = React.forwardRef( Cookies.remove('apiKeys'); } - initializeModelList().then((modelList) => { + let providerSettings: 
Record | undefined = undefined; + + try { + const savedProviderSettings = Cookies.get('providers'); + + if (savedProviderSettings) { + const parsedProviderSettings = JSON.parse(savedProviderSettings); + + if (typeof parsedProviderSettings === 'object' && parsedProviderSettings !== null) { + providerSettings = parsedProviderSettings; + } + } + } catch (error) { + console.error('Error loading Provider Settings from cookies:', error); + + // Clear invalid cookie data + Cookies.remove('providers'); + } + + initializeModelList(providerSettings).then((modelList) => { setModelList(modelList); }); @@ -369,10 +352,10 @@ export const BaseChat = React.forwardRef( modelList={modelList} provider={provider} setProvider={setProvider} - providerList={PROVIDER_LIST} + providerList={providerList || PROVIDER_LIST} apiKeys={apiKeys} /> - {enabledProviders.length > 0 && provider && ( + {(providerList || []).length > 0 && provider && ( ( 0 || isStreaming || uploadedFiles.length > 0} isStreaming={isStreaming} - disabled={enabledProviders.length === 0} + disabled={!providerList || providerList.length === 0} onClick={(event) => { if (isStreaming) { handleStop?.(); @@ -528,7 +511,7 @@ export const BaseChat = React.forwardRef( !isModelSettingsCollapsed, })} onClick={() => setIsModelSettingsCollapsed(!isModelSettingsCollapsed)} - disabled={enabledProviders.length === 0} + disabled={!providerList || providerList.length === 0} >
{isModelSettingsCollapsed ? {model} : } diff --git a/app/components/chat/Chat.client.tsx b/app/components/chat/Chat.client.tsx index 2818378..751ea9c 100644 --- a/app/components/chat/Chat.client.tsx +++ b/app/components/chat/Chat.client.tsx @@ -17,8 +17,9 @@ import { cubicEasingFn } from '~/utils/easings'; import { createScopedLogger, renderLogger } from '~/utils/logger'; import { BaseChat } from './BaseChat'; import Cookies from 'js-cookie'; -import type { ProviderInfo } from '~/utils/types'; import { debounce } from '~/utils/debounce'; +import { useSettings } from '~/lib/hooks/useSettings'; +import type { ProviderInfo } from '~/types/model'; const toastAnimation = cssTransition({ enter: 'animated fadeInRight', @@ -92,6 +93,8 @@ export const ChatImpl = memo( const [uploadedFiles, setUploadedFiles] = useState([]); // Move here const [imageDataList, setImageDataList] = useState([]); // Move here const files = useStore(workbenchStore.files); + const { activeProviders } = useSettings(); + const [model, setModel] = useState(() => { const savedModel = Cookies.get('selectedModel'); return savedModel || DEFAULT_MODEL; @@ -317,6 +320,7 @@ export const ChatImpl = memo( setModel={handleModelChange} provider={provider} setProvider={handleProviderChange} + providerList={activeProviders} messageRef={messageRef} scrollRef={scrollRef} handleInputChange={(e) => { diff --git a/app/components/settings/Settings.module.scss b/app/components/settings/Settings.module.scss index 6da8288..639cbbc 100644 --- a/app/components/settings/Settings.module.scss +++ b/app/components/settings/Settings.module.scss @@ -46,7 +46,7 @@ padding: 1rem; margin-bottom: 1rem; border-style: solid; - border-color: var(--bolt-elements-button-danger-backgroundHover) ; + border-color: var(--bolt-elements-button-danger-backgroundHover); border-width: thin; button { @@ -60,4 +60,4 @@ background-color: var(--bolt-elements-button-danger-backgroundHover); } } -} \ No newline at end of file +} diff --git a/app/components/settings/SettingsWindow.tsx b/app/components/settings/SettingsWindow.tsx index 1c72711..b7b368d 100644 --- a/app/components/settings/SettingsWindow.tsx +++ b/app/components/settings/SettingsWindow.tsx @@ -1,17 +1,16 @@ import * as RadixDialog from '@radix-ui/react-dialog'; import { motion } from 'framer-motion'; -import { useState } from 'react'; +import { useState, type ReactElement } from 'react'; import { classNames } from '~/utils/classNames'; import { DialogTitle, dialogVariants, dialogBackdropVariants } from '~/components/ui/Dialog'; import { IconButton } from '~/components/ui/IconButton'; -import { providersList } from '~/lib/stores/settings'; -import { db, getAll, deleteById } from '~/lib/persistence'; -import { toast } from 'react-toastify'; -import { useNavigate } from '@remix-run/react'; -import commit from '~/commit.json'; -import Cookies from 'js-cookie'; import styles from './Settings.module.scss'; -import { Switch } from '~/components/ui/Switch'; +import ChatHistoryTab from './chat-history/ChatHistoryTab'; +import ProvidersTab from './providers/ProvidersTab'; +import { useSettings } from '~/lib/hooks/useSettings'; +import FeaturesTab from './features/FeaturesTab'; +import DebugTab from './debug/DebugTab'; +import ConnectionsTab from './connections/ConnectionsTab'; interface SettingsProps { open: boolean; @@ -21,206 +20,27 @@ interface SettingsProps { type TabType = 'chat-history' | 'providers' | 'features' | 'debug' | 'connection'; // Providers that support base URL configuration -const 
URL_CONFIGURABLE_PROVIDERS = ['Ollama', 'LMStudio', 'OpenAILike']; - export const SettingsWindow = ({ open, onClose }: SettingsProps) => { - const navigate = useNavigate(); + const { debug } = useSettings(); const [activeTab, setActiveTab] = useState('chat-history'); - const [isDebugEnabled, setIsDebugEnabled] = useState(() => { - const savedDebugState = Cookies.get('isDebugEnabled'); - return savedDebugState === 'true'; - }); - const [searchTerm, setSearchTerm] = useState(''); - const [isDeleting, setIsDeleting] = useState(false); - const [githubUsername, setGithubUsername] = useState(Cookies.get('githubUsername') || ''); - const [githubToken, setGithubToken] = useState(Cookies.get('githubToken') || ''); - const [isLocalModelsEnabled, setIsLocalModelsEnabled] = useState(() => { - const savedLocalModelsState = Cookies.get('isLocalModelsEnabled'); - return savedLocalModelsState === 'true'; - }); - // Load base URLs from cookies - const [baseUrls, setBaseUrls] = useState(() => { - const savedUrls = Cookies.get('providerBaseUrls'); - - if (savedUrls) { - try { - return JSON.parse(savedUrls); - } catch (error) { - console.error('Failed to parse base URLs from cookies:', error); - return { - Ollama: 'http://localhost:11434', - LMStudio: 'http://localhost:1234', - OpenAILike: '', - }; - } - } - - return { - Ollama: 'http://localhost:11434', - LMStudio: 'http://localhost:1234', - OpenAILike: '', - }; - }); - - const handleBaseUrlChange = (provider: string, url: string) => { - setBaseUrls((prev: Record) => { - const newUrls = { ...prev, [provider]: url }; - Cookies.set('providerBaseUrls', JSON.stringify(newUrls)); - - return newUrls; - }); - }; - - const tabs: { id: TabType; label: string; icon: string }[] = [ - { id: 'chat-history', label: 'Chat History', icon: 'i-ph:book' }, - { id: 'providers', label: 'Providers', icon: 'i-ph:key' }, - { id: 'features', label: 'Features', icon: 'i-ph:star' }, - { id: 'connection', label: 'Connection', icon: 'i-ph:link' }, - ...(isDebugEnabled ? [{ id: 'debug' as TabType, label: 'Debug Tab', icon: 'i-ph:bug' }] : []), + const tabs: { id: TabType; label: string; icon: string; component?: ReactElement }[] = [ + { id: 'chat-history', label: 'Chat History', icon: 'i-ph:book', component: }, + { id: 'providers', label: 'Providers', icon: 'i-ph:key', component: }, + { id: 'features', label: 'Features', icon: 'i-ph:star', component: }, + { id: 'connection', label: 'Connection', icon: 'i-ph:link', component: }, + ...(debug + ? [ + { + id: 'debug' as TabType, + label: 'Debug Tab', + icon: 'i-ph:bug', + component: , + }, + ] + : []), ]; - // Load providers from cookies on mount - const [providers, setProviders] = useState(() => { - const savedProviders = Cookies.get('providers'); - - if (savedProviders) { - try { - const parsedProviders = JSON.parse(savedProviders); - - // Merge saved enabled states with the base provider list - return providersList.map((provider) => ({ - ...provider, - isEnabled: parsedProviders[provider.name] || false, - })); - } catch (error) { - console.error('Failed to parse providers from cookies:', error); - } - } - - return providersList; - }); - - const handleToggleProvider = (providerName: string, enabled: boolean) => { - setProviders((prevProviders) => { - const newProviders = prevProviders.map((provider) => - provider.name === providerName ? 
{ ...provider, isEnabled: enabled } : provider, - ); - - // Save to cookies - const enabledStates = newProviders.reduce( - (acc, provider) => ({ - ...acc, - [provider.name]: provider.isEnabled, - }), - {}, - ); - Cookies.set('providers', JSON.stringify(enabledStates)); - - return newProviders; - }); - }; - - const filteredProviders = providers - .filter((provider) => { - const isLocalModelProvider = ['OpenAILike', 'LMStudio', 'Ollama'].includes(provider.name); - return isLocalModelsEnabled || !isLocalModelProvider; - }) - .filter((provider) => provider.name.toLowerCase().includes(searchTerm.toLowerCase())) - .sort((a, b) => a.name.localeCompare(b.name)); - - const handleCopyToClipboard = () => { - const debugInfo = { - OS: navigator.platform, - Browser: navigator.userAgent, - ActiveFeatures: providers.filter((provider) => provider.isEnabled).map((provider) => provider.name), - BaseURLs: { - Ollama: process.env.REACT_APP_OLLAMA_URL, - OpenAI: process.env.REACT_APP_OPENAI_URL, - LMStudio: process.env.REACT_APP_LM_STUDIO_URL, - }, - Version: versionHash, - }; - navigator.clipboard.writeText(JSON.stringify(debugInfo, null, 2)).then(() => { - alert('Debug information copied to clipboard!'); - }); - }; - - const downloadAsJson = (data: any, filename: string) => { - const blob = new Blob([JSON.stringify(data, null, 2)], { type: 'application/json' }); - const url = URL.createObjectURL(blob); - const link = document.createElement('a'); - link.href = url; - link.download = filename; - document.body.appendChild(link); - link.click(); - document.body.removeChild(link); - URL.revokeObjectURL(url); - }; - - const handleDeleteAllChats = async () => { - if (!db) { - toast.error('Database is not available'); - return; - } - - try { - setIsDeleting(true); - - const allChats = await getAll(db); - - // Delete all chats one by one - await Promise.all(allChats.map((chat) => deleteById(db!, chat.id))); - - toast.success('All chats deleted successfully'); - navigate('/', { replace: true }); - } catch (error) { - toast.error('Failed to delete chats'); - console.error(error); - } finally { - setIsDeleting(false); - } - }; - - const handleExportAllChats = async () => { - if (!db) { - toast.error('Database is not available'); - return; - } - - try { - const allChats = await getAll(db); - const exportData = { - chats: allChats, - exportDate: new Date().toISOString(), - }; - - downloadAsJson(exportData, `all-chats-${new Date().toISOString()}.json`); - toast.success('Chats exported successfully'); - } catch (error) { - toast.error('Failed to export chats'); - console.error(error); - } - }; - - const versionHash = commit.commit; // Get the version hash from commit.json - - const handleSaveConnection = () => { - Cookies.set('githubUsername', githubUsername); - Cookies.set('githubToken', githubToken); - toast.success('GitHub credentials saved successfully!'); - }; - - const handleToggleDebug = (enabled: boolean) => { - setIsDebugEnabled(enabled); - Cookies.set('isDebugEnabled', String(enabled)); - }; - - const handleToggleLocalModels = (enabled: boolean) => { - setIsLocalModelsEnabled(enabled); - Cookies.set('isLocalModelsEnabled', String(enabled)); - }; - return ( @@ -272,7 +92,7 @@ export const SettingsWindow = ({ open, onClose }: SettingsProps) => { GitHub {
-
- {activeTab === 'chat-history' && ( -
-

Chat History

- - -
-

Danger Area

-

This action cannot be undone!

- -
-
- )} - {activeTab === 'providers' && ( -
-
- setSearchTerm(e.target.value)} - className="w-full bg-white dark:bg-bolt-elements-background-depth-4 relative px-2 py-1.5 rounded-md focus:outline-none placeholder-bolt-elements-textTertiary text-bolt-elements-textPrimary dark:text-bolt-elements-textPrimary border border-bolt-elements-borderColor" - /> -
- {filteredProviders.map((provider) => ( -
-
- {provider.name} - handleToggleProvider(provider.name, enabled)} - /> -
- {/* Base URL input for configurable providers */} - {URL_CONFIGURABLE_PROVIDERS.includes(provider.name) && provider.isEnabled && ( -
- - handleBaseUrlChange(provider.name, e.target.value)} - placeholder={`Enter ${provider.name} base URL`} - className="w-full bg-white dark:bg-bolt-elements-background-depth-4 relative px-2 py-1.5 rounded-md focus:outline-none placeholder-bolt-elements-textTertiary text-bolt-elements-textPrimary dark:text-bolt-elements-textPrimary border border-bolt-elements-borderColor" - /> -
- )} -
- ))} -
- )} - {activeTab === 'features' && ( -
-
-

Optional Features

-
- Debug Info - -
-
- -
-

- Experimental Features -

-

- Disclaimer: Experimental features may be unstable and are subject to change. -

-
- Enable Local Models - -
-
-
- )} - {activeTab === 'debug' && isDebugEnabled && ( -
-

Debug Tab

- - -

System Information

-

OS: {navigator.platform}

-

Browser: {navigator.userAgent}

- -

Active Features

-
    - {providers - .filter((provider) => provider.isEnabled) - .map((provider) => ( -
  • - {provider.name} -
  • - ))} -
- -

Base URLs

-
    -
  • Ollama: {process.env.REACT_APP_OLLAMA_URL}
  • -
  • OpenAI: {process.env.REACT_APP_OPENAI_URL}
  • -
  • - LM Studio: {process.env.REACT_APP_LM_STUDIO_URL} -
  • -
- -

Version Information

-

Version Hash: {versionHash}

-
- )} - {activeTab === 'connection' && ( -
-

GitHub Connection

-
-
- - setGithubUsername(e.target.value)} - className="w-full bg-white dark:bg-bolt-elements-background-depth-4 relative px-2 py-1.5 rounded-md focus:outline-none placeholder-bolt-elements-textTertiary text-bolt-elements-textPrimary dark:text-bolt-elements-textPrimary border border-bolt-elements-borderColor" - /> -
-
- - setGithubToken(e.target.value)} - className="w-full bg-white dark:bg-bolt-elements-background-depth-4 relative px-2 py-1.5 rounded-md focus:outline-none placeholder-bolt-elements-textTertiary text-bolt-elements-textPrimary dark:text-bolt-elements-textPrimary border border-bolt-elements-borderColor" - /> -
-
-
- -
-
- )} -
+
{tabs.find((tab) => tab.id === activeTab)?.component}
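For context on the refactor above: the per-component cookie handling removed from `SettingsWindow` is replaced by a single `providers` cookie, written by the new `useSettings` hook and read back in `BaseChat` and the API routes. A minimal sketch of the cookie's shape, assuming the `IProviderSetting` type this PR adds in `app/types/model.ts`; the provider names shown are illustrative, and the localhost URLs mirror the defaults hard-coded in the removed `SettingsWindow` code:

```ts
import type { IProviderSetting } from '~/types/model';

// Illustrative value of the `providers` cookie: a JSON map from provider name
// to its per-provider settings ({ enabled?, baseUrl? }).
export const exampleProvidersCookie: Record<string, IProviderSetting> = {
  Ollama: { enabled: true, baseUrl: 'http://localhost:11434' },
  LMStudio: { enabled: false, baseUrl: 'http://localhost:1234' },
  Anthropic: { enabled: true },
};

// Server routes recover the same map from the Cookie header, e.g. api.chat.ts:
// JSON.parse(parseCookies(cookieHeader || '').providers || '{}')
```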
diff --git a/app/components/settings/chat-history/ChatHistoryTab.tsx b/app/components/settings/chat-history/ChatHistoryTab.tsx new file mode 100644 index 0000000..e96f0d8 --- /dev/null +++ b/app/components/settings/chat-history/ChatHistoryTab.tsx @@ -0,0 +1,105 @@ +import { useNavigate } from '@remix-run/react'; +import React, { useState } from 'react'; +import { toast } from 'react-toastify'; +import { db, deleteById, getAll } from '~/lib/persistence'; +import { classNames } from '~/utils/classNames'; +import styles from '~/components/settings/Settings.module.scss'; + +export default function ChatHistoryTab() { + const navigate = useNavigate(); + const [isDeleting, setIsDeleting] = useState(false); + const downloadAsJson = (data: any, filename: string) => { + const blob = new Blob([JSON.stringify(data, null, 2)], { type: 'application/json' }); + const url = URL.createObjectURL(blob); + const link = document.createElement('a'); + link.href = url; + link.download = filename; + document.body.appendChild(link); + link.click(); + document.body.removeChild(link); + URL.revokeObjectURL(url); + }; + + const handleDeleteAllChats = async () => { + if (!db) { + toast.error('Database is not available'); + return; + } + + try { + setIsDeleting(true); + + const allChats = await getAll(db); + + // Delete all chats one by one + await Promise.all(allChats.map((chat) => deleteById(db!, chat.id))); + + toast.success('All chats deleted successfully'); + navigate('/', { replace: true }); + } catch (error) { + toast.error('Failed to delete chats'); + console.error(error); + } finally { + setIsDeleting(false); + } + }; + + const handleExportAllChats = async () => { + if (!db) { + toast.error('Database is not available'); + return; + } + + try { + const allChats = await getAll(db); + const exportData = { + chats: allChats, + exportDate: new Date().toISOString(), + }; + + downloadAsJson(exportData, `all-chats-${new Date().toISOString()}.json`); + toast.success('Chats exported successfully'); + } catch (error) { + toast.error('Failed to export chats'); + console.error(error); + } + }; + + return ( + <> +
+

Chat History

+ + +
+

Danger Area

+

This action cannot be undone!

+ +
+
+ + ); +} diff --git a/app/components/settings/connections/ConnectionsTab.tsx b/app/components/settings/connections/ConnectionsTab.tsx new file mode 100644 index 0000000..32d0fa0 --- /dev/null +++ b/app/components/settings/connections/ConnectionsTab.tsx @@ -0,0 +1,48 @@ +import React, { useState } from 'react'; +import { toast } from 'react-toastify'; +import Cookies from 'js-cookie'; + +export default function ConnectionsTab() { + const [githubUsername, setGithubUsername] = useState(Cookies.get('githubUsername') || ''); + const [githubToken, setGithubToken] = useState(Cookies.get('githubToken') || ''); + + const handleSaveConnection = () => { + Cookies.set('githubUsername', githubUsername); + Cookies.set('githubToken', githubToken); + toast.success('GitHub credentials saved successfully!'); + }; + + return ( +
+

GitHub Connection

+
+
+ + setGithubUsername(e.target.value)} + className="w-full bg-white dark:bg-bolt-elements-background-depth-4 relative px-2 py-1.5 rounded-md focus:outline-none placeholder-bolt-elements-textTertiary text-bolt-elements-textPrimary dark:text-bolt-elements-textPrimary border border-bolt-elements-borderColor" + /> +
+
+ + setGithubToken(e.target.value)} + className="w-full bg-white dark:bg-bolt-elements-background-depth-4 relative px-2 py-1.5 rounded-md focus:outline-none placeholder-bolt-elements-textTertiary text-bolt-elements-textPrimary dark:text-bolt-elements-textPrimary border border-bolt-elements-borderColor" + /> +
+
+
+ +
+
+ ); +} diff --git a/app/components/settings/debug/DebugTab.tsx b/app/components/settings/debug/DebugTab.tsx new file mode 100644 index 0000000..7a84ec1 --- /dev/null +++ b/app/components/settings/debug/DebugTab.tsx @@ -0,0 +1,69 @@ +import React, { useCallback, useEffect, useState } from 'react'; +import { useSettings } from '~/lib/hooks/useSettings'; +import commit from '~/commit.json'; + +const versionHash = commit.commit; // Get the version hash from commit.json + +export default function DebugTab() { + const { providers } = useSettings(); + const [activeProviders, setActiveProviders] = useState([]); + useEffect(() => { + setActiveProviders( + Object.entries(providers) + .filter(([_key, provider]) => provider.settings.enabled) + .map(([_key, provider]) => provider.name), + ); + }, [providers]); + + const handleCopyToClipboard = useCallback(() => { + const debugInfo = { + OS: navigator.platform, + Browser: navigator.userAgent, + ActiveFeatures: activeProviders, + BaseURLs: { + Ollama: process.env.REACT_APP_OLLAMA_URL, + OpenAI: process.env.REACT_APP_OPENAI_URL, + LMStudio: process.env.REACT_APP_LM_STUDIO_URL, + }, + Version: versionHash, + }; + navigator.clipboard.writeText(JSON.stringify(debugInfo, null, 2)).then(() => { + alert('Debug information copied to clipboard!'); + }); + }, [providers]); + + return ( +
+

Debug Tab

+ + +

System Information

+

OS: {navigator.platform}

+

Browser: {navigator.userAgent}

+ +

Active Features

+
    + {activeProviders.map((name) => ( +
  • + {name} +
  • + ))} +
+ +

Base URLs

+
    +
  • Ollama: {process.env.REACT_APP_OLLAMA_URL}
  • +
  • OpenAI: {process.env.REACT_APP_OPENAI_URL}
  • +
  • LM Studio: {process.env.REACT_APP_LM_STUDIO_URL}
  • +
+ +

Version Information

+

Version Hash: {versionHash}

+
+ ); +} diff --git a/app/components/settings/features/FeaturesTab.tsx b/app/components/settings/features/FeaturesTab.tsx new file mode 100644 index 0000000..0b4fa75 --- /dev/null +++ b/app/components/settings/features/FeaturesTab.tsx @@ -0,0 +1,29 @@ +import React from 'react'; +import { Switch } from '~/components/ui/Switch'; +import { useSettings } from '~/lib/hooks/useSettings'; + +export default function FeaturesTab() { + const { debug, enableDebugMode, isLocalModel, enableLocalModels } = useSettings(); + return ( +
+
+

Optional Features

+
+ Debug Info + +
+
+ +
+

Experimental Features

+

+ Disclaimer: Experimental features may be unstable and are subject to change. +

+
+ Enable Local Models + +
+
+
+ ); +} diff --git a/app/components/settings/providers/ProvidersTab.tsx b/app/components/settings/providers/ProvidersTab.tsx new file mode 100644 index 0000000..309afb8 --- /dev/null +++ b/app/components/settings/providers/ProvidersTab.tsx @@ -0,0 +1,78 @@ +import React, { useEffect, useState } from 'react'; +import { Switch } from '~/components/ui/Switch'; +import { useSettings } from '~/lib/hooks/useSettings'; +import { LOCAL_PROVIDERS, URL_CONFIGURABLE_PROVIDERS } from '~/lib/stores/settings'; +import type { IProviderConfig } from '~/types/model'; + +export default function ProvidersTab() { + const { providers, updateProviderSettings, isLocalModel } = useSettings(); + const [filteredProviders, setFilteredProviders] = useState([]); + + // Load base URLs from cookies + const [searchTerm, setSearchTerm] = useState(''); + + useEffect(() => { + let newFilteredProviders: IProviderConfig[] = Object.entries(providers).map(([key, value]) => ({ + ...value, + name: key, + })); + + if (searchTerm && searchTerm.length > 0) { + newFilteredProviders = newFilteredProviders.filter((provider) => + provider.name.toLowerCase().includes(searchTerm.toLowerCase()), + ); + } + + if (!isLocalModel) { + newFilteredProviders = newFilteredProviders.filter((provider) => !LOCAL_PROVIDERS.includes(provider.name)); + } + + newFilteredProviders.sort((a, b) => a.name.localeCompare(b.name)); + + setFilteredProviders(newFilteredProviders); + }, [providers, searchTerm, isLocalModel]); + + return ( +
+
+ setSearchTerm(e.target.value)} + className="w-full bg-white dark:bg-bolt-elements-background-depth-4 relative px-2 py-1.5 rounded-md focus:outline-none placeholder-bolt-elements-textTertiary text-bolt-elements-textPrimary dark:text-bolt-elements-textPrimary border border-bolt-elements-borderColor" + /> +
+ {filteredProviders.map((provider) => ( +
+
+ {provider.name} + updateProviderSettings(provider.name, { ...provider.settings, enabled })} + /> +
+ {/* Base URL input for configurable providers */} + {URL_CONFIGURABLE_PROVIDERS.includes(provider.name) && provider.settings.enabled && ( +
+ + + updateProviderSettings(provider.name, { ...provider.settings, baseUrl: e.target.value }) + } + placeholder={`Enter ${provider.name} base URL`} + className="w-full bg-white dark:bg-bolt-elements-background-depth-4 relative px-2 py-1.5 rounded-md focus:outline-none placeholder-bolt-elements-textTertiary text-bolt-elements-textPrimary dark:text-bolt-elements-textPrimary border border-bolt-elements-borderColor" + /> +
+ )} +
+ ))} +
+ ); +} diff --git a/app/lib/.server/llm/model.ts b/app/lib/.server/llm/model.ts index ecbcd64..2588c2b 100644 --- a/app/lib/.server/llm/model.ts +++ b/app/lib/.server/llm/model.ts @@ -11,6 +11,7 @@ import { createOpenRouter } from '@openrouter/ai-sdk-provider'; import { createMistral } from '@ai-sdk/mistral'; import { createCohere } from '@ai-sdk/cohere'; import type { LanguageModelV1 } from 'ai'; +import type { IProviderSetting } from '~/types/model'; export const DEFAULT_NUM_CTX = process.env.DEFAULT_NUM_CTX ? parseInt(process.env.DEFAULT_NUM_CTX, 10) : 32768; @@ -127,14 +128,20 @@ export function getXAIModel(apiKey: OptionalApiKey, model: string) { return openai(model); } -export function getModel(provider: string, model: string, env: Env, apiKeys?: Record) { +export function getModel( + provider: string, + model: string, + env: Env, + apiKeys?: Record, + providerSettings?: Record, +) { /* * let apiKey; // Declare first * let baseURL; */ const apiKey = getAPIKey(env, provider, apiKeys); // Then assign - const baseURL = getBaseURL(env, provider); + const baseURL = providerSettings?.[provider].baseUrl || getBaseURL(env, provider); switch (provider) { case 'Anthropic': diff --git a/app/lib/.server/llm/stream-text.ts b/app/lib/.server/llm/stream-text.ts index 4aca822..b178cc4 100644 --- a/app/lib/.server/llm/stream-text.ts +++ b/app/lib/.server/llm/stream-text.ts @@ -4,6 +4,7 @@ import { MAX_TOKENS } from './constants'; import { getSystemPrompt } from './prompts'; import { DEFAULT_MODEL, DEFAULT_PROVIDER, getModelList, MODEL_REGEX, PROVIDER_REGEX } from '~/utils/constants'; import ignore from 'ignore'; +import type { IProviderSetting } from '~/types/model'; interface ToolResult { toolCallId: string; @@ -131,16 +132,18 @@ function extractPropertiesFromMessage(message: Message): { model: string; provid return { model, provider, content: cleanedContent }; } -export async function streamText( - messages: Messages, - env: Env, - options?: StreamingOptions, - apiKeys?: Record, - files?: FileMap, -) { +export async function streamText(props: { + messages: Messages; + env: Env; + options?: StreamingOptions; + apiKeys?: Record; + files?: FileMap; + providerSettings?: Record; +}) { + const { messages, env, options, apiKeys, files, providerSettings } = props; let currentModel = DEFAULT_MODEL; let currentProvider = DEFAULT_PROVIDER.name; - const MODEL_LIST = await getModelList(apiKeys || {}); + const MODEL_LIST = await getModelList(apiKeys || {}, providerSettings); const processedMessages = messages.map((message) => { if (message.role === 'user') { const { model, provider, content } = extractPropertiesFromMessage(message); @@ -175,7 +178,7 @@ export async function streamText( } return _streamText({ - model: getModel(currentProvider, currentModel, env, apiKeys) as any, + model: getModel(currentProvider, currentModel, env, apiKeys, providerSettings) as any, system: systemPrompt, maxTokens: dynamicMaxTokens, messages: convertToCoreMessages(processedMessages as any), diff --git a/app/lib/hooks/useSettings.tsx b/app/lib/hooks/useSettings.tsx new file mode 100644 index 0000000..531e481 --- /dev/null +++ b/app/lib/hooks/useSettings.tsx @@ -0,0 +1,97 @@ +import { useStore } from '@nanostores/react'; +import { isDebugMode, isLocalModelsEnabled, LOCAL_PROVIDERS, providersStore } from '~/lib/stores/settings'; +import { useCallback, useEffect, useState } from 'react'; +import Cookies from 'js-cookie'; +import type { IProviderSetting, ProviderInfo } from '~/types/model'; + +export function useSettings() { + const 
providers = useStore(providersStore); + const debug = useStore(isDebugMode); + const isLocalModel = useStore(isLocalModelsEnabled); + const [activeProviders, setActiveProviders] = useState([]); + + // reading values from cookies on mount + useEffect(() => { + const savedProviders = Cookies.get('providers'); + + if (savedProviders) { + try { + const parsedProviders: Record = JSON.parse(savedProviders); + Object.keys(parsedProviders).forEach((provider) => { + const currentProvider = providers[provider]; + providersStore.setKey(provider, { + ...currentProvider, + settings: { + ...parsedProviders[provider], + enabled: parsedProviders[provider].enabled || true, + }, + }); + }); + } catch (error) { + console.error('Failed to parse providers from cookies:', error); + } + } + + // load debug mode from cookies + const savedDebugMode = Cookies.get('isDebugEnabled'); + + if (savedDebugMode) { + isDebugMode.set(savedDebugMode === 'true'); + } + + // load local models from cookies + const savedLocalModels = Cookies.get('isLocalModelsEnabled'); + + if (savedLocalModels) { + isLocalModelsEnabled.set(savedLocalModels === 'true'); + } + }, []); + + // writing values to cookies on change + useEffect(() => { + const providers = providersStore.get(); + const providerSetting: Record = {}; + Object.keys(providers).forEach((provider) => { + providerSetting[provider] = providers[provider].settings; + }); + Cookies.set('providers', JSON.stringify(providerSetting)); + }, [providers]); + + useEffect(() => { + let active = Object.entries(providers) + .filter(([_key, provider]) => provider.settings.enabled) + .map(([_k, p]) => p); + + if (!isLocalModel) { + active = active.filter((p) => !LOCAL_PROVIDERS.includes(p.name)); + } + + setActiveProviders(active); + }, [providers, isLocalModel]); + + // helper function to update settings + const updateProviderSettings = useCallback((provider: string, config: IProviderSetting) => { + const settings = providers[provider].settings; + providersStore.setKey(provider, { ...providers[provider], settings: { ...settings, ...config } }); + }, []); + + const enableDebugMode = useCallback((enabled: boolean) => { + isDebugMode.set(enabled); + Cookies.set('isDebugEnabled', String(enabled)); + }, []); + + const enableLocalModels = useCallback((enabled: boolean) => { + isLocalModelsEnabled.set(enabled); + Cookies.set('isLocalModelsEnabled', String(enabled)); + }, []); + + return { + providers, + activeProviders, + updateProviderSettings, + debug, + enableDebugMode, + isLocalModel, + enableLocalModels, + }; +} diff --git a/app/lib/stores/settings.ts b/app/lib/stores/settings.ts index 7106cfb..31564e6 100644 --- a/app/lib/stores/settings.ts +++ b/app/lib/stores/settings.ts @@ -1,5 +1,7 @@ -import { map } from 'nanostores'; +import { atom, map } from 'nanostores'; import { workbenchStore } from './workbench'; +import { PROVIDER_LIST } from '~/utils/constants'; +import type { IProviderConfig } from '~/types/model'; export interface Shortcut { key: string; @@ -15,32 +17,10 @@ export interface Shortcuts { toggleTerminal: Shortcut; } -export interface Provider { - name: string; - isEnabled: boolean; -} +export const URL_CONFIGURABLE_PROVIDERS = ['Ollama', 'LMStudio', 'OpenAILike']; +export const LOCAL_PROVIDERS = ['OpenAILike', 'LMStudio', 'Ollama']; -export interface Settings { - shortcuts: Shortcuts; - providers: Provider[]; -} - -export const providersList: Provider[] = [ - { name: 'Groq', isEnabled: false }, - { name: 'HuggingFace', isEnabled: false }, - { name: 'OpenAI', isEnabled: false }, - 
{ name: 'Anthropic', isEnabled: false }, - { name: 'OpenRouter', isEnabled: false }, - { name: 'Google', isEnabled: false }, - { name: 'Ollama', isEnabled: false }, - { name: 'OpenAILike', isEnabled: false }, - { name: 'Together', isEnabled: false }, - { name: 'Deepseek', isEnabled: false }, - { name: 'Mistral', isEnabled: false }, - { name: 'Cohere', isEnabled: false }, - { name: 'LMStudio', isEnabled: false }, - { name: 'xAI', isEnabled: false }, -]; +export type ProviderSetting = Record; export const shortcutsStore = map({ toggleTerminal: { @@ -50,14 +30,17 @@ export const shortcutsStore = map({ }, }); -export const settingsStore = map({ - shortcuts: shortcutsStore.get(), - providers: providersList, +const initialProviderSettings: ProviderSetting = {}; +PROVIDER_LIST.forEach((provider) => { + initialProviderSettings[provider.name] = { + ...provider, + settings: { + enabled: false, + }, + }; }); +export const providersStore = map(initialProviderSettings); -shortcutsStore.subscribe((shortcuts) => { - settingsStore.set({ - ...settingsStore.get(), - shortcuts, - }); -}); +export const isDebugMode = atom(false); + +export const isLocalModelsEnabled = atom(true); diff --git a/app/routes/api.chat.ts b/app/routes/api.chat.ts index e1b2372..87ca5c7 100644 --- a/app/routes/api.chat.ts +++ b/app/routes/api.chat.ts @@ -3,6 +3,7 @@ import { MAX_RESPONSE_SEGMENTS, MAX_TOKENS } from '~/lib/.server/llm/constants'; import { CONTINUE_PROMPT } from '~/lib/.server/llm/prompts'; import { streamText, type Messages, type StreamingOptions } from '~/lib/.server/llm/stream-text'; import SwitchableStream from '~/lib/.server/llm/switchable-stream'; +import type { IProviderSetting } from '~/types/model'; export async function action(args: ActionFunctionArgs) { return chatAction(args); @@ -38,6 +39,9 @@ async function chatAction({ context, request }: ActionFunctionArgs) { // Parse the cookie's value (returns an object or null if no cookie exists) const apiKeys = JSON.parse(parseCookies(cookieHeader || '').apiKeys || '{}'); + const providerSettings: Record = JSON.parse( + parseCookies(cookieHeader || '').providers || '{}', + ); const stream = new SwitchableStream(); @@ -60,13 +64,27 @@ async function chatAction({ context, request }: ActionFunctionArgs) { messages.push({ role: 'assistant', content }); messages.push({ role: 'user', content: CONTINUE_PROMPT }); - const result = await streamText(messages, context.cloudflare.env, options, apiKeys, files); + const result = await streamText({ + messages, + env: context.cloudflare.env, + options, + apiKeys, + files, + providerSettings, + }); return stream.switchSource(result.toAIStream()); }, }; - const result = await streamText(messages, context.cloudflare.env, options, apiKeys, files); + const result = await streamText({ + messages, + env: context.cloudflare.env, + options, + apiKeys, + files, + providerSettings, + }); stream.switchSource(result.toAIStream()); diff --git a/app/routes/api.enhancer.ts b/app/routes/api.enhancer.ts index 0738ae4..cc51116 100644 --- a/app/routes/api.enhancer.ts +++ b/app/routes/api.enhancer.ts @@ -2,7 +2,7 @@ import { type ActionFunctionArgs } from '@remix-run/cloudflare'; import { StreamingTextResponse, parseStreamPart } from 'ai'; import { streamText } from '~/lib/.server/llm/stream-text'; import { stripIndents } from '~/utils/stripIndent'; -import type { ProviderInfo } from '~/types/model'; +import type { IProviderSetting, ProviderInfo } from '~/types/model'; const encoder = new TextEncoder(); const decoder = new TextDecoder(); @@ -11,8 
+11,28 @@ export async function action(args: ActionFunctionArgs) { return enhancerAction(args); } +function parseCookies(cookieHeader: string) { + const cookies: any = {}; + + // Split the cookie string by semicolons and spaces + const items = cookieHeader.split(';').map((cookie) => cookie.trim()); + + items.forEach((item) => { + const [name, ...rest] = item.split('='); + + if (name && rest) { + // Decode the name and value, and join value parts in case it contains '=' + const decodedName = decodeURIComponent(name.trim()); + const decodedValue = decodeURIComponent(rest.join('=').trim()); + cookies[decodedName] = decodedValue; + } + }); + + return cookies; +} + async function enhancerAction({ context, request }: ActionFunctionArgs) { - const { message, model, provider, apiKeys } = await request.json<{ + const { message, model, provider } = await request.json<{ message: string; model: string; provider: ProviderInfo; @@ -36,9 +56,17 @@ async function enhancerAction({ context, request }: ActionFunctionArgs) { }); } + const cookieHeader = request.headers.get('Cookie'); + + // Parse the cookie's value (returns an object or null if no cookie exists) + const apiKeys = JSON.parse(parseCookies(cookieHeader || '').apiKeys || '{}'); + const providerSettings: Record = JSON.parse( + parseCookies(cookieHeader || '').providers || '{}', + ); + try { - const result = await streamText( - [ + const result = await streamText({ + messages: [ { role: 'user', content: @@ -73,10 +101,10 @@ async function enhancerAction({ context, request }: ActionFunctionArgs) { `, }, ], - context.cloudflare.env, - undefined, + env: context.cloudflare.env, apiKeys, - ); + providerSettings, + }); const transformStream = new TransformStream({ transform(chunk, controller) { diff --git a/app/types/model.ts b/app/types/model.ts index c6c58d7..3bfbfde 100644 --- a/app/types/model.ts +++ b/app/types/model.ts @@ -3,9 +3,17 @@ import type { ModelInfo } from '~/utils/types'; export type ProviderInfo = { staticModels: ModelInfo[]; name: string; - getDynamicModels?: (apiKeys?: Record) => Promise; + getDynamicModels?: (apiKeys?: Record, providerSettings?: IProviderSetting) => Promise; getApiKeyLink?: string; labelForGetApiKey?: string; icon?: string; - isEnabled?: boolean; +}; + +export interface IProviderSetting { + enabled?: boolean; + baseUrl?: string; +} + +export type IProviderConfig = ProviderInfo & { + settings: IProviderSetting; }; diff --git a/app/utils/constants.ts b/app/utils/constants.ts index 0cd0808..240b4b9 100644 --- a/app/utils/constants.ts +++ b/app/utils/constants.ts @@ -1,6 +1,6 @@ import Cookies from 'js-cookie'; import type { ModelInfo, OllamaApiResponse, OllamaModel } from './types'; -import type { ProviderInfo } from '~/types/model'; +import type { ProviderInfo, IProviderSetting } from '~/types/model'; import { createScopedLogger } from './logger'; export const WORK_DIR_NAME = 'project'; @@ -126,6 +126,7 @@ const PROVIDER_LIST: ProviderInfo[] = [ name: 'Google', staticModels: [ { name: 'gemini-1.5-flash-latest', label: 'Gemini 1.5 Flash', provider: 'Google', maxTokenAllowed: 8192 }, + { name: 'gemini-2.0-flash-exp', label: 'Gemini 2.0 Flash', provider: 'Google', maxTokenAllowed: 8192 }, { name: 'gemini-1.5-flash-002', label: 'Gemini 1.5 Flash-002', provider: 'Google', maxTokenAllowed: 8192 }, { name: 'gemini-1.5-flash-8b', label: 'Gemini 1.5 Flash-8b', provider: 'Google', maxTokenAllowed: 8192 }, { name: 'gemini-1.5-pro-latest', label: 'Gemini 1.5 Pro', provider: 'Google', maxTokenAllowed: 8192 }, @@ -298,13 +299,16 @@ 
const staticModels: ModelInfo[] = PROVIDER_LIST.map((p) => p.staticModels).flat( export let MODEL_LIST: ModelInfo[] = [...staticModels]; -export async function getModelList(apiKeys: Record) { +export async function getModelList( + apiKeys: Record, + providerSettings?: Record, +) { MODEL_LIST = [ ...( await Promise.all( PROVIDER_LIST.filter( (p): p is ProviderInfo & { getDynamicModels: () => Promise } => !!p.getDynamicModels, - ).map((p) => p.getDynamicModels(apiKeys)), + ).map((p) => p.getDynamicModels(apiKeys, providerSettings?.[p.name])), ) ).flat(), ...staticModels, @@ -312,9 +316,9 @@ export async function getModelList(apiKeys: Record) { return MODEL_LIST; } -async function getTogetherModels(apiKeys?: Record): Promise { +async function getTogetherModels(apiKeys?: Record, settings?: IProviderSetting): Promise { try { - const baseUrl = import.meta.env.TOGETHER_API_BASE_URL || ''; + const baseUrl = settings?.baseUrl || import.meta.env.TOGETHER_API_BASE_URL || ''; const provider = 'Together'; if (!baseUrl) { @@ -353,8 +357,8 @@ async function getTogetherModels(apiKeys?: Record): Promise { - const defaultBaseUrl = import.meta.env.OLLAMA_API_BASE_URL || 'http://localhost:11434'; +const getOllamaBaseUrl = (settings?: IProviderSetting) => { + const defaultBaseUrl = settings?.baseUrl || import.meta.env.OLLAMA_API_BASE_URL || 'http://localhost:11434'; // Check if we're in the browser if (typeof window !== 'undefined') { @@ -368,7 +372,7 @@ const getOllamaBaseUrl = () => { return isDocker ? defaultBaseUrl.replace('localhost', 'host.docker.internal') : defaultBaseUrl; }; -async function getOllamaModels(): Promise { +async function getOllamaModels(apiKeys?: Record, settings?: IProviderSetting): Promise { /* * if (typeof window === 'undefined') { * return []; @@ -376,7 +380,7 @@ async function getOllamaModels(): Promise { */ try { - const baseUrl = getOllamaBaseUrl(); + const baseUrl = getOllamaBaseUrl(settings); const response = await fetch(`${baseUrl}/api/tags`); const data = (await response.json()) as OllamaApiResponse; @@ -392,20 +396,21 @@ async function getOllamaModels(): Promise { } } -async function getOpenAILikeModels(): Promise { +async function getOpenAILikeModels( + apiKeys?: Record, + settings?: IProviderSetting, +): Promise { try { - const baseUrl = import.meta.env.OPENAI_LIKE_API_BASE_URL || ''; + const baseUrl = settings?.baseUrl || import.meta.env.OPENAI_LIKE_API_BASE_URL || ''; if (!baseUrl) { return []; } - let apiKey = import.meta.env.OPENAI_LIKE_API_KEY ?? 
''; + let apiKey = ''; - const apikeys = JSON.parse(Cookies.get('apiKeys') || '{}'); - - if (apikeys && apikeys.OpenAILike) { - apiKey = apikeys.OpenAILike; + if (apiKeys && apiKeys.OpenAILike) { + apiKey = apiKeys.OpenAILike; } const response = await fetch(`${baseUrl}/models`, { @@ -459,13 +464,13 @@ async function getOpenRouterModels(): Promise { })); } -async function getLMStudioModels(): Promise { +async function getLMStudioModels(_apiKeys?: Record, settings?: IProviderSetting): Promise { if (typeof window === 'undefined') { return []; } try { - const baseUrl = import.meta.env.LMSTUDIO_API_BASE_URL || 'http://localhost:1234'; + const baseUrl = settings?.baseUrl || import.meta.env.LMSTUDIO_API_BASE_URL || 'http://localhost:1234'; const response = await fetch(`${baseUrl}/v1/models`); const data = (await response.json()) as any; @@ -480,7 +485,7 @@ async function getLMStudioModels(): Promise { } } -async function initializeModelList(): Promise { +async function initializeModelList(providerSettings?: Record): Promise { let apiKeys: Record = {}; try { @@ -501,7 +506,7 @@ async function initializeModelList(): Promise { await Promise.all( PROVIDER_LIST.filter( (p): p is ProviderInfo & { getDynamicModels: () => Promise } => !!p.getDynamicModels, - ).map((p) => p.getDynamicModels(apiKeys)), + ).map((p) => p.getDynamicModels(apiKeys, providerSettings?.[p.name])), ) ).flat(), ...staticModels, diff --git a/app/utils/types.ts b/app/utils/types.ts index 8742891..1fa253f 100644 --- a/app/utils/types.ts +++ b/app/utils/types.ts @@ -26,12 +26,3 @@ export interface ModelInfo { provider: string; maxTokenAllowed: number; } - -export interface ProviderInfo { - staticModels: ModelInfo[]; - name: string; - getDynamicModels?: () => Promise; - getApiKeyLink?: string; - labelForGetApiKey?: string; - icon?: string; -} diff --git a/docs/docs/CONTRIBUTING.md b/docs/docs/CONTRIBUTING.md index e1edd87..b1232f9 100644 --- a/docs/docs/CONTRIBUTING.md +++ b/docs/docs/CONTRIBUTING.md @@ -4,7 +4,7 @@ The `DEFAULT_NUM_CTX` environment variable can be used to limit the maximum number of context values used by the qwen2.5-coder model. For example, to limit the context to 24576 values (which uses 32GB of VRAM), set `DEFAULT_NUM_CTX=24576` in your `.env.local` file. -First off, thank you for considering contributing to Bolt.new! This fork aims to expand the capabilities of the original project by integrating multiple LLM providers and enhancing functionality. Every contribution helps make Bolt.new a better tool for developers worldwide. +First off, thank you for considering contributing to Bolt.diy! This fork aims to expand the capabilities of the original project by integrating multiple LLM providers and enhancing functionality. Every contribution helps make Bolt.diy a better tool for developers worldwide. ## 📋 Table of Contents - [Code of Conduct](#code-of-conduct) @@ -62,7 +62,7 @@ We're looking for dedicated contributors to help maintain and grow this project. ### 🔄 Initial Setup 1. Clone the repository: ```bash -git clone https://github.com/coleam00/bolt.new-any-llm.git +git clone https://github.com/stackblitz-labs/bolt.diy.git ``` 2. Install dependencies: diff --git a/docs/docs/FAQ.md b/docs/docs/FAQ.md index 8e57502..0c339c6 100644 --- a/docs/docs/FAQ.md +++ b/docs/docs/FAQ.md @@ -1,15 +1,15 @@ # Frequently Asked Questions (FAQ) -## How do I get the best results with oTToDev? +## How do I get the best results with Bolt.diy? 
- **Be specific about your stack**: - Mention the frameworks or libraries you want to use (e.g., Astro, Tailwind, ShadCN) in your initial prompt. This ensures that oTToDev scaffolds the project according to your preferences. + Mention the frameworks or libraries you want to use (e.g., Astro, Tailwind, ShadCN) in your initial prompt. This ensures that Bolt.diy scaffolds the project according to your preferences. - **Use the enhance prompt icon**: Before sending your prompt, click the *enhance* icon to let the AI refine your prompt. You can edit the suggested improvements before submitting. - **Scaffold the basics first, then add features**: - Ensure the foundational structure of your application is in place before introducing advanced functionality. This helps oTToDev establish a solid base to build on. + Ensure the foundational structure of your application is in place before introducing advanced functionality. This helps Bolt.diy establish a solid base to build on. - **Batch simple instructions**: Combine simple tasks into a single prompt to save time and reduce API credit consumption. For example: @@ -17,19 +17,14 @@ --- -## How do I contribute to oTToDev? +## How do I contribute to Bolt.diy? Check out our [Contribution Guide](CONTRIBUTING.md) for more details on how to get involved! --- -## Do you plan on merging oTToDev back into the official Bolt.new repo? -Stay tuned! We’ll share updates on this early next month. - ---- - -## What are the future plans for oTToDev? +## What are the future plans for Bolt.diy? Visit our [Roadmap](https://roadmap.sh/r/ottodev-roadmap-2ovzo) for the latest updates. New features and improvements are on the way! @@ -38,13 +33,13 @@ New features and improvements are on the way! ## Why are there so many open issues/pull requests? -oTToDev began as a small showcase project on @ColeMedin's YouTube channel to explore editing open-source projects with local LLMs. However, it quickly grew into a massive community effort! +Bolt.diy began as a small showcase project on @ColeMedin's YouTube channel to explore editing open-source projects with local LLMs. However, it quickly grew into a massive community effort! We’re forming a team of maintainers to manage demand and streamline issue resolution. The maintainers are rockstars, and we’re also exploring partnerships to help the project thrive. --- -## How do local LLMs compare to larger models like Claude 3.5 Sonnet for oTToDev/Bolt.new? +## How do local LLMs compare to larger models like Claude 3.5 Sonnet for Bolt.diy? While local LLMs are improving rapidly, larger models like GPT-4o, Claude 3.5 Sonnet, and DeepSeek Coder V2 236b still offer the best results for complex applications. Our ongoing focus is to improve prompts, agents, and the platform to better support smaller local LLMs. diff --git a/docs/docs/index.md b/docs/docs/index.md index d9c953e..8a4d341 100644 --- a/docs/docs/index.md +++ b/docs/docs/index.md @@ -1,28 +1,28 @@ -# Welcome to OTTO Dev -This fork of Bolt.new (oTToDev) allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models. +# Welcome to Bolt DIY +Bolt.diy allows you to choose the LLM that you use for each prompt! 
Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.

-Join the community for oTToDev!
+Join the community!

 https://thinktank.ottomator.ai

-## Whats Bolt.new
+## What's Bolt.diy

-Bolt.new is an AI-powered web development agent that allows you to prompt, run, edit, and deploy full-stack applications directly from your browser—no local setup required. If you're here to build your own AI-powered web dev agent using the Bolt open source codebase, [click here to get started!](./CONTRIBUTING.md)
+Bolt.diy is an AI-powered web development agent that allows you to prompt, run, edit, and deploy full-stack applications directly from your browser—no local setup required. If you're here to build your own AI-powered web dev agent using the Bolt open source codebase, [click here to get started!](./CONTRIBUTING.md)

-## What Makes Bolt.new Different
+## What Makes Bolt.diy Different

-Claude, v0, etc are incredible- but you can't install packages, run backends, or edit code. That’s where Bolt.new stands out:
+Claude, v0, etc. are incredible, but you can't install packages, run backends, or edit code. That’s where Bolt.diy stands out:

-- **Full-Stack in the Browser**: Bolt.new integrates cutting-edge AI models with an in-browser development environment powered by **StackBlitz’s WebContainers**. This allows you to:
+- **Full-Stack in the Browser**: Bolt.diy integrates cutting-edge AI models with an in-browser development environment powered by **StackBlitz’s WebContainers**. This allows you to:
   - Install and run npm tools and libraries (like Vite, Next.js, and more)
   - Run Node.js servers
   - Interact with third-party APIs
   - Deploy to production from chat
   - Share your work via a URL

-- **AI with Environment Control**: Unlike traditional dev environments where the AI can only assist in code generation, Bolt.new gives AI models **complete control** over the entire environment including the filesystem, node server, package manager, terminal, and browser console. This empowers AI agents to handle the whole app lifecycle—from creation to deployment.
+- **AI with Environment Control**: Unlike traditional dev environments where the AI can only assist in code generation, Bolt.diy gives AI models **complete control** over the entire environment including the filesystem, node server, package manager, terminal, and browser console. This empowers AI agents to handle the whole app lifecycle—from creation to deployment.

-Whether you’re an experienced developer, a PM, or a designer, Bolt.new allows you to easily build production-grade full-stack applications.
+Whether you’re an experienced developer, a PM, or a designer, Bolt.diy allows you to easily build production-grade full-stack applications.

 For developers interested in building their own AI-powered development tools with WebContainers, check out the open-source Bolt codebase in this repo!

@@ -47,10 +47,10 @@

 If you see usr/local/bin in the output then you're good to go.

 3. Clone the repository (if you haven't already) by opening a Terminal window (or CMD with admin permissions) and then typing in this:

 ```
-git clone https://github.com/coleam00/bolt.new-any-llm.git
+git clone https://github.com/stackblitz-labs/bolt.diy.git
 ```

-3. Rename .env.example to .env.local and add your LLM API keys.
You will find this file on a Mac at "[your name]/bold.new-any-llm/.env.example". For Windows and Linux the path will be similar. +3. Rename .env.example to .env.local and add your LLM API keys. You will find this file on a Mac at "[your name]/bolt.diy/.env.example". For Windows and Linux the path will be similar. ![image](https://github.com/user-attachments/assets/7e6a532c-2268-401f-8310-e8d20c731328) @@ -150,7 +150,7 @@ pnpm run dev ## Adding New LLMs: -To make new LLMs available to use in this version of Bolt.new, head on over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object that has the model ID for the name (get this from the provider's API documentation), a label for the frontend model dropdown, and the provider. +To make new LLMs available to use in this version of Bolt.diy, head on over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object that has the model ID for the name (get this from the provider's API documentation), a label for the frontend model dropdown, and the provider. By default, Anthropic, OpenAI, Groq, and Ollama are implemented as providers, but the YouTube video for this repo covers how to extend this to work with more providers if you wish! @@ -179,7 +179,7 @@ This will start the Remix Vite development server. You will need Google Chrome C ## Tips and Tricks -Here are some tips to get the most out of Bolt.new: +Here are some tips to get the most out of Bolt.diy: - **Be specific about your stack**: If you want to use specific frameworks or libraries (like Astro, Tailwind, ShadCN, or any other popular JavaScript framework), mention them in your initial prompt to ensure Bolt scaffolds the project accordingly. diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml index 045d77c..0314875 100644 --- a/docs/mkdocs.yml +++ b/docs/mkdocs.yml @@ -1,4 +1,4 @@ -site_name: Bolt.Local Docs +site_name: Bolt.diy Docs site_dir: ../site theme: name: material @@ -31,19 +31,19 @@ theme: repo: fontawesome/brands/github # logo: assets/logo.png # favicon: assets/logo.png -repo_name: Bolt.Local -repo_url: https://github.com/coleam00/bolt.new-any-llm +repo_name: Bolt.diy +repo_url: https://github.com/stackblitz-labs/bolt.diy edit_uri: "" extra: generator: false social: - icon: fontawesome/brands/github - link: https://github.com/coleam00/bolt.new-any-llm - name: Bolt.Local + link: https://github.com/stackblitz-labs/bolt.diy + name: Bolt.diy - icon: fontawesome/brands/discourse link: https://thinktank.ottomator.ai/ - name: Bolt.Local Discourse + name: Bolt.diy Discourse markdown_extensions:
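The "Adding New LLMs" section above describes the shape of `MODEL_LIST` entries in `app/utils/constants.ts`. As a minimal sketch, a new static model entry follows the `ModelInfo` interface from `app/utils/types.ts`; the model ID and label below are made-up placeholders, not a real model:

```ts
import type { ModelInfo } from '~/utils/types';

// Hypothetical entry for a provider's staticModels array in app/utils/constants.ts.
// `name` is the model ID from the provider's API docs, `label` is what the
// frontend dropdown shows, and `provider` must match an entry in PROVIDER_LIST.
export const exampleModel: ModelInfo = {
  name: 'my-new-model-v1', // placeholder ID
  label: 'My New Model v1',
  provider: 'OpenAI',
  maxTokenAllowed: 8192,
};
```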