* fix: enhance Bayer MGA provider reliability and Docker integration
* Merge latest dev branch changes into Bayer MGA feature branch
* Improve Bayer MGA provider model filtering and error handling
* Add robust model validation with fallback mechanisms
* Enhance logging and debugging capabilities for model selection
* Add Bayer MGA environment variables to Docker configurations
* Update worker configuration with Bayer MGA API keys
* Add comprehensive Bayer MGA setup to .env.example
* Create standalone test script for Bayer MGA provider debugging
* Fix intermittent model selection issues for models other than Claude 3.7 Sonnet
* Ensure provider switching works without breaking other providers
* Bayer MGA provider multi-model support and test coverage
* Add Claude.md
- Enhanced BayerMGAProvider getModelInstance method with model validation
- Added fallback mechanism for when the requested model is not available (see the sketch after this list)
- Improved dynamic model filtering with better validation
- Added UI model selection handling for unavailable models
- Added README.md to ECR deploy workflow paths-ignore
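A minimal sketch of the fallback idea; the helper name `resolveModelName` is hypothetical (the real change lives inside `getModelInstance`):

```typescript
// Hypothetical fallback: prefer the requested model, otherwise fall back to
// the first available one so a stale UI selection never dead-ends inference.
function resolveModelName(requested: string, available: string[]): string {
  if (available.includes(requested)) {
    return requested;
  }
  if (available.length === 0) {
    throw new Error('No models available from provider');
  }
  console.warn(`Model "${requested}" unavailable; falling back to "${available[0]}"`);
  return available[0];
}
```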
- Added claude-3-7-sonnet and gpt-4o-mini as static models (sketched below)
- Fixes the 500 'Model not found' error during inference
- Now properly returns 401 for an invalid API key (expected behavior)
- Models now appear in the /api/models response
- Inference pipeline working correctly
This resolves the core 500 error. The next step is configuring a valid API key.
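A sketch of the static entries, assuming the repo's `ModelInfo` shape (the field names are best-effort guesses at that type):

```typescript
interface ModelInfo {
  name: string;
  label: string;
  provider: string;
  maxTokenAllowed: number;
}

// Static entries resolve even when the dynamic /models fetch fails,
// which is what closes the 500 "Model not found" path.
const staticModels: ModelInfo[] = [
  { name: 'claude-3-7-sonnet', label: 'Claude 3.7 Sonnet', provider: 'BayerMGA', maxTokenAllowed: 8000 },
  { name: 'gpt-4o-mini', label: 'GPT-4o Mini', provider: 'BayerMGA', maxTokenAllowed: 8000 },
];
```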
- Remove the overly complex validateApiConfig method
- Follow the exact same pattern as the working OpenAI provider
- Simplify the getModelInstance method to be more robust (sketched below)
- Fix the 500 error in inference by removing strict validation
- Maintain dynamic model fetching functionality
This should resolve the inference execution issues while keeping model listing working.
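A sketch of the simplified shape, assuming the `createOpenAI` helper from `@ai-sdk/openai` that the OpenAI provider uses:

```typescript
import { createOpenAI } from '@ai-sdk/openai';
import type { LanguageModelV1 } from 'ai';

// Mirror the working OpenAI provider: no upfront config validation.
// Building the client unconditionally lets a bad key surface as a 401
// from the API instead of a synthetic 500 from our own checks.
function getModelInstance(model: string, apiKey: string, baseURL: string): LanguageModelV1 {
  const client = createOpenAI({ apiKey, baseURL });
  return client(model);
}
```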
Key improvements:
- Switch from getOpenAILikeModel to createOpenAI for better control (see the sketch after this list)
- Comprehensive error handling and validation
- Better base URL normalization and configuration
- Enhanced logging for debugging inference issues
- Proper header configuration for API requests
- Detailed error messages for troubleshooting
This should resolve the inference execution issues while maintaining model listing functionality.
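A sketch of the `createOpenAI` switch with explicit base-URL normalization and request headers; the header set shown here is an assumption:

```typescript
import { createOpenAI } from '@ai-sdk/openai';

// Unlike getOpenAILikeModel, createOpenAI exposes baseURL and headers
// directly, so the request configuration can be controlled in one place.
function createBayerMGAClient(apiKey: string, rawBaseUrl: string) {
  const baseURL = rawBaseUrl.replace(/\/+$/, ''); // strip trailing slashes
  return createOpenAI({
    apiKey,
    baseURL,
    headers: { 'Content-Type': 'application/json' }, // assumed header set
  });
}
```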
- Added comprehensive logging for model fetching and instance creation
- Improved error handling with detailed error messages
- Added input validation for API responses (sketched below)
- Better debugging for inference endpoint issues
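A hypothetical guard for the /models payload; the response shape is an assumption based on OpenAI-style APIs:

```typescript
interface ModelsResponse {
  data: Array<{ id: string; available?: boolean }>;
}

// Fail fast with a descriptive message if the gateway returns an
// unexpected payload, instead of erroring later during inference.
function assertModelsResponse(body: unknown): asserts body is ModelsResponse {
  if (typeof body !== 'object' || body === null || !Array.isArray((body as { data?: unknown }).data)) {
    throw new Error(`Unexpected /models response: ${JSON.stringify(body).slice(0, 200)}`);
  }
}
```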
- Implemented BayerMGAProvider extending BaseProvider
- Configured base URL: https://chat.int.bayer.com/api/v2
- Added dynamic model fetching from the /models endpoint with filters (see the sketch after this list)
- Integrated provider into UI (CloudProvidersTab, ServiceStatusTab)
- Added provider registration in LLM registry
- Supports user-configurable API token input
- Filters models by availability status
- Maps API response to ModelInfo format with proper token limits
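A sketch of the dynamic fetch against the base URL above; the response field names (`available`, `max_tokens`) and the default token limit are assumptions:

```typescript
type ModelInfo = { name: string; label: string; provider: string; maxTokenAllowed: number };

// Fetch /models from the gateway, keep only available models, and map
// each entry to the ModelInfo shape the UI expects.
async function getDynamicModels(apiKey: string): Promise<ModelInfo[]> {
  const res = await fetch('https://chat.int.bayer.com/api/v2/models', {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) {
    throw new Error(`Model fetch failed with status ${res.status}`);
  }
  const { data } = (await res.json()) as {
    data: Array<{ id: string; available?: boolean; max_tokens?: number }>;
  };
  return data
    .filter((m) => m.available !== false) // availability filter
    .map((m) => ({
      name: m.id,
      label: m.id,
      provider: 'BayerMGA',
      maxTokenAllowed: m.max_tokens ?? 8000, // assumed default token limit
    }));
}
```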
- Added logging for dynamic max tokens based on model details
- Increased the max token limit for the Claude model from 8,000 to 128,000
- Included the beta header for the Anthropic API call (sketched below)
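A sketch of the two changes; the `anthropic-beta` value shown is the documented flag for 128k output on Claude 3.7 Sonnet, but treat it as an assumption here:

```typescript
// Raised output ceiling for the Claude model (was 8000).
const claudeMaxTokens = 128000;

// Extended-output beta header sent with the Anthropic API call.
const anthropicHeaders = {
  'anthropic-beta': 'output-128k-2025-02-19', // assumed beta flag value
};
```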
Add 'grok-3-beta' to the xAI provider and 'gemini-2.5-flash-preview-04-17' to the Google provider. Also ensure files are saved when content is updated in WorkbenchStore, and update the streaming-indicator styling in chat messages.
* Fix: error building my application #1414
* fix for vite
* Update vite.config.ts
* Update root.tsx
* fix root.tsx and the debug tab
* LM Studio fix and a fix for the API key
* Update api.enhancer for prompt enhancement
* bugfixes
* Revert api.enhancer.ts back to original code
* Update api.enhancer.ts
* Update api.git-proxy.$.ts
* Update api.git-proxy.$.ts
* Update api.enhancer.ts
Added the new gemini-2.0-flash-thinking-exp-01-21 model to the GoogleProvider's static model configuration. This model supports a significantly increased maxTokenAllowed limit of 65,536 tokens, enabling it to handle larger context windows compared to existing Gemini models (previously capped at 8k tokens). The model is labeled as "Gemini 2.0 Flash-thinking-exp-01-21" for clear identification in the UI/dropdowns.
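A sketch of the static entry described above, using the same assumed `ModelInfo` shape as the earlier sketches:

```typescript
// Entry for the new Gemini model; maxTokenAllowed is the raised limit
// from the description above.
const gemini20FlashThinking = {
  name: 'gemini-2.0-flash-thinking-exp-01-21',
  label: 'Gemini 2.0 Flash-thinking-exp-01-21',
  provider: 'Google',
  maxTokenAllowed: 65536, // up from the 8k cap on earlier Gemini entries
};
```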
This PR introduces a new model, deepseek-r1-distill-llama-70b, to the staticModels array and ensures compatibility with the Groq API. The changes include:
- Adding the deepseek-r1-distill-llama-70b model to the staticModels array with its relevant metadata.
- Updating the Groq API call to use the new model for chat completions.
These changes enable the application to support the deepseek-r1-distill-llama-70b model, expanding the range of available models for users.
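A sketch of the Groq-side wiring. Groq exposes an OpenAI-compatible endpoint at the base URL shown, so the existing client can serve the new model; the model entry fields and the demo call are assumptions:

```typescript
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

// Static entry for the distilled model (assumed ModelInfo shape).
const deepseekR1 = {
  name: 'deepseek-r1-distill-llama-70b',
  label: 'DeepSeek R1 Distill Llama 70B',
  provider: 'Groq',
  maxTokenAllowed: 8000,
};

// Groq's OpenAI-compatible endpoint lets the existing OpenAI client
// handle chat completions for the new model unchanged.
const groq = createOpenAI({
  baseURL: 'https://api.groq.com/openai/v1',
  apiKey: process.env.GROQ_API_KEY,
});

async function demo() {
  const { text } = await generateText({
    model: groq(deepseekR1.name),
    prompt: 'Say hello.',
  });
  console.log(text);
}
```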
* Use backend API route to fetch dynamic models
* Override ApiKeys if provided in frontend
* Remove obsolete artifact
* Transport api keys from client to server in header
* Cache static provider information
* Restore reading provider settings from cookie
* Reload only a single provider on api key change
* Transport apiKeys and providerSettings via cookies.
While doing this, introduce a simple helper function for cookies (sketched below)
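A minimal sketch of such a cookie helper; the name `parseCookies` is an assumption:

```typescript
// Parse the raw Cookie header into a key/value map so apiKeys and
// providerSettings can be read on the server side of the request.
function parseCookies(cookieHeader: string | null): Record<string, string> {
  const cookies: Record<string, string> = {};
  if (!cookieHeader) {
    return cookies;
  }
  for (const pair of cookieHeader.split(';')) {
    const [name, ...rest] = pair.trim().split('=');
    if (name) {
      cookies[name] = decodeURIComponent(rest.join('='));
    }
  }
  return cookies;
}
```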
* feat: Integrate AWS Bedrock with Claude 3.5 Sonnet, Claude 3 Sonnet, and Claude 3.5 Haiku
* update Dockerfile for AWS Bedrock configuration
* feat: add new Bedrock model 'Mistral' and update Haiku to version 3
* feat: add new bedrock model Nova Lite and Nova Pro
* Update README documentation to reflect the latest changes
* Add the icon for aws bedrock
* add support for serialized AWS Bedrock configuration in the API key (sketched below)
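A sketch of how a serialized Bedrock configuration might be packed into the single API-key field; the field names are assumptions:

```typescript
interface BedrockConfig {
  region: string;
  accessKeyId: string;
  secretAccessKey: string;
  sessionToken?: string;
}

// The provider UI has one API-key field, so AWS credentials are packed
// into it as JSON and parsed server-side before calling Bedrock.
function parseBedrockConfig(apiKey: string): BedrockConfig {
  const config = JSON.parse(apiKey) as BedrockConfig;
  if (!config.region || !config.accessKeyId || !config.secretAccessKey) {
    throw new Error('Bedrock config requires region, accessKeyId and secretAccessKey');
  }
  return config;
}
```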
* fix: updated logger and model caching
* fix usage token stream issue
* minor changes
* updated starter template change to fix the app title
* starter template bugfix
* fixed hydration errors and raw logs
* removed raw log
* made auto select template false by default
* cleaner logs, and updated logic to call dynamicModels only when a model is not found in static models (see the sketch after this list)
* updated starter template instructions
* improved browser console logging for Firefox
* fix provider icons
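A sketch of the static-first lookup mentioned above; the names `staticModels`, `getDynamicModels`, and `findModel` are assumptions:

```typescript
type ModelInfo = { name: string; label: string; provider: string; maxTokenAllowed: number };

interface ProviderLike {
  staticModels: ModelInfo[];
  getDynamicModels(): Promise<ModelInfo[]>;
}

// Check the static list first; only hit the network for dynamic models
// when the requested name is not already known.
async function findModel(name: string, provider: ProviderLike): Promise<ModelInfo | undefined> {
  const staticHit = provider.staticModels.find((m) => m.name === name);
  if (staticHit) {
    return staticHit;
  }
  const dynamicModels = await provider.getDynamicModels();
  return dynamicModels.find((m) => m.name === name);
}
```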