refactor(llm): update OpenAI LLM implementation

Refactor the OpenAI LLM implementation in the `openai-llm.ts` file.
- Update the model selection logic to support both 'gpt-4o' and 'o1-mini' models.
- Add conditional logic to handle different models and their respective prompts and options.
bjc 2024-10-07 17:55:11 -07:00
parent aa50183dd8
commit 6f8158a8d1
2 changed files with 36 additions and 29 deletions
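The model-selection rule this commit introduces can be sketched in isolation. Note the helper names `resolveModelName` and `needsInlineSystemPrompt` are hypothetical (the commit inlines this logic directly in the stream call); the o1 caveat mirrors the original workaround of sending the system prompt as a user message.

```typescript
// Hypothetical helpers illustrating the commit's selection logic:
// OPEN_AI_MODEL picks the model, and only 'o1-mini' needs the system
// prompt folded into an ordinary user message.
type ModelName = 'gpt-4o' | 'o1-mini';

function resolveModelName(raw: string | undefined): ModelName {
  // Fall back to 'gpt-4o' when the env var is unset or unrecognized,
  // rather than blindly asserting the type as the commit does.
  return raw === 'o1-mini' ? 'o1-mini' : 'gpt-4o';
}

function needsInlineSystemPrompt(name: ModelName): boolean {
  // o1-family models do not accept the 'system' role, so the system
  // prompt must be delivered as a user message instead.
  return name === 'o1-mini';
}

console.log(resolveModelName(process.env.OPEN_AI_MODEL));
console.log(needsInlineSystemPrompt('o1-mini'));
```

The defensive fallback avoids the unchecked `as model_name_t` cast in the committed code, which would silently pass any string through to the provider.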


@@ -28,23 +28,30 @@ export class OpenAILLM implements LLM {
     }
 
     const openai = createOpenAI({ apiKey: this.apiKey, compatibility: 'strict' });
-    const model = openai('o1-mini');
 
-    const o1sysmessage: Message = {
-      role: 'user',
-      content: this.getPrompts().getSystemPrompt()
-    };
-
-    // this is just some jank to get o1 working, proof of concept.
-    // for 4o, update the model above, remove the o1sysmessage, and set the system prompt and maxTokens
-
-    return _streamText({
-      model: model as any, // Use type assertion to bypass strict type checking
-      // system: this.getPrompts().getSystemPrompt(),
-      messages: [o1sysmessage, ...convertToCoreMessages(messages)],
-      // maxTokens: MAX_TOKENS,
-      ...options,
-    });
+    type model_name_t = 'gpt-4o' | 'o1-mini';
+    const model_name: model_name_t = process.env.OPEN_AI_MODEL as model_name_t;
+    const model = openai(model_name);
+
+    if (model_name === 'o1-mini') {
+      const o1sysmessage: Message = {
+        role: 'user',
+        content: this.getPrompts().getSystemPrompt()
+      };
+      return _streamText({
+        model: model as any, // Use type assertion to bypass strict type checking
+        messages: [o1sysmessage, ...convertToCoreMessages(messages)],
+        ...options,
+      });
+    } else {
+      return _streamText({
+        model: model as any, // Use type assertion to bypass strict type checking
+        system: this.getPrompts().getSystemPrompt(),
+        messages: convertToCoreMessages(messages),
+        maxTokens: MAX_TOKENS,
+        ...options,
+      });
+    }
   }
 
   getPrompts(): Prompts {


@@ -36,7 +36,7 @@ export class OpenAIPrompts implements Prompts {
 </system_constraints>
 
 <code_formatting_info>
-  Use 2 spaces for code indentation
+  Use 2 spaces for code indentation.
 </code_formatting_info>
 
 <message_formatting_info>
@@ -46,8 +46,8 @@ export class OpenAIPrompts implements Prompts {
 <diff_spec>
   For user-modified files, a \`<${MODIFICATIONS_TAG_NAME}>\` section will appear at the start of the user message, containing either \`<diff>\` or \`<file>\` elements for each modified file:
 
-    - \`<diff path="/some/file/path.ext">\`: Contains GNU unified diff format changes
-    - \`<file path="/some/file/path.ext">\`: Contains the full new content of the file
+    - \`<diff path="/some/file/path.ext">\`: Contains GNU unified diff format changes.
+    - \`<file path="/some/file/path.ext">\`: Contains the full new content of the file.
 
   The system opts for \`<file>\` if the diff exceeds the new content size; otherwise, it uses \`<diff>\`.
@@ -55,13 +55,13 @@ export class OpenAIPrompts implements Prompts {
     - The header with original and modified file names is omitted!
     - Changed sections start with @@ -X,Y +A,B @@ where:
-      - X: Original file starting line
-      - Y: Original file line count
-      - A: Modified file starting line
-      - B: Modified file line count
-    - (-) lines: Removed from the original
-    - (+) lines: Added in the modified version
-    - Unmarked lines: Unchanged context
+      - X: Original file starting line.
+      - Y: Original file line count.
+      - A: Modified file starting line.
+      - B: Modified file line count.
+    - (-) lines: Removed from the original.
+    - (+) lines: Added in the modified version.
+    - Unmarked lines: Unchanged context.
 
   Example:
@@ -84,15 +84,15 @@ export class OpenAIPrompts implements Prompts {
     <file path="/home/project/package.json">
       // full file content here
     </file>
-  </</${MODIFICATIONS_TAG_NAME}>
+  </${MODIFICATIONS_TAG_NAME}>
 </diff_spec>
 
 <artifact_info>
   Bolt generates a SINGLE, comprehensive artifact for each project. This artifact includes all necessary steps and components, such as:
 
-  - Shell commands to execute, including dependencies to install via a package manager (NPM)
-  - Files to create along with their contents
-  - Folders to create if required
+  - Shell commands to execute, including dependencies to install via a package manager (NPM).
+  - Files to create along with their contents.
+  - Folders to create if required.
 
 <artifact_instructions>
 1. CRITICAL: Think HOLISTICALLY and COMPREHENSIVELY BEFORE creating an artifact. This means: