31 changes: 31 additions & 0 deletions README.md
@@ -484,6 +484,37 @@ docker run -p 3000:3000 my-mock-openai-api
- `PORT` - Server port (default: 3000)
- `HOST` - Server host (default: 0.0.0.0)
- `VERBOSE` - Enable verbose logging (default: false)
- `MODEL_MAPPING_CONFIG` - Path to model mapping configuration file (default: model-mapping.json)

### Model Mapping Configuration

You can customize the model names displayed to users by creating a `model-mapping.json` file. This allows you to map internal model names to external-facing names for a better user experience.

**Example model-mapping.json:**
```json
{
"mock-gpt-thinking": "gpt-4o-mini",
"gpt-4-mock": "gpt-4-turbo",
"mock-gpt-markdown": "gpt-4o",
"gpt-4o-image": "dall-e-3",
"mock-claude-markdown": "claude-3-opus-20240229",
"gemini-1.5-pro": "gemini-2.0-pro-exp-2025-01-15",
"gemini-1.5-flash": "gemini-2.0-flash-exp-2025-01-15",
"gemini-pro": "gemini-pro-1.0",
"gemini-pro-vision": "gemini-pro-vision-1.0"
}
```

**CLI Usage:**
```bash
# Use custom model mapping configuration
npx mock-openai-api -c custom-mapping.json

# Or set via environment variable
MODEL_MAPPING_CONFIG=custom-mapping.json npx mock-openai-api
```

The server will automatically load the configuration and display mapped model names in the console output and API responses.
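
A quick way to verify the mapping end to end (an illustrative sketch; it assumes the server is running on the default port and exposes the standard `GET /v1/models` list endpoint):

```typescript
// List the models and print their ids; with the example mapping above the
// output should include "gpt-4o-mini", "gpt-4-turbo", "gpt-4o", and so on.
const res = await fetch("http://localhost:3000/v1/models");
const body = (await res.json()) as { data: Array<{ id: string }> };
console.log(body.data.map((m) => m.id));
```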
Comment on lines +489 to +517
⚠️ Potential issue

Update the CLI options table to include --config.

The new option exists in src/cli.ts but is missing from the README table; users will miss it.

-| Option             | Short | Description                       | Default   |
-| ------------------ | ----- | --------------------------------- | --------- |
-| `--port <number>`  | `-p`  | Server port                       | `3000`    |
-| `--host <address>` | `-H`  | Server host address               | `0.0.0.0` |
-| `--verbose`        | `-v`  | Enable request logging to console | `false`   |
-| `--version`        |       | Show version number               |           |
-| `--help`           |       | Show help information             |           |
+| Option               | Short | Description                             | Default               |
+| -------------------- | ----- | --------------------------------------- | --------------------- |
+| `--port <number>`    | `-p`  | Server port                             | `3000`                |
+| `--host <address>`   | `-H`  | Server host address                     | `0.0.0.0`             |
+| `--verbose`          | `-v`  | Enable request logging to console       | `false`               |
+| `--config <path>`    | `-c`  | Path to model mapping config file       | `./model-mapping.json`|
+| `--version`          |       | Show version number                     |                       |
+| `--help`             |       | Show help information                   |                       |
🤖 Prompt for AI Agents
In README.md around lines 489 to 517, the CLI options table is missing the new
--config option introduced in src/cli.ts; add a table row describing --config
(and its short form if present), a concise description like "Path to custom
model-mapping JSON file", the default behavior, and an example usage line
showing npx mock-openai-api -c custom-mapping.json (and the equivalent
environment variable usage). Ensure the table formatting matches the existing
CLI options table style and that the example references MODEL_MAPPING_CONFIG as
shown elsewhere.


## 🧪 Testing

31 changes: 31 additions & 0 deletions README.zh.md
@@ -326,6 +326,37 @@ docker run -p 3000:3000 mock-openai-api

- `PORT` - Server port (default: 3000)
- `HOST` - Server host (default: 0.0.0.0)
- `MODEL_MAPPING_CONFIG` - Path to the model mapping configuration file (default: model-mapping.json)

### Model Mapping Configuration

You can customize the model names shown to users by creating a `model-mapping.json` file. This lets you map internal model names to external-facing names for a better user experience.

**Example model-mapping.json:**
```json
{
"mock-gpt-thinking": "gpt-4o-mini",
"gpt-4-mock": "gpt-4-turbo",
"mock-gpt-markdown": "gpt-4o",
"gpt-4o-image": "dall-e-3",
"mock-claude-markdown": "claude-3-opus-20240229",
"gemini-1.5-pro": "gemini-2.0-pro-exp-2025-01-15",
"gemini-1.5-flash": "gemini-2.0-flash-exp-2025-01-15",
"gemini-pro": "gemini-pro-1.0",
"gemini-pro-vision": "gemini-pro-vision-1.0"
}
```

**CLI usage:**
```bash
# Use a custom model mapping configuration
npx mock-openai-api -c custom-mapping.json

# Or set it via an environment variable
MODEL_MAPPING_CONFIG=custom-mapping.json npx mock-openai-api
```

The server automatically loads the configuration and displays the mapped model names in the console output and API responses.

## 🧪 Testing

11 changes: 11 additions & 0 deletions custom-mapping.json
@@ -0,0 +1,11 @@
{
"mock-gpt-thinking": "custom-gpt-mini",
"gpt-4-mock": "custom-gpt-pro",
"mock-gpt-markdown": "custom-gpt-markdown",
"gpt-4o-image": "custom-dalle",
"mock-claude-markdown": "custom-claude-pro",
"gemini-1.5-pro": "custom-gemini-pro",
"gemini-1.5-flash": "custom-gemini-flash",
"gemini-pro": "custom-gemini-basic",
"gemini-pro-vision": "custom-gemini-vision"
}
12 changes: 12 additions & 0 deletions model-mapping.json
@@ -0,0 +1,12 @@
{
"mock-gpt-thinking": "gpt-4o-mini",
"mock-gpt-thinking-tag": "gpt-4o-mini",
"gpt-4-mock": "gpt-4-turbo",
"mock-gpt-markdown": "gpt-4o",
"gpt-4o-image": "dall-e-3",
"mock-claude-markdown": "claude-3-opus-20240229",
"gemini-1.5-pro": "gemini-2.0-pro-exp-2025-01-15",
"gemini-1.5-flash": "gemini-2.0-flash-exp-2025-01-15",
"gemini-pro": "gemini-pro-1.0",
"gemini-pro-vision": "gemini-pro-vision-1.0"
}
27 changes: 17 additions & 10 deletions src/cli.ts
@@ -3,6 +3,7 @@
import { Command } from 'commander';
import app from './app';
import { version } from '../package.json'
import { loadModelMapping, getMappedModelName } from './config/modelMapping';
// Extend the global object type
declare global {
var verboseLogging: boolean;
@@ -17,6 +18,7 @@ program
.option('-p, --port <number>', 'Server port', '3000')
.option('-H, --host <address>', 'Server host address', '0.0.0.0')
.option('-v, --verbose', 'Enable request logging to console', false)
.option('-c, --config <path>', 'Path to model mapping config file', './model-mapping.json')
.parse();

const options = program.opts();
@@ -27,13 +29,17 @@ const HOST = options.host || '0.0.0.0';
// Set a global variable to control log output
global.verboseLogging = options.verbose;

// Load model mapping configuration
loadModelMapping(options.config);

app.listen(PORT, HOST, () => {
console.log(`🚀 Mock OpenAI API server started successfully!`);
console.log(`📍 Server address: http://${HOST}:${PORT}`);
console.log(`⚙️ Configuration:`);
console.log(` • Port: ${PORT}`);
console.log(` • Host: ${HOST}`);
console.log(` • Verbose logging: ${options.verbose ? 'ENABLED' : 'DISABLED'}`);
console.log(` • Config file: ${options.config}`);
console.log(` • Version: ${version}`);
console.log(`📖 API Documentation:`);
console.log(` • GET /health - Health check`);
@@ -47,27 +53,28 @@ app.listen(PORT, HOST, () => {
console.log(` • POST /v1beta/models/{model}:streamGenerateContent - Gemini streaming generation`);
console.log(`\n✨ Available models:`);
console.log(` OpenAI Compatible:`);
console.log(` - mock-gpt-thinking: Model supporting thought process`);
console.log(` - gpt-4-mock: Model supporting function calls`);
console.log(` - mock-gpt-markdown: Model outputting standard Markdown`);
console.log(` - gpt-4o-image: Model specifically for image generation`);
console.log(` - ${getMappedModelName('mock-gpt-thinking')}: Model supporting thought process`);
console.log(` - ${getMappedModelName('gpt-4-mock')}: Model supporting function calls`);
console.log(` - ${getMappedModelName('mock-gpt-markdown')}: Model outputting standard Markdown`);
console.log(` - ${getMappedModelName('gpt-4o-image')}: Model specifically for image generation`);
console.log(` Anthropic Compatible:`);
console.log(` - mock-claude-markdown: Claude markdown sample model`);
console.log(` - ${getMappedModelName('mock-claude-markdown')}: Claude markdown sample model`);
console.log(` Gemini Compatible:`);
console.log(` - gemini-1.5-pro: Advanced multimodal AI model`);
console.log(` - gemini-1.5-flash: Fast and efficient model`);
console.log(` - gemini-pro: Versatile model for various tasks`);
console.log(` - gemini-pro-vision: Multimodal model for text and images`);
console.log(` - ${getMappedModelName('gemini-1.5-pro')}: Advanced multimodal AI model`);
console.log(` - ${getMappedModelName('gemini-1.5-flash')}: Fast and efficient model`);
console.log(` - ${getMappedModelName('gemini-pro')}: Versatile model for various tasks`);
console.log(` - ${getMappedModelName('gemini-pro-vision')}: Multimodal model for text and images`);
console.log(`\n🔗 Usage example:`);
console.log(` curl -X POST http://localhost:${PORT}/v1/chat/completions \\`);
console.log(` -H "Content-Type: application/json" \\`);
console.log(` -d '{`);
console.log(` "model": "gpt-4-mock",`);
console.log(` "model": "${getMappedModelName('gpt-4-mock')}",`);
console.log(` "messages": [{"role": "user", "content": "Hello"}]`);
console.log(` }'`);
console.log(`\n💡 CLI Options:`);
console.log(` • Use --help to see all available options`);
console.log(` • Use -v or --verbose to enable request logging`);
console.log(` • Use -p <port> to specify custom port`);
console.log(` • Use -H <host> to specify custom host address`);
console.log(` • Use -c <path> to specify custom config file`);
});
60 changes: 60 additions & 0 deletions src/config/modelMapping.ts
@@ -0,0 +1,60 @@
import fs from 'fs';
import path from 'path';

interface ModelMappingConfig {
[originalModel: string]: string;
}

let modelMapping: ModelMappingConfig = {};
let configLoaded = false;

const CONFIG_FILE_PATH = process.env.MODEL_MAPPING_CONFIG || path.join(process.cwd(), 'model-mapping.json');

export function loadModelMapping(configPath?: string): void {
if (configLoaded) {
return;
}

const configFilePath = configPath || CONFIG_FILE_PATH;

try {
if (fs.existsSync(configFilePath)) {
const configContent = fs.readFileSync(configFilePath, 'utf-8');
const config = JSON.parse(configContent);

if (typeof config === 'object' && config !== null) {
modelMapping = config;
console.log(`✅ Loaded model mapping configuration from ${configFilePath}`);
console.log(`📋 Model mappings: ${Object.keys(modelMapping).length} mappings configured`);

if (Object.keys(modelMapping).length > 0) {
Object.entries(modelMapping).forEach(([original, mapped]) => {
console.log(` • ${original} → ${mapped}`);
});
}
}
} else {
Comment on lines +20 to +36
🛠️ Refactor suggestion

Validate mapping shape; reject/skip non-string values to avoid runtime surprises.

Currently any JSON object is accepted. Guard against arrays and non-string values; log warnings for skipped entries.

-      if (typeof config === 'object' && config !== null) {
-        modelMapping = config;
+      if (typeof config === 'object' && config !== null && !Array.isArray(config)) {
+        const next: ModelMappingConfig = {};
+        for (const [k, v] of Object.entries(config as Record<string, unknown>)) {
+          if (typeof k === 'string' && typeof v === 'string') {
+            next[k] = v;
+          } else {
+            console.warn(`⚠️  Skipping invalid mapping entry: ${String(k)} → ${String(v)}`);
+          }
+        }
+        modelMapping = next;
🤖 Prompt for AI Agents
In src/config/modelMapping.ts around lines 20 to 36, the code currently accepts
any JSON object as modelMapping which can include arrays or non-string values;
update the loader to validate the mapping shape by ensuring the parsed config is
an object whose keys map to string values only, skip or reject any entries where
the mapped value is not a string (or where a value is an array/object), and log
a warning for each skipped entry including the key and the unexpected type; set
modelMapping to the filtered object (or empty if none valid) and keep the
existing success logs but report the count of valid mappings.
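
To make the "runtime surprises" concrete, here is a minimal sketch (not code from the repo) of what slips through the current check:

```typescript
// A hypothetical bad config that the typeof === 'object' check accepts verbatim:
const badConfig: Record<string, unknown> = JSON.parse(
  '{"gpt-4-mock": {"display": "gpt-4-turbo"}}'
);
// The nested object becomes the mapped "name" and only fails visibly later,
// when it is interpolated into a log line or an API response:
const mapped = badConfig["gpt-4-mock"];
console.log(`model: ${mapped}`); // prints "model: [object Object]"
```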

console.log(`ℹ️ No model mapping configuration found at ${configFilePath}`);
}
} catch (error) {
console.error(`❌ Failed to load model mapping configuration: ${error}`);
}

configLoaded = true;
}
Comment on lines +13 to +44
⚠️ Potential issue

Don't set configLoaded=true on failure; leaving it unset allows a second attempt with another path.

If the first call points to a non-existent or invalid file, configLoaded is still set to true, blocking subsequent loads (e.g., a later call with a valid default path). Move the flag to the successful load path.

 export function loadModelMapping(configPath?: string): void {
   if (configLoaded) {
     return;
   }
 
   const configFilePath = configPath || CONFIG_FILE_PATH;
 
   try {
     if (fs.existsSync(configFilePath)) {
       const configContent = fs.readFileSync(configFilePath, 'utf-8');
       const config = JSON.parse(configContent);
       
-      if (typeof config === 'object' && config !== null) {
-        modelMapping = config;
+      if (typeof config === 'object' && config !== null) {
+        modelMapping = config;
         console.log(`✅ Loaded model mapping configuration from ${configFilePath}`);
         console.log(`📋 Model mappings: ${Object.keys(modelMapping).length} mappings configured`);
         
         if (Object.keys(modelMapping).length > 0) {
           Object.entries(modelMapping).forEach(([original, mapped]) => {
             console.log(`   • ${original} → ${mapped}`);
           });
         }
+        configLoaded = true;
       }
     } else {
       console.log(`ℹ️  No model mapping configuration found at ${configFilePath}`);
     }
   } catch (error) {
     console.error(`❌ Failed to load model mapping configuration: ${error}`);
   }
-  
-  configLoaded = true;
 }
🤖 Prompt for AI Agents
In src/config/modelMapping.ts around lines 13 to 44, configLoaded is currently
set to true unconditionally at the end of the function which prevents retrying
with another path after a failed or missing config load; move the assignment so
it only runs after a successful parse and assignment of modelMapping (i.e.,
after modelMapping = config and successful logs), and remove the unconditional
configLoaded = true from the function exit so that if the file is missing or
JSON parsing throws, subsequent calls can attempt loading again.
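
A small sketch of the call sequence this change is meant to allow (hypothetical usage; it assumes the exported helper names stay as they are and uses the import path as written in src/index.ts):

```typescript
import { loadModelMapping, hasMappings } from './config/modelMapping';

loadModelMapping('./missing.json');       // file absent: logs the info message, should not latch configLoaded
loadModelMapping('./model-mapping.json'); // a later call with a valid path can now succeed
console.log(hasMappings());               // true once the valid file has been loaded
```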


export function getMappedModelName(originalModel: string): string {
return modelMapping[originalModel] || originalModel;
}

export function getOriginalModelName(mappedModel: string): string | undefined {
return Object.keys(modelMapping).find(key => modelMapping[key] === mappedModel);
}

export function getAllMappings(): ModelMappingConfig {
return { ...modelMapping };
}

export function hasMappings(): boolean {
return Object.keys(modelMapping).length > 0;
}
24 changes: 14 additions & 10 deletions src/index.ts
@@ -1,13 +1,17 @@
#!/usr/bin/env node

import app from './app';
import { loadModelMapping, getMappedModelName } from './config/modelMapping';

const PORT = process.env.PORT || 3000;
const HOST = process.env.HOST || '0.0.0.0';

// Enable verbose logging by default in development or when VERBOSE is set
global.verboseLogging = process.env.NODE_ENV !== 'production' || process.env.VERBOSE === 'true';

// Load model mapping configuration
loadModelMapping();

app.listen(PORT, () => {
console.log(`🚀 Mock OpenAI API server started successfully!`);
console.log(`📍 Server address: http://${HOST}:${PORT}`);
@@ -24,22 +28,22 @@ app.listen(PORT, () => {
console.log(` • POST /v1beta/models/{model}:streamGenerateContent - Gemini streaming generation`);
console.log(`\n✨ Available models:`);
console.log(` OpenAI Compatible:`);
console.log(` - mock-gpt-thinking: Model supporting thought process`);
console.log(` - gpt-4-mock: Model supporting function calls with tool calls format`);
console.log(` - mock-gpt-markdown: Model outputting standard Markdown`);
console.log(` - gpt-4o-image: Model specifically for image generation`);
console.log(` - ${getMappedModelName('mock-gpt-thinking')}: Model supporting thought process`);
console.log(` - ${getMappedModelName('gpt-4-mock')}: Model supporting function calls with tool calls format`);
console.log(` - ${getMappedModelName('mock-gpt-markdown')}: Model outputting standard Markdown`);
console.log(` - ${getMappedModelName('gpt-4o-image')}: Model specifically for image generation`);
console.log(` Anthropic Compatible:`);
console.log(` - mock-claude-markdown: Claude markdown sample model`);
console.log(` - ${getMappedModelName('mock-claude-markdown')}: Claude markdown sample model`);
console.log(` Gemini Compatible:`);
console.log(` - gemini-1.5-pro: Advanced multimodal AI model`);
console.log(` - gemini-1.5-flash: Fast and efficient model`);
console.log(` - gemini-pro: Versatile model for various tasks`);
console.log(` - gemini-pro-vision: Multimodal model for text and images`);
console.log(` - ${getMappedModelName('gemini-1.5-pro')}: Advanced multimodal AI model`);
console.log(` - ${getMappedModelName('gemini-1.5-flash')}: Fast and efficient model`);
console.log(` - ${getMappedModelName('gemini-pro')}: Versatile model for various tasks`);
console.log(` - ${getMappedModelName('gemini-pro-vision')}: Multimodal model for text and images`);
console.log(`\n🔗 Usage example:`);
console.log(` curl -X POST http://localhost:${PORT}/v1/chat/completions \\`);
console.log(` -H "Content-Type: application/json" \\`);
console.log(` -d '{`);
console.log(` "model": "gpt-4-mock",`);
console.log(` "model": "${getMappedModelName('gpt-4-mock')}",`);
console.log(` "messages": [{"role": "user", "content": "Hello"}]`);
console.log(` }'`);
console.log(`\n💡 Use CLI for more options: npm run build && npx mock-openai-api --help`);
3 changes: 2 additions & 1 deletion src/services/openaiService.ts
@@ -18,13 +18,14 @@ import {
randomChoice,
formatErrorResponse,
} from "../utils/helpers";
import { getMappedModelName } from "../config/modelMapping";
import { ImgData } from "../data/base64Img";
/**
* Get model list
*/
export function getModels(): ModelsResponse {
const models: Model[] = mockModels.map((mockModel) => ({
id: mockModel.id,
id: getMappedModelName(mockModel.id),
object: "model",
created: getCurrentTimestamp(),
owned_by: "mock-openai",
4 changes: 3 additions & 1 deletion src/utils/anthropicHelpers.ts
@@ -1,6 +1,7 @@
import { MockModel } from "../types/index";
import { anthropicMockModels } from "../data/anthropicMockData";
import { ErrorResponse, StreamingEvent } from "../types/anthropic";
import { getMappedModelName } from "../config/modelMapping";
⚠️ Potential issue

Anthropic model lookup also uses the wrong mapping direction

Same issue as Gemini: external IDs won't resolve. Check for a direct match first, then map external IDs back to internal ones via reverse lookup, keeping the forward mapping only as a last resort.

-import { getMappedModelName } from "../config/modelMapping";
+import { getMappedModelName, getOriginalModelName } from "../config/modelMapping";
@@
 export function findModelById(modelId: string): MockModel | undefined {
-  const mappedModelId = getMappedModelName(modelId);
-  return anthropicMockModels.find(model => model.id === mappedModelId);
+  // 1) direct
+  const direct = anthropicMockModels.find(model => model.id === modelId);
+  if (direct) return direct;
+  // 2) external -> internal
+  const originalId = getOriginalModelName(modelId);
+  if (originalId) return anthropicMockModels.find(model => model.id === originalId);
+  // 3) internal -> external (unlikely to hit)
+  const mappedId = getMappedModelName(modelId);
+  return anthropicMockModels.find(model => model.id === mappedId);
 }

Also applies to: 31-33

🤖 Prompt for AI Agents
In src/utils/anthropicHelpers.ts around lines 4 and 31-33, the model lookup uses
the wrong mapping direction causing external IDs not to resolve; change the
lookup order to attempt a reverse mapping lookup first (map external ID ->
internal name), then fall back to the existing direct lookup, and finally the
forward mapping as last resort, returning the resolved internal model name;
update any helper functions or calls in those lines to perform reverse lookup
first and keep the current direct/forward logic as fallbacks.


/**
* Get current timestamp
@@ -27,7 +28,8 @@ export function calculateTokens(text: string): number {
* Find model by ID
*/
export function findModelById(modelId: string): MockModel | undefined {
return anthropicMockModels.find(model => model.id === modelId);
const mappedModelId = getMappedModelName(modelId);
return anthropicMockModels.find(model => model.id === mappedModelId);
}

/**
4 changes: 3 additions & 1 deletion src/utils/geminiHelpers.ts
@@ -1,4 +1,5 @@
import { geminiMockModels } from '../data/geminiMockData';
import { getMappedModelName } from '../config/modelMapping';
⚠️ Potential issue

Wrong mapping direction causes lookups to fail for external names

getMappedModelName maps internal->external. For requests that send external (mapped) IDs, this returns the external name again, so the lookup misses. After a direct match, resolve external IDs back with getOriginalModelName, keeping the forward mapping only as a last resort.

-import { getMappedModelName } from '../config/modelMapping';
+import { getMappedModelName, getOriginalModelName } from '../config/modelMapping';
@@
 export function findGeminiModelById(modelId: string) {
   // Remove 'models/' prefix if present
   const cleanModelId = modelId.replace('models/', '');
-  const mappedModelId = getMappedModelName(cleanModelId);
-  return geminiMockModels.find(model => model.id === mappedModelId);
+  // 1) direct hit
+  const direct = geminiMockModels.find(model => model.id === cleanModelId);
+  if (direct) return direct;
+  // 2) external -> internal
+  const originalId = getOriginalModelName(cleanModelId);
+  if (originalId) return geminiMockModels.find(model => model.id === originalId);
+  // 3) last resort: internal -> external (unlikely to match, but harmless)
+  const mappedId = getMappedModelName(cleanModelId);
+  return geminiMockModels.find(model => model.id === mappedId);
 }

Also applies to: 33-35

🤖 Prompt for AI Agents
In src/utils/geminiHelpers.ts around lines 2 and also affecting lines 33-35, the
code imports and uses getMappedModelName (which maps internal->external) when
handling incoming external/mapped IDs, causing lookups to fail; change the logic
to call getOriginalModelName first to translate external names back to internal
names (with sensible fallbacks to the incoming value if no mapping exists), and
only use getMappedModelName where you need to produce external names for
outgoing responses; update any conditional branches on lines 33-35 to attempt
getOriginalModelName(inputName) before using getMappedModelName so lookups use
internal model names.
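
To make the direction problem concrete, a sketch using the example mapping from the README (illustrative calls, not code from the repo):

```typescript
import { getMappedModelName, getOriginalModelName } from '../config/modelMapping';

// mapping: { "gemini-1.5-pro": "gemini-2.0-pro-exp-2025-01-15" }
getMappedModelName('gemini-2.0-pro-exp-2025-01-15');   // no key matches, returns the input unchanged, so the lookup misses
getOriginalModelName('gemini-2.0-pro-exp-2025-01-15'); // "gemini-1.5-pro", which findGeminiModelById can resolve
```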


/**
* Get current timestamp
@@ -29,7 +30,8 @@ export function generateModelName(): string {
export function findGeminiModelById(modelId: string) {
// Remove 'models/' prefix if present
const cleanModelId = modelId.replace('models/', '');
return geminiMockModels.find(model => model.id === cleanModelId);
const mappedModelId = getMappedModelName(cleanModelId);
return geminiMockModels.find(model => model.id === mappedModelId);
}

/**
18 changes: 17 additions & 1 deletion src/utils/helpers.ts
@@ -1,5 +1,6 @@
import { MockModel, MockTestCase } from '../types';
import { mockModels } from '../data/mockData';
import { getMappedModelName, getOriginalModelName } from '../config/modelMapping';

/**
* Generate unique chat completion ID
@@ -26,7 +27,22 @@ export function getCurrentTimestamp(): number {
* Find model by ID
*/
export function findModelById(modelId: string): MockModel | undefined {
return mockModels.find(model => model.id === modelId);
// First check if it's a direct match with original model ID
let foundModel = mockModels.find(model => model.id === modelId);

if (foundModel) {
return foundModel;
}

// If not found, check if it's a mapped model name, get the original ID
const originalModelId = getOriginalModelName(modelId);
if (originalModelId) {
return mockModels.find(model => model.id === originalModelId);
}

// Finally, try mapping the input and finding the model
const mappedModelId = getMappedModelName(modelId);
return mockModels.find(model => model.id === mappedModelId);
}
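
For reference, the three-step resolution above plays out roughly like this (illustrative calls, assuming the example mapping from the README is loaded):

```typescript
findModelById('gpt-4-mock');    // step 1: direct match on the internal id
findModelById('gpt-4-turbo');   // step 2: external name resolved back via getOriginalModelName
findModelById('no-such-model'); // all three steps miss, so the result is undefined
```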

/**