This plugin provides a simple way to chat with AI in JMeter. Feather Wand serves as your intelligent assistant for JMeter test plan development, optimization, and troubleshooting.
About the name: The name "Feather Wand" was suggested by my children who were inspired by an episode of the animated show Bluey. In the episode, a simple feather becomes a magical wand that transforms the ordinary into something special (heavy) - much like how this plugin aims to transform your JMeter experience with a touch of AI magic!
- Chat with AI directly within JMeter using either Claude or OpenAI models
- Get suggestions for JMeter elements based on your needs
- Ask questions about JMeter functionality and best practices
- Command intellisense with auto-completion for special commands in the chat input box
- Use the `@this` command to get detailed information about the currently selected element
- Use the `@code` command to extract code blocks from AI responses into the JSR223 editor
- Use the `@usage` command to view token usage statistics for your AI interactions
- Use the `@lint` command to automatically rename elements in your test plan for better organization and readability
- Use the `@optimize` command to get optimization recommendations for the currently selected element in your test plan
- Use the `@wrap` command to intelligently group HTTP samplers under Transaction Controllers for better organization and reporting
- Use the right-click context menu to refactor code, format code, and add functions in the JSR223 script editor
- Customize AI behavior through configuration properties
- Switch between Claude and OpenAI models based on your preference or specific needs
- Install the JMeter Plugins Manager from the Plugins Manager website (if you don't already have it).
- Restart JMeter.
- Launch the Plugins Manager.
- Search for `feather wand` under the `Available Plugins` tab.
- Select it and click the `Apply Changes and Restart JMeter` button.
- Download the latest release JAR file from the Releases page.
- Place the JAR file in the `lib/ext` directory of your JMeter installation.
- Copy the contents of `jmeter-ai-sample.properties` into your `jmeter.properties` file (located in the `bin` directory of your JMeter installation) or into your `user.properties` file.
- Configure your API key(s) for Anthropic and/or OpenAI in the properties file (a minimal example is shown after this list).
- Restart JMeter.
- The Feather Wand plugin will appear as a new component in the right-click menu under "Add" > "Non-Test Elements" > "Feather Wand".
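For example, a minimal entry in your `user.properties` might look like the sketch below; the key values are placeholders, and you only need to configure the service(s) you actually plan to use:

```properties
# Minimal Feather Wand setup (placeholder keys shown; substitute your own)
anthropic.api.key=sk-ant-xxxxxxxx
openai.api.key=sk-xxxxxxxx
```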
The Feather Wand plugin can be configured through JMeter properties. Copy the `jmeter-ai-sample.properties` file content to your `jmeter.properties` or `user.properties` file and modify the properties as needed.
Claude (Anthropic) properties:

| Property | Description | Default Value |
|---|---|---|
| `anthropic.api.key` | Your Claude API key | Required |
| `claude.default.model` | Default Claude model to use | `claude-3-sonnet-20240229` |
| `claude.temperature` | Temperature setting (0.0-1.0) | 0.7 |
| `claude.max.tokens` | Maximum tokens for AI responses | 1024 |
| `claude.max.history.size` | Maximum conversation history size | 10 |
| `claude.system.prompt` | System prompt that guides Claude's responses | See sample properties file |
| `anthropic.log.level` | Logging level for Anthropic API requests (`info` or `debug`) | Empty (disabled) |
OpenAI properties:

| Property | Description | Default Value |
|---|---|---|
| `openai.api.key` | Your OpenAI API key | Required |
| `openai.default.model` | Default OpenAI model to use | `gpt-4o` |
| `openai.temperature` | Temperature setting (0.0-1.0) | 0.5 |
| `openai.max.tokens` | Maximum tokens for AI responses | 1024 |
| `openai.max.history.size` | Maximum conversation history size | 10 |
| `openai.system.prompt` | System prompt that guides OpenAI's responses | See sample properties file |
| `openai.log.level` | Logging level for OpenAI API requests (`INFO` or `DEBUG`) | Empty (disabled) |
Code refactoring properties:

| Property | Description | Default Value |
|---|---|---|
| `jmeter.ai.refactoring.enabled` | Enable code refactoring for the JSR223 script editor | `true` |
| `jmeter.ai.service.type` | The AI service to use for code refactoring (`openai` or `anthropic`) | `openai` |
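Putting a few of these together, an illustrative tuning block built from the default values documented above might look like this; adjust the numbers to suit your workload and budget:

```properties
# Illustrative tuning values taken from the documented defaults
claude.temperature=0.7
claude.max.tokens=1024
claude.max.history.size=10

openai.temperature=0.5
openai.max.tokens=1024
openai.max.history.size=10

# JSR223 code refactoring support
jmeter.ai.refactoring.enabled=true
jmeter.ai.service.type=openai
```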
The system prompt defines how the AI (Claude or OpenAI) responds to your queries. You can customize this in the properties file to focus on specific aspects of JMeter or add your own guidelines.
Both `claude.system.prompt` and `openai.system.prompt` can be configured separately in the properties file. The default prompts are designed to provide helpful, JMeter-specific responses tailored to each AI model's capabilities.
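As a sketch, a custom prompt could be supplied like this; the prompt text below is purely illustrative and not the shipped default, which lives in `jmeter-ai-sample.properties`:

```properties
# Hypothetical custom prompt (the shipped default is in jmeter-ai-sample.properties)
claude.system.prompt=You are a JMeter expert. Keep answers concise and focus on test plan design, correlation, and performance testing best practices.
```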
Use the `@usage` command to view detailed token usage information for your AI interactions:
- **How to Use:**
  - Simply type `@usage` in the chat
  - The command will show usage statistics for either OpenAI or Anthropic, depending on which service you're using
- **Information Provided:**
  - Overall summary of total conversations and tokens used
  - Detailed breakdown of recent conversations (last 10)
  - Token usage per conversation (input and output tokens)
  - Timestamps and model information
  - Link to official pricing pages for cost information
- **Example Output:**

        # Usage Summary

        ## Overall Summary
        - Total Conversations: 5
        - Total Input Tokens: 1500
        - Total Output Tokens: 2000
        - Total Tokens: 3500

        ## Recent Conversations
        - Conversation 1: 300 input, 400 output tokens
        - Conversation 2: 250 input, 350 output tokens
        ...

- **Benefits:**
  - Track your API usage and costs
  - Monitor token consumption patterns
  - Identify potential optimization opportunities
  - Keep track of conversation history
Use the `@this` command in your message to get detailed information about the currently selected element in your test plan. For example:
- "Tell me about @this element"
- "How can I optimize @this?"
- "What are the best practices for @this?"
Feather Wand will analyze the selected element and provide tailored information and advice.
Use the `@optimize` command (or simply type "optimize") to get optimization recommendations for the currently selected element in your test plan. This command will:
- Analyze the selected element's configuration
- Identify potential performance bottlenecks
- Suggest specific, actionable improvements
- Provide best practices for that element type
For example, if you have an HTTP Request sampler selected, the optimization recommendations might include:
- Connection and timeout settings adjustments
- Proper header management
- Efficient parameter handling
- Encoding settings optimization
- Redirect handling improvements
Simply select an element in your test plan and type `@optimize` or `optimize` in the chat to receive tailored optimization recommendations.
Use the `@lint` command to automatically rename elements in your test plan for better organization and readability:
- **How to Use:**
  - Type `@lint` in the chat to analyze your test plan structure
  - The AI will suggest better names for elements based on their function and context
  - Review the suggestions and confirm to apply the changes
  - Use the undo/redo buttons to revert or reapply changes if needed
  - For example: `@lint rename the elements based on the URL` or `@lint rename the elements in pascal case`
- **Benefits:**
  - Improves test plan readability and maintenance
  - Applies consistent naming conventions across your test plan
  - Helps identify elements with generic or unclear names
  - Makes test plans more understandable for team members
  - Changes can be undone if you don't like them, or redone if you do
- **Best Practices:**
  - Run `@lint` after creating a new test plan to establish good naming from the start
  - Use it before sharing test plans with team members
  - Apply it to imported test plans to make them conform to your naming standards
This feature is particularly valuable for large test plans or when working in teams where consistent naming is essential for collaboration.
Use the `@wrap` command to intelligently group HTTP samplers under Transaction Controllers for better organization and reporting:
- **How to Use:**
  - Select a Thread Group in your test plan
  - Type `@wrap` in the chat
  - Feather Wand will analyze your HTTP samplers and group similar ones under Transaction Controllers
  - Use the undo button to revert changes if needed
- **Benefits:**
  - Improves test plan organization and readability
  - Enhances test reports with meaningful transaction metrics
  - Groups related HTTP requests logically
  - Preserves the original order and hierarchy of samplers
  - Maintains all child elements (like assertions and post-processors) with their parent samplers
- **How It Works:**
  - Analyzes sampler names and paths to identify logical groupings
  - Creates appropriately named Transaction Controllers
  - Moves samplers under their respective Transaction Controllers
  - Preserves the original order and hierarchy
  - Uses pattern matching and structural analysis (not AI) for its grouping logic
This feature is especially useful for imported or recorded test plans that contain many individual HTTP samplers without proper organization.
Feather Wand supports both Anthropic (Claude) and OpenAI APIs. You can configure either or both in your properties file.
- Go to Anthropic API website
- Sign up for an account
- Create a new API key
- Copy the API key and paste it into the `anthropic.api.key` property in your `jmeter.properties` file
- For more information about the API key, visit the API Key documentation
- Go to OpenAI API website
- Sign up for an account
- Create a new API key
- Copy the API key and paste it into the `openai.api.key` property in your `jmeter.properties` file
- For more information about the API key, visit the API Key documentation
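If you need to troubleshoot API traffic, the log-level properties documented in the configuration tables above can be enabled temporarily, for example:

```properties
# Optional request logging while troubleshooting (leave unset to keep it disabled)
anthropic.log.level=debug
openai.log.level=DEBUG
```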
Feather Wand automatically filters available models to show only chat-compatible models. By default, it excludes audio, TTS, transcription, and other non-chat models. You can select your preferred model from the dropdown in the UI, or set default models in the properties file:
- For Claude: `claude.default.model` (e.g., `claude-3-7-sonnet-20250219`)
- For OpenAI: `openai.default.model` (e.g., `gpt-4o`)
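For example, to pin defaults in your properties file (the model names below are just the examples above and will change as providers release new models):

```properties
# Example default-model selection; any supported chat model works here
claude.default.model=claude-3-7-sonnet-20250219
openai.default.model=gpt-4o
```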
Feather Wand applies intelligent filtering to the available models to ensure you only see relevant chat models in the dropdown:
- OpenAI Models: Filters out audio, TTS, whisper, davinci, search, transcribe, realtime, and instruct models to show only GPT chat models.
- Claude Models: Shows only the latest available Claude models.
This filtering ensures that you only see models that are compatible with the chat interface and appropriate for JMeter-related tasks.
If you encounter any issues or have suggestions for improvement, please open an issue on the GitHub repository.
Please check the roadmap for more details.
While the Feather Wand plugin aims to provide helpful assistance, please keep the following in mind:
- AI Limitations: The AI can make mistakes or provide incorrect information. Always verify critical suggestions before implementing them in production tests.
- Backup Your Test Plans: Always backup your test plans before making significant changes, especially when implementing AI suggestions.
- Test Verification: After making changes based on AI recommendations, thoroughly verify your test plan functionality in a controlled environment before running it against production systems.
- Performance Impact: Some AI-suggested configurations may impact test performance. Monitor resource usage when implementing new configurations.
- Security Considerations: Do not share sensitive information (credentials, proprietary code, etc.) in your conversations with the AI.
- API Costs: Be aware that using the Claude API or OpenAI API incurs costs based on token usage. The plugin is designed to minimize token usage, but excessive use may result in higher costs.
This plugin is provided as a tool to assist JMeter users, but the ultimate responsibility for test plan design, implementation, and execution remains with the user.