# Perp QA #18

I need spoof issue specs to be used to test two LLM models.
/ask Create me 3 spoof issue specs that I can use to test AI. They should vary from short and easy to mid-long and complex. I'm testing which is best at extracting relevant context inferred from both the question and the peripheral context such as issue conversation and specification.

Perp Tokens: {"prompt_tokens":375,"completion_tokens":247,"total_tokens":622} (SentencePieceProcessorEncoder, model: SPP, > 127 tokens)

Perp Response:
Sure, here are three spoof issue specs that you can use to test two LLM models:

Spoof issue spec 1 (Short and Easy):
Title: Describe the weather in New York City.
Body: What is the current temperature and weather conditions in New York City?

Spoof issue spec 2 (Mid-Long and Complex):
Title: How do you recommend planning a trip to Paris?
Body: What are the best things to do in Paris, where should I stay, what are the best restaurants to try, and how much should I budget for the trip? Include any relevant information such as visa requirements and currency exchange rates.

Spoof issue spec 3 (Long and Complex):
Title: How can I optimize my marketing strategy for my new startup?
Body: What are the best marketing channels to use for reaching potential customers, what is the most effective way to price your products, and how can you measure the success of your marketing campaign? Include any relevant information such as industry trends and customer behaviours.

I hope these issue specs work well for your testing purposes!

GPT Response:
Here are three spoof issue specs that vary in length and complexity:
These spoof issue specs vary in length and complexity, allowing you to test the AI's ability to extract relevant context from different types of issues.

GPT Tokens: {"output":239,"input":442,"total":681}
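For context on where these counts come from: the bot reports SentencePiece token counts alongside the GPT tokenizer's own figures. Below is a minimal sketch of that kind of counting with the `sentencepiece` Python package; the model file name is a placeholder, since the actual tokenizer model behind the SPP counts is not identified anywhere in this thread.

```python
# Minimal sketch of SentencePiece-based token counting.
# Assumption: "spp.model" is a placeholder path -- the actual tokenizer model
# used for the counts in this thread is not specified above.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="spp.model")

prompt = "Create me 3 spoof issue specs that I can use to test AI."
completion = "Sure, here are three spoof issue specs..."

prompt_tokens = len(sp.encode(prompt, out_type=int))
completion_tokens = len(sp.encode(completion, out_type=int))

print({
    "prompt_tokens": prompt_tokens,
    "completion_tokens": completion_tokens,
    "total_tokens": prompt_tokens + completion_tokens,
})
```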
Bro? lmao

/ask These are poor in quality, let's step it up. Have each issue be themed in software development; as a bonus, have the niche be Web3 based. Increase the issue spec content so that it is representative of real-world, full-size issues. Return only the new and improved spoof specifications.
Tokens{"prompt_tokens":881,"completion_tokens":627,"total_tokens":1508} SentencePieceProcessorEncoder Model: SPP > 127 tokens Perp ResponseHere are three spoof issue specs based on the theme of software development and Web3: Spoof issue spec 1 (Short and Easy): Login: xyzbrowser@gmail.com Relevant context: The user is deploying a smart contract on the Ethereum blockchain and is encountering an error message during the deployment process, causing the contract to fail. Spoof issue spec 2 (Mid-Long and Complex): Login: arclears@gmail.com Relevant context: The user is building a decentralized supply chain platform that will allow companies to transact securely and efficiently on the Ethereum blockchain, and is looking for guidance on the best way to design and implement this platform. Spoof issue spec 3 (Long and Complex): Login: brycbryc@gmail.com Relevant context: The user is using an Ethereum smart contract to store sensitive data and monetary transactions, and wants to ensure the security of the contract by conducting a security audit. GPT ResponseSure, here are the new and improved spoof issue specifications themed in software development with a Web3 niche:
These new and improved spoof issue specifications are themed in software development with a Web3 niche and provide more detailed and representative content of real-world, full-size issues. GPT Tokens{"output":387,"input":896,"total":1283}
`` |
/ask These are much better. Create another unique set of spoof specs.
Perp Tokens{"prompt_tokens":1877,"completion_tokens":411,"total_tokens":2288} GPT Tokens{"output":406,"input":1737,"total":2143} Perp ResponseHere are three more spoof issue specs themed in software development with a Web3 niche, with additional context and a more unique approach:
GPT ResponseSure, here is another set of unique spoof issue specifications:
These spoof issue specifications are focused on software development with a Web3 niche, providing real-world, full-size issue scenarios. |
/ask Create 2 more spoof specs.
Perp Tokens{"prompt_tokens":1581,"completion_tokens":276,"total_tokens":1857} GPT Tokens{"output":243,"input":1504,"total":1747} Perp Response
GPT ResponseSure, here are two more spoof issue specifications:
|
/ask Build one real issue from the list of issue specs here. Be as detailed as possible and have the issue be scoped towards implementing and building the spec at a technical level. Think first: logical, prioritized tasks in line with the overall spec.
Perp Tokens{"prompt_tokens":3874,"completion_tokens":155,"total_tokens":4029} GPT Tokens{"output":627,"input":3488,"total":4115} Perp ResponseSpoof Issue Specs for Testing Two LLM Models
GPT ResponseIssue Title: Implement a Decentralized Marketplace for Digital Art Issue Description: Design and develop a decentralized marketplace for buying and selling digital art using blockchain technology. The marketplace should support features like artwork listing, bidding, and secure transactions. Consider implementing additional functionalities like artist verification, royalties distribution, and metadata storage. Relevant Context: The user wants to create a platform where artists can showcase and sell their digital artwork in a decentralized manner, ensuring authenticity, ownership, and fair compensation. Tasks:
Estimated Time: The estimated time for completing this issue will depend on the complexity of the marketplace and the chosen technologies. However, it is expected to take several weeks to months to implement and test all the functionalities. |
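To make the "artwork listing, bidding, and secure transactions" requirement more concrete, here is a minimal client-side sketch in Python using web3.py. The RPC endpoint, contract address, ABI, and function names (`listArtwork`, `placeBid`) are all assumptions invented for illustration; the issue above does not define an interface, so treat this as a sketch rather than the spec's implementation.

```python
# Sketch only: the marketplace contract, its ABI, and its function names are
# hypothetical -- nothing in the issue above defines this interface.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # assumed local dev node (Hardhat/Anvil)

MARKETPLACE_ABI = [
    {"type": "function", "name": "listArtwork", "stateMutability": "nonpayable",
     "inputs": [{"name": "tokenURI", "type": "string"},
                {"name": "reservePrice", "type": "uint256"}],
     "outputs": [{"name": "listingId", "type": "uint256"}]},
    {"type": "function", "name": "placeBid", "stateMutability": "payable",
     "inputs": [{"name": "listingId", "type": "uint256"}],
     "outputs": []},
]
MARKETPLACE_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder

marketplace = w3.eth.contract(address=MARKETPLACE_ADDRESS, abi=MARKETPLACE_ABI)
artist, collector = w3.eth.accounts[0], w3.eth.accounts[1]

# Artist lists a piece of digital art with metadata on IPFS and a 1 ETH reserve price.
tx_hash = marketplace.functions.listArtwork(
    "ipfs://<metadata-cid>",   # elided CID, kept as a placeholder
    w3.to_wei(1, "ether"),
).transact({"from": artist})
w3.eth.wait_for_transaction_receipt(tx_hash)

# A collector bids on listing 1, sending ETH along with the call.
bid_hash = marketplace.functions.placeBid(1).transact({
    "from": collector,
    "value": w3.to_wei(1.5, "ether"),
})
w3.eth.wait_for_transaction_receipt(bid_hash)
```

Whether listing and bidding live in one contract or are split (for example an ERC-721 collection plus a separate auction contract) would be decided in the smart-contract design task.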
/ask Remember the following: the spec should be aimed towards the implementer of the spec, so you should write it in such a way that it becomes a sort of tutorial or guide on how to achieve the spec. Considering this, please rewrite the real issue spec you just created.
Perp Tokens{"prompt_tokens":802,"completion_tokens":644,"total_tokens":1446} GPT Tokens{"output":530,"input":847,"total":1377} Perp ResponseContext: A decentralized marketplace for digital art is to be created using blockchain technology. The marketplace should have features like artwork listing, bidding, and secure transactions. Additional functionalities like artist verification, royalties distribution, and metadata storage need to be implemented to ensure authenticity, ownership, and fair compensation. Task Breakdown:
Estimated Time Implications: The estimated time for completing this issue will depend on the complexity of the marketplace and the chosen technologies. However, it is expected to take several weeks to months to implement and test all the functionalities. GPT ResponseTo implement a decentralized marketplace for digital art, follow these steps:
Estimated Time: The estimated time for completing this issue will depend on the complexity of the marketplace and the chosen technologies. However, it is expected to take several weeks to months to implement and test all the functionalities. |
/ask Based on the technical issue spec you just created, determine an estimated timeframe for achieving each of the deliverables. Return your response as a markdown table.
Perp Tokens{"prompt_tokens":287,"completion_tokens":140,"total_tokens":427} GPT Tokens{"output":172,"input":382,"total":554} Perp Response
GPT ResponseBased on the technical issue spec provided, here is an estimated timeframe for achieving each of the deliverables:
Please note that the estimated timeframes are approximate and can vary depending on the complexity of the marketplace and the chosen technologies. It is important to allocate sufficient time for thorough testing and optimization to ensure the functionality and security of the marketplace. |
/ask Create a summary of everything that has been directly asked of you; return a markdown table response.
Perp Tokens{"prompt_tokens":485,"completion_tokens":432,"total_tokens":917} GPT Tokens{"output":391,"input":565,"total":956}
``
### SPP Tokens
```json
Note: JSON in responses are throwing this off rn: 580 tokens Perp Response
GPT Response
|
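Regarding the note above about JSON inflating the SPP count: one plausible mitigation, assuming the stray JSON shows up inside fenced `json` code blocks, is to strip those blocks before encoding. The sketch below illustrates that idea; it is not the project's actual fix, and the tokenizer model path is again a placeholder.

```python
# Sketch of one possible mitigation: strip fenced JSON blocks out of a
# response before counting tokens, so echoed token-stats objects do not
# inflate the SentencePiece count. Assumption: the stray JSON always arrives
# inside fenced blocks, which this thread does not actually confirm.
import re
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="spp.model")  # placeholder model path

def count_tokens(text: str, strip_json: bool = True) -> int:
    if strip_json:
        # Remove ```json ... ``` fenced blocks before encoding.
        text = re.sub(r"```json.*?```", "", text, flags=re.DOTALL)
    return len(sp.encode(text, out_type=int))

response = 'Here is your summary:\n```json\n{"output":391,"input":565,"total":956}\n```'
print(count_tokens(response, strip_json=False))  # count with the JSON included
print(count_tokens(response, strip_json=True))   # count with the JSON stripped
```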
/help