Replies: 17 comments 15 replies
-
Appending
-
I am getting this "Sorry, the response was filtered by the Responsible AI Service. Please rephrase your prompt and try again." response far too many times when trying to work with Copilot... on a trial at the moment.
-
It seems that the "Responsible AI Service" needs to show that it exists. It forces you to rephrase a perfectly valid question instead of simply declining to generate "sensitive" content. That is a strong argument for going to check out other products.
-
I'm getting nearly every other response filtered by the "Responsible AI Service" today in code using Go channels (very, very far from a sensitive topic, afaik). This is my personal account and I have "Suggestions matching public code" set to Allowed. This is the first time I'm seeing these errors, and given the nature of the conversation it is frustrating to be flagged.
WORKAROUND: Creating a new chat (the + in the upper-right corner of the GitHub Copilot pane) will reset the context.
-
What kind of nonsense is this now... It won't even explain what my code does; it just refuses to do anything. ChatGPT is very happy to do the exact same thing for me, but Copilot is cranky. I'm not writing code for a bomb, I just want to merge some videos with ffmpeg. I guess that's a very big responsibility, so Copilot refuses to help me.
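For reference, the kind of "merge some videos with ffmpeg" task this comment describes is routinely done with ffmpeg's concat demuxer. Below is a minimal sketch; the clip names are placeholders, and the actual ffmpeg invocation is left commented out so the sketch runs even without ffmpeg installed.

```python
# Sketch: merge clips with ffmpeg's concat demuxer.
# File names below are illustrative placeholders.
from pathlib import Path

clips = ["part1.mp4", "part2.mp4"]

# The concat demuxer reads a text file listing the inputs in order,
# one "file '<name>'" line per clip.
list_file = Path("concat_list.txt")
list_file.write_text("".join(f"file '{name}'\n" for name in clips))

# Stream-copy ("-c copy") avoids re-encoding; it assumes all clips
# share the same codecs and parameters.
cmd = ["ffmpeg", "-f", "concat", "-safe", "0",
       "-i", str(list_file), "-c", "copy", "merged.mp4"]
print(" ".join(cmd))
# import subprocess; subprocess.run(cmd, check=True)  # requires ffmpeg
```

Nothing about this workflow is sensitive, which is what makes the filtering in the comment above puzzling.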
-
I'm getting the same: false positives, in nearly every chat right now. Note that I am literally asking it to rewrite a list of 3 objects to include fps, id, and length as parameters, and I have tried rewording multiple times. I'm also unsure how to give feedback on this, as besides the thumbs-down there is nothing I can really do... it starts answering, then blanks the answer out.
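To illustrate how innocuous the request above is, here is a hypothetical sketch of a list of three objects carrying fps, id, and length fields (all names and values are invented for illustration):

```python
# Hypothetical version of the rewrite the comment describes:
# three objects, each with id, fps, and length fields.
videos = [
    {"id": 1, "fps": 30, "length": 120.0},
    {"id": 2, "fps": 24, "length": 95.5},
    {"id": 3, "fps": 60, "length": 42.0},
]

def describe(video):
    """Format one entry; nothing here should trip a content filter."""
    return f"clip {video['id']}: {video['fps']} fps, {video['length']}s"

for v in videos:
    print(describe(v))
```

A completely mundane data-shaping task, which makes the blanked-out answers all the more baffling.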
-
Also getting this for plain Lorem ipsum placeholder lines like "Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Vestibulum tortor quam, feugiat vitae, ultricies eget, tempor sit amet, ante. Donec eu libero sit amet quam egestas semper. Aenean ultricies mi vitae est. Mauris placerat eleifend leo. Quisque sit amet est et sapien ullamcorper pharetra. Vestibulum erat wisi, condimentum sed, commodo vitae, ornare sit amet, wisi. Aenean fermentum, elit eget tincidunt condimentum, eros ipsum rutrum orci, sagittis tempus lacus enim ac dui. Donec non enim in turpis pulvinar facilisis. Ut felis. Praesent dapibus, neque id cursus faucibus, tortor neque egestas augue, eu vulputate magna eros eu erat. Aliquam erat volutpat. Nam dui mi, tincidunt quis, accumsan porttitor, facilisis luctus, metus"
-
I'm hopping on here to say that I just got my first "Sorry, the response was filtered by the Responsible AI Service." message. The workaround of creating a new thread and re-prompting is fine if your query doesn't rely on the context of the current conversation; otherwise, this can be rather problematic. Here is my current case:
Me:
Copilot:
Me:
Copilot:
Granted, in this case I am just using Copilot to reduce the time it would take to read the documentation myself, but this is an example of a conversation-context-dependent query.
Update
Me:
Copilot: I apologize for the confusion earlier. You're correct. According to the
Here's an example of how you can use these keyword arguments: ...
-
I got this twice from the prompt: The "Learn More" link doesn't even link to anything about the Responsible AI Service; it just links to the code-duplication settings at https://docs.github.com/en/copilot/configuring-github-copilot/configuring-github-copilot-settings-on-githubcom#enabling-or-disabling-duplication-detection There's also no clear way to mark it as a false positive (and because it puts a gradient over the answer, you have trouble seeing the answer anyway, so technically you can't confirm), but apparently it used to tell you to use the downvote button? Not keen on it getting updated to be less clear.
-
It's quite annoying, to be honest. I pay for this product; at least give me a reason why something is flagged. I'm currently trying to translate files, and if the selected context is too large it starts spitting out this message. It hurts the workflow a lot, as I now have to select 100 lines, translate them, select the next 100 lines, translate them, and so on.
-
Played around a bit and got it to flag this prompt:
-
I'm encountering a similar issue, but it's not my prompt that gets filtered, it's the answer. I think it's because it contains an actual CLI command named
-
These prompts were filtered for me:
Frustrating that they've already kneecapped and censored the tool. I won't be surprised if Copilot becomes as useless as Google Search within the next few years.
-
I'm working on a map feature. I need to use bindPopup to attach a popup to a marker, and Copilot keeps flagging the response with this message. I guess it's due to the word "popup".
-
Flagging what should be a totally normal response as inappropriate or suspicious creates unnecessary obstacles in communication and workflow. This often happens on platforms that use algorithms to monitor content for compliance with community guidelines or security policies. When these systems mistakenly flag benign responses, they frustrate users, disrupt conversations, and hinder productivity. Such false positives may result from overly strict filters, a lack of context understanding, or algorithmic biases. Addressing the problem requires refining the algorithms, incorporating user feedback, and balancing safety standards against normal, constructive interaction. Improving the accuracy of content-moderation systems would make for a more seamless and effective communication environment.
-
Select Topic Area
Bug
Body
This results in a false positive. I'm not sure what Copilot generated, but it shouldn't be flagged at all. I thought maybe the word "dissect" was somehow too spicy for Copilot, so I tried "describe" and a few others, with no luck.
EDIT: A month later and still no response from GitHub. Sigh.