About This Workshop
The Azure Content Safety for OpenAI Workshop teaches prompt engineering techniques and shows how to detect jailbreaks and offensive or inappropriate content in text and images quickly and efficiently, including how to identify sexual, violent, hate, and self-harm content.
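As a quick illustration of the kind of moderation logic the workshop covers, the sketch below shows how per-category severity scores from a content safety analysis might be turned into a simple allow/block decision. The four category names match Azure AI Content Safety's harm categories, but the `moderate` helper and the threshold value are hypothetical, chosen for illustration only.

```python
# Illustrative sketch only: maps harm-category severities (0 = safe,
# higher = more severe) to a simple allow/block decision. The category
# names follow Azure AI Content Safety's four harm categories; the
# helper function and threshold are assumptions for this example.

BLOCK_THRESHOLD = 4  # severities at or above this level are rejected


def moderate(categories: dict[str, int]) -> str:
    """Return 'block' if any harm category meets the threshold, else 'allow'."""
    for category, severity in categories.items():
        if severity >= BLOCK_THRESHOLD:
            return "block"
    return "allow"


# An analysis result shaped like the service's four harm categories.
result = {"Hate": 0, "SelfHarm": 0, "Sexual": 2, "Violence": 6}
print(moderate(result))  # "block", since Violence severity is 6
```

In practice you would tune the threshold per category and per use case; the workshop walks through how severity levels behave for real text and image inputs.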
Use this forum for any discussions related to that workshop, either at an instructor-led event or from your own self-guided learning journey online.
Ideas For Discussions
This forum is for free-form discussions, but it helps to add a subject header that has some context for others. Here are some suggestions:
Need Help: Stuck on some issue or want to get a different perspective? Ask the community for help. Chances are, someone else has the same issue!
Cool Feature: Did you explore other aspects of the Azure Content Safety with OpenAI and discover something new? Share it with the community.
New Dataset: Did you try using the Azure Content Safety for OpenAI process with your own dataset or use case? Tell us about it!
Collaboration Time: Have some idea you want to work on? Chances are others may be interested. Share it and see if you can get a discussion going.
Providing Feedback or Bug Reports
Found something not working as you expected? Have suggestions for improving the documentation, or additional resources that would support learners? We want your feedback!
The best way to do this is to Submit An Issue to our Responsible Hub page, and tag it with a relevant label:
tag it bug if reporting a problem
tag it enhancement if requesting a new feature or workshop
tag it documentation if providing feedback on improving content