modsys-text-detection

modsys-text-detection (enhanced) - a set of automation scripts covering common use cases for detecting toxicity in text. The example below configures the SDK with a Google Perspective API key and demonstrates two flows: batch evaluation of items against expected toxicity scores, and single-item detection.

from modsys.client import Modsys

sdk = Modsys()


def evaluate():
    # Route analysis through Google's Perspective API
    # (replace #API-KEY with a real key).
    sdk.use("google_perspective:analyze", google_perspective_api_key="#API-KEY")
    # Score a batch of items; each case carries the expected TOXICITY score
    # and whether the live score should trend lower or higher than it.
    return sdk.evaluate(
        [
            {
                "item": "This is hate speech",
                "__expected": {"TOXICITY": {"value": "0.78"}},
                "__trend": "lower",
            },
            {
                "item": "You suck at this game.",
                "__expected": {"TOXICITY": {"value": "0.50"}},
                "__trend": "higher",
            },
        ]
    )


def detectTextApollo():
    sdk.use("google_perspective:analyze", google_perspective_api_key="#API-KEY")
    # Score a single piece of content, tagged with identifiers for your app
    # (community_id) and the specific item being checked (content_id).
    return sdk.detectText(
        prompt="This is spam",
        community_id="your-app-name-or-username",
        content_id="id-of-content",
    )


if __name__ == "__main__":
    evaluate()
    detectTextApollo()
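
A minimal sketch of consuming a detection result, assuming the response mirrors the __expected shape used above (a dict of attribute scores with string values). The actual structure depends on the provider, so treat the keys and the threshold here as assumptions rather than documented API.

result = detectTextApollo()

# Assumed response shape: {"TOXICITY": {"value": "0.82"}, ...} - this mirrors
# the __expected entries above; verify against the real modsys response.
score = float(result.get("TOXICITY", {}).get("value", 0.0))

TOXICITY_THRESHOLD = 0.7  # hypothetical cut-off; tune per community

if score >= TOXICITY_THRESHOLD:
    print(f"flagged (TOXICITY={score:.2f})")
else:
    print(f"allowed (TOXICITY={score:.2f})")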
    
