Timmyae/gallery

TikTok Automation Planner (Dry-Run)

This workspace includes a minimal, code-first automation scaffold to plan daily TikTok content from CSV inputs and optionally notify a Slack channel via webhook. It does not post to TikTok; it focuses on planning and reporting while you evaluate official TikTok API access paths.

Contents

  • cli.py: Command-line entrypoint for generating a daily plan
  • automation/: Planner, CSV loaders, and Slack notifier
  • content_ideas_detailed.csv: Sample ideas
  • hashtag_strategy.csv: Sample hashtags per category
  • posting_strategy_western.csv: Sample posting times (LA timezone)

Requirements

  • Python 3.10+
  • No external packages required (standard library only)

CSV Formats

  • content_ideas_detailed.csv
    • Headers: category,title,hook,notes
    • Categories expected by the default split: medical, BTS, lifestyle
  • hashtag_strategy.csv
    • Headers: category,hashtags
    • hashtags may be comma- or space-separated; the # prefix is optional
  • posting_strategy_western.csv
    • Headers: time,tz
    • time is in HH:MM 24-hour format; tz is informational only
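The actual loaders live in automation/ and are not reproduced here, but a minimal, stdlib-only sketch of reading these three formats might look like the following (the function names are illustrative, not the real automation/ API):

```python
import csv

def load_ideas(path):
    # content_ideas_detailed.csv: category,title,hook,notes
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def load_hashtags(path):
    # hashtag_strategy.csv: category,hashtags. Hashtags may be comma-
    # or space-separated, with or without a leading '#'; normalize to
    # a list of '#tag' strings per category.
    tags = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            raw = row["hashtags"].replace(",", " ").split()
            tags[row["category"]] = ["#" + t.lstrip("#") for t in raw]
    return tags

def load_times(path):
    # posting_strategy_western.csv: time,tz. time is HH:MM (24h);
    # tz is carried along for display only.
    with open(path, newline="", encoding="utf-8") as f:
        return [(row["time"], row.get("tz", "")) for row in csv.DictReader(f)]
```

Because the hashtag column is normalized, mixed input such as "#health wellness,medlife" yields a uniform ["#health", "#wellness", "#medlife"].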

Usage

Run a dry-run plan for 5 videos and print to console:

python3 cli.py --count 5

Specify custom CSV paths:

python3 cli.py \
  --ideas /path/to/content_ideas_detailed.csv \
  --hashtags /path/to/hashtag_strategy.csv \
  --times /path/to/posting_strategy.csv \
  --count 5

Send the plan to Slack via Incoming Webhook (optional):

export SLACK_WEBHOOK_URL="https://hooks.slack.com/services/XXX/YYY/ZZZ"
python3 cli.py --count 5

You can also pass the webhook explicitly:

python3 cli.py --count 5 --slack-webhook "https://hooks.slack.com/services/XXX/YYY/ZZZ"
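The Slack notifier in automation/ is not shown here; a stdlib-only sketch of what posting a plan to an Incoming Webhook involves (using Slack's simple {"text": ...} payload, with the same environment-variable fallback the CLI describes) could look like this. The function name is illustrative:

```python
import json
import os
import urllib.request

def notify_slack(text, webhook_url=None):
    # Prefer an explicit URL; otherwise fall back to SLACK_WEBHOOK_URL,
    # mirroring the CLI behavior above. Returns False if unconfigured.
    url = webhook_url or os.environ.get("SLACK_WEBHOOK_URL")
    if not url:
        return False
    body = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status == 200
```

Returning False when no webhook is configured lets the planner degrade gracefully to console-only output instead of failing the run.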

Notes on TikTok API Posting

  • TikTok offers content posting endpoints through specific programs (e.g., for Business accounts) and requires app registration, review, and appropriate scopes.
  • This scaffold intentionally avoids posting and focuses on planning + notifications while you secure official access and credentials.

Scheduling (optional)

Use cron to run daily at 08:00 local time and send to Slack:

0 8 * * * cd /workspace && SLACK_WEBHOOK_URL="https://hooks.slack.com/services/XXX/YYY/ZZZ" /usr/bin/python3 cli.py --count 5 >/tmp/tiktok_plan.log 2>&1

Google AI Edge Gallery ✨


Explore, Experience, and Evaluate the Future of On-Device Generative AI with Google AI Edge.

The Google AI Edge Gallery is an experimental app that puts the power of cutting-edge Generative AI models directly into your hands, running entirely on your Android (available now) and iOS (coming soon) devices. Dive into a world of creative and practical AI use cases, all running locally, without needing an internet connection once the model is loaded. Experiment with different models, chat, ask questions with images, explore prompts, and more!

Screenshots: Overview · Ask Image · Prompt Lab · AI Chat

✨ Core Features

  • 📱 Run Locally, Fully Offline: Experience the magic of GenAI without an internet connection. All processing happens directly on your device.
  • 🤖 Choose Your Model: Easily switch between different models from Hugging Face and compare their performance.
  • 🖼️ Ask Image: Upload an image and ask questions about it. Get descriptions, solve problems, or identify objects.
  • ✍️ Prompt Lab: Summarize, rewrite, generate code, or use freeform prompts to explore single-turn LLM use cases.
  • 💬 AI Chat: Engage in multi-turn conversations.
  • 📊 Performance Insights: Real-time on-device benchmarks (time to first token, decode speed, latency).
  • 🧩 Bring Your Own Model: Test your local LiteRT .task models.
  • 🔗 Developer Resources: Quick links to model cards and source code.

🏁 Get Started in Minutes!

  1. Download the App: Grab the latest APK.
  2. Install & Explore: For detailed installation instructions (including for corporate devices) and a full user guide, head over to our Project Wiki!

🛠️ Technology Highlights

  • Google AI Edge: Core APIs and tools for on-device ML.
  • LiteRT: Lightweight runtime for optimized model execution.
  • LLM Inference API: Powering on-device Large Language Models.
  • Hugging Face Integration: For model discovery and download.

🤝 Feedback

This is an experimental Alpha release, and your input is crucial!

📄 License

Licensed under the Apache License, Version 2.0. See the LICENSE file for details.
