Google Workspace + AI Agents: v1 Raw Artifacts

Status: abandoned in place. This repo contains the unedited output of a failed "let Claude manage the research" experiment from April 2026. The results are real but disorganized, inconsistently formatted, and probably hard to reproduce. I'm keeping them as-is for honesty, not as a reference.

The write-up of what I actually learned is on the blog: Research Slop: Driving Google Workspace from the Terminal.

What happened

I spent a few days testing AI tools for programmatic Google Workspace access (Claude Code, Claude Cowork, Gemini CLI, GWS CLI, ChatGPT). I let Claude Code drive the process with minimal structure: no test protocol before testing, no consistent file conventions, no separation of observation from interpretation.

The result was 12 markdown files with no coherent relationship to each other. A research plan written after testing had already started. Test results that vary wildly in format. A workflow playbook that references features that turned out not to exist (gws docs pull/push). And a first blog draft that smoothed over the rough edges into something that sounded thorough but wasn't honest.

What's in here

  • TEST-RESULTS-*.md: Raw session logs from each tool test. Some have structured tables, some are narrative. The observations are real; the organization is not.
  • RESEARCH-PLAN.md: Written after two tools had already been tested. Retrofitted planning.
  • HDCA-WORKFLOW-PLAYBOOK.md: Workflow designs that reference features that don't exist. Aspirational, not validated.
  • capability-matrix.html: Interactive comparison chart. Roughly accurate but not maintained.
  • screenshots/: 16 screenshots from auth flows and tool operations. These are the most reliable artifacts in the repo.
  • blog-post-ai-google-workspace.md: The original Claude-drafted blog post. Sanitized and not honest about the rough edges. The published version on the blog is a rewrite.

Why I'm not cleaning this up

The mess is the point. This is what you get when you hand Claude Code a vague research brief and let it figure out the structure as it goes. The v2 experiment (coming separately) will use a proper CLAUDE.md with defined protocols, a test matrix checklist, and structured session logs. The comparison between the v1 and v2 processes is part of the story.

If you're looking for actual guidance

Read the blog post. It has the condensed, honest version of what works, what doesn't, and why Google makes you create a GCP project just to read your own calendar.


Part of an ongoing experiment at cruxcapacity.com.

April 2026

