Graph-powered talent intelligence in < 10 min
Turn scattered career notes into a live knowledge graph — and one-click, ATS-ready résumés.
| Problem | SkillSphere's answer |
|---|---|
| Career data everywhere — LinkedIn, slides, docs | Markdown → Neo4j hypergraph (one source of truth) |
| Generic CVs don't win roles | Graph-query → Job-specific résumé PDF |
| LLM privacy & cost nerves | Runs fully local on Ollama, no API keys |
| Need proof of my graph/AI chops | This repo is the demo — explore the live graph or read the code |
```bash
git clone https://github.com/bprager/SkillSphere.git
cd SkillSphere && ./scripts/quick_start.sh   # builds graph + sample résumé
open output/resume_google.pdf
```

Full install & config instructions live in `docs/installation.md`.
- Hypergraph‑of‑Thought model → Neo4j + Node2Vec embeddings
- Gleaning loop wrings 25 % extra facts from each chunk
- Graph→Markdown→PDF pipeline for recruiter‑ready résumés
- 100 % unit‑tested core modules
Deep‑dive architecture and research notes are in docs/architecture.md.
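The gleaning loop mentioned above can be sketched roughly as follows. The extractor here is a stub standing in for the real LLM call; the function names and the fixed-pass convergence check are illustrative, not the actual SkillSphere API:

```python
def glean_facts(chunk, extract, max_passes=3):
    """Re-query a chunk until no new facts appear (the 'gleaning loop').

    `extract(chunk, known)` stands in for an LLM extraction call that
    returns the full set of facts it can see, given what is already known.
    """
    facts = set()
    for _ in range(max_passes):
        new = extract(chunk, facts) - facts
        if not new:          # extraction has converged, stop early
            break
        facts |= new         # re-run with the enriched context
    return facts

# Toy extractor: surfaces one additional fact per pass to mimic gleaning.
def toy_extract(chunk, known):
    pool = {"uses Neo4j", "built with FastAPI", "tested with pytest"}
    remaining = sorted(pool - known)
    return known | ({remaining[0]} if remaining else set())
```

Each pass feeds the facts found so far back into the prompt, which is what recovers the roughly 25 % of facts a single pass misses.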
This repository is structured to systematically capture and organize technical skills from GitHub repositories. Use these instructions to analyze repositories and extract valuable skill information.
When analyzing a GitHub repository, your goal is to:
- Identify Technical Skills: Extract concrete technical competencies demonstrated in the codebase
- Categorize by Domain: Organize skills into appropriate categories (software, hardware, creative, research)
- Document Systematically: Create consistent, comprehensive skill records
- Maintain Quality: Ensure all records follow the established template structure
```text
SkillSphere/
├── ingestion_docs/
│   ├── skills/          # All skill documentation
│   │   ├── software/    # Programming & technical skills
│   │   ├── hardware/    # Physical systems & electronics
│   │   ├── creative/    # Design & multimedia skills
│   │   └── research/    # Self-directed learning & experiments
│   ├── jobs/            # Professional experience records
│   ├── certs/           # Certification documentation
│   └── extras/          # Additional context files
├── templates/
│   ├── complementary_skills_template.md          # Main skill documentation template
│   └── complementary_skills_quick_reference.md   # YAML metadata reference
└── docs/                # Architecture & system documentation
```
- Analyze README.md, architecture documentation, and pyproject.toml/package.json
- Identify programming languages, frameworks, databases, and deployment technologies
- Look for unique or advanced technical implementations
- Note any custom solutions or sophisticated integrations
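A first pass over the manifest files can be automated. This is a minimal sketch; the file-to-technology map below lists common conventions, not an exhaustive or project-specific list:

```python
from pathlib import Path

# Manifest files that hint at a repository's technology stack (illustrative).
MANIFESTS = {
    "pyproject.toml": "Python",
    "package.json": "JavaScript/TypeScript",
    "Cargo.toml": "Rust",
    "go.mod": "Go",
    "Dockerfile": "Docker",
}

def detect_stack(repo: Path) -> list[str]:
    """Return technologies suggested by manifest files in the repo root."""
    return sorted(tech for name, tech in MANIFESTS.items() if (repo / name).exists())
```

Running this against a checkout gives a starting list of candidate skills to investigate by hand.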
Use this decision matrix:
| Category | Criteria | Examples |
|---|---|---|
| software/ | Programming, APIs, databases, cloud, DevOps | Python, FastAPI, Neo4j, Docker, Kubernetes |
| hardware/ | Physical systems, electronics, embedded | PCB design, 3D printing, IoT, embedded systems |
| creative/ | Design, multimedia, content creation | Video editing, graphic design, documentation |
| research/ | Experiments, prototypes, learning projects | ML research, proof-of-concepts, explorations |
- High Priority: Core technologies central to the project's functionality
- Medium Priority: Supporting technologies that enable the main features
- Low Priority: Configuration tools and standard development utilities
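The decision matrix can be mechanised as a small helper. The keyword sets below are illustrative examples taken from the matrix, not the project's actual mapping:

```python
# Keyword sets derived from the decision matrix above (illustrative only).
CATEGORY_KEYWORDS = {
    "software": {"python", "fastapi", "neo4j", "docker", "kubernetes", "api"},
    "hardware": {"pcb", "3d printing", "iot", "embedded"},
    "creative": {"video editing", "graphic design", "documentation"},
    "research": {"ml research", "proof-of-concept", "prototype"},
}

def categorize(skill: str) -> str:
    """Map a skill description to its category folder; default to research/."""
    text = skill.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return f"{category}/"
    return "research/"
```

Anything that matches no category falls back to `research/`, mirroring the "learning projects" catch-all in the matrix.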
For each identified skill, create a comprehensive record using the template structure.
Primary Template: templates/complementary_skills_template.md
- Use this for full skill documentation
- Include all sections: Overview, Learning Journey, Projects, Competencies
- Focus on transferable skills and professional relevance
Quick Reference: templates/complementary_skills_quick_reference.md
- Contains YAML metadata structure for each skill
- Use for consistent categorization and tagging
- Update when adding new skills
Use descriptive filenames that include the main technology:
Software Skills:
- `neo4j-hypergraphs.md` (database + specific application)
- `fastapi-web-services.md` (framework + application type)
- `docker-orchestration.md` (tool + specific use case)
Hardware Skills:
- `pcb-design-kicad.md` (skill + primary tool)
- `3d-printing-bambulab.md` (process + specific equipment)
Creative Skills:
- `video-editing-davinci.md` (skill + software)
- `technical-writing-documentation.md` (skill + application)
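A tiny helper can enforce this naming convention consistently; `skill_filename` is a hypothetical name, not part of the repository:

```python
import re

def skill_filename(skill: str, tool: str) -> str:
    """Build a 'skill + tool' markdown filename following the convention above."""
    def slug(s: str) -> str:
        # Lowercase, then collapse any non-alphanumeric run into a hyphen.
        return re.sub(r"[^a-z0-9]+", "-", s.lower()).strip("-")
    return f"{slug(skill)}-{slug(tool)}.md"

print(skill_filename("PCB design", "KiCad"))  # pcb-design-kicad.md
```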
Required Information:
- Technical depth and complexity level
- Specific tools and technologies used
- Practical projects or implementations
- Professional relevance and transferability
- Learning progression and current proficiency
Metadata Requirements:
```yaml
title: "Clear, Descriptive Skill Name"
type: "software_skill|hardware_skill|creative_skill|research_skill"
category: "specific_domain"
entity_id: "skill_unique_identifier"
primary_tools: ["Tool1", "Tool2", "Tool3"]
technologies: ["Tech1", "Tech2", "Tech3"]
competencies: ["skill1", "skill2", "skill3"]
```

Use this prompt structure for repository analysis:
Analyze this GitHub repository and extract technical skills for documentation:
1. **Repository Assessment**: Examine the codebase, documentation, and dependencies
2. **Skill Identification**: Identify 3-5 high-priority technical skills demonstrated
3. **Categorization**: Determine the appropriate category (software/hardware/creative/research)
4. **Documentation**: Create comprehensive skill records using the provided template
5. **Integration**: Update the quick reference with new skill metadata
Focus on:
- Advanced or specialized technical implementations
- Technologies central to the project's core functionality
- Skills that demonstrate professional-level competency
- Unique combinations of technologies or innovative approaches
Create detailed documentation that showcases technical depth and practical application.

Documented skills are processed by the hypergraph system to:
- Create semantic connections between related competencies
- Generate job-specific résumé content
- Identify skill gaps and learning opportunities
- Demonstrate continuous learning and technical growth
The hypergraph processes markdown files to extract entities, relationships, and competency mappings that drive these career-intelligence features.
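The first step of that processing is reading each file's YAML front matter. A simplified stdlib-only sketch is shown below; it handles flat `key: value` pairs only, whereas the real pipeline would use a full YAML parser:

```python
import re

def read_front_matter(markdown: str) -> dict[str, str]:
    """Extract the front-matter block between the leading '---' fences.

    Simplified: flat `key: value` pairs only (no nesting, no lists).
    """
    match = re.match(r"^---\n(.*?)\n---", markdown, re.DOTALL)
    if not match:
        return {}
    pairs = (line.split(":", 1) for line in match.group(1).splitlines() if ":" in line)
    return {key.strip(): value.strip().strip('"') for key, value in pairs}

doc = '---\ntitle: "Neo4j Hypergraphs"\ntype: "software_skill"\n---\n# Body'
print(read_front_matter(doc)["title"])  # Neo4j Hypergraphs
```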
I design & build graph‑driven AI solutions that make talent, knowledge and content searchable & actionable. If that sparks ideas for your team:
- Book a 30‑min chat
- Connect on LinkedIn
- Say hi via email: bernd@prager.ws
Let's turn your data into an unfair advantage.
© 2025 Bernd Prager — Apache 2.0. Clone it, fork it, improve it — and tell me what you build!
