Core Concepts
Foundational understanding of GitHub Agentic Workflows
What is GitHub Agentic Workflows?
AI-powered automations that understand context, make decisions, and take action — from natural language instructions in Markdown.
Unlike fixed if-then rules, agentic workflows use coding agents (Copilot, Claude Code, OpenAI Codex) to interpret instructions and handle tasks requiring judgment — triaging issues, fixing CI failures, updating docs.
How It Works
Workflow Markdown files are compiled with `gh aw compile` into locked GitHub Actions workflows.
Key Properties
| Property | Details |
|---|---|
| Platform | Runs inside GitHub Actions as standard workflows |
| Authoring | Natural language in Markdown with YAML frontmatter |
| Security | Read-only by default; writes only through “safe outputs” |
| Agents | Copilot CLI (default), Claude Code, OpenAI Codex |
| License | MIT — fully open source |
| Language | CLI written in Go (70.8%), JavaScript (27.6%) |
| Homepage | gh.io/gh-aw |
History & Origins
Created at GitHub Next by Don Syme and Peli de Halleux — from private R&D to 4,000+ stars in under a year.
Creators
Don Syme
Principal Researcher at Microsoft Research, creator of F#, now at GitHub Next. Lead author of the official blog posts and the 19-part “Meet the Workflows” blog series.
Peli de Halleux
Researcher at GitHub Next. Creator of “Peli’s Agent Factory” — a collection of 100+ automated agentic workflows run continuously in the gh-aw repository and across GitHub internally.
Collaboration
A collaboration between GitHub, Microsoft Research, and Azure Core Upstream. Top contributors include Copilot (AI), dsyme, pelikhan, and 40 others.
“Waking Up to a Healthier Repository”
The founding vision: what does repository automation with strong guardrails look like in the era of AI coding agents?
“Imagine visiting your repository in the morning and feeling calm because you see:”
🏷️ Issues Triaged & Labelled
New issues automatically analyzed, categorized, and assigned to the right teams — semantically, not just by keywords.
🔧 CI Failures Investigated
Failed builds analyzed and correlated with recent changes, with proposed fixes ready for your review.
📝 Documentation Updated
Docs automatically aligned with recent code changes — no more stale READMEs or outdated API references.
🧪 Testing PRs Await Review
New pull requests that improve test coverage are ready for human review — small, focused, and incremental.
“All of it visible, inspectable, and operating within the boundaries you’ve defined.”
“A useful mental model: if repetitive work in a repository can be described in words, it might be a good fit for an agentic workflow.”
🏭 Peli’s Agent Factory — The Living Proof
A collection of 100+ automated agentic workflows running continuously in the gh-aw repository. These specialized agents handle documentation, style, cleanup, security, triage, and culture.
“These agents never take a day off, quietly working to make our codebase better.”
Continuous AI
The systematic, automated application of AI to software collaboration — augmenting CI/CD with judgment-based, context-dependent automation.
“GitHub Agentic Workflows hosts coding agents in GitHub Actions, to perform complex, multi-step tasks automatically. This enables Continuous AI — systematic, automated application of AI to software collaboration.”
Not a Replacement for CI/CD
“GitHub Agentic Workflows and Continuous AI are designed to augment existing CI/CD rather than replace it. They do not replace build, test, or release pipelines, and their use cases largely do not overlap with deterministic CI/CD workflows.”
The Gap Continuous AI Fills
| CI/CD (Deterministic) | Continuous AI (Judgment-Based) |
|---|---|
| Tests pass or fail | Does documentation match implementation? |
| Builds succeed or don’t | Is this issue properly triaged? |
| Linter flags violations | Are there performance anti-patterns? |
| Rules-based automation | Intent-based automation |
| YAML configuration | Natural language instructions |
“CI isn’t failing. It’s doing exactly what it was designed to do. But many of the hardest and most time-consuming parts of engineering are judgment-heavy and context-dependent.”
The 30+ Year Vision
“The first era of AI for code was about code generation. The second era involves cognition and tackling the cognitively heavy chores off of developers.”
“This is the first harbinger of the new phase of AI. We’re moving from generation to reasoning.”
Humans Stay in the Loop
“Agentic workflows create an agent-only sub-loop that’s able to be autonomous because agents are acting under defined terms. But it’s important that humans stay in the broader loop of forward progress in the repository, through reports, issues, and pull requests. With GitHub Agentic Workflows, pull requests are never merged automatically, and humans must always review and approve.”
The Six Pillars of Continuous AI
Six categories of repository automation that showcase what’s possible when AI handles judgment-heavy, context-dependent tasks — things that “would be difficult or impossible to accomplish with traditional YAML workflows alone.”
Pillar 1: Continuous Triage
Automatically summarize, label, and route new issues. The agent reads incoming issues, understands their content semantically, applies appropriate labels, assigns to the right team members, and adds helpful summaries or requests for clarification.
Example workflows: Issue Clarifier, Auto-labeler, Priority Router
Pillar 2: Continuous Documentation
Keep READMEs and documentation aligned with code changes. The agent reads function docstrings, compares them to implementations, detects mismatches, and opens PRs to update either the code or documentation.
Pillar 3: Continuous Code Simplification
Repeatedly identify code improvements and open pull requests. Examples include pre-compiling regex patterns, removing dead code, simplifying complex conditionals, and extracting reusable functions.
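As a sketch, a workflow for this pillar might look like the following hypothetical file. It reuses only frontmatter options shown elsewhere in this document; the exact schema is defined by `gh aw`:

```markdown
---
on:
  schedule: daily
permissions:
  contents: read
safe-outputs:
  create-pull-request:
    title-prefix: "[simplify] "
    labels: [cleanup, automated]
tools:
  github:
---

# Code Simplifier

Pick one small simplification opportunity (dead code, a complex
conditional, an uncompiled regex) and open a focused pull request
that fixes exactly that one thing.
```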
Pillar 4: Continuous Test Improvement
Assess test coverage and add high-value tests. The agent opens small PRs daily so developers can review incrementally.
Pillar 5: Continuous Quality Hygiene
Proactively investigate CI failures and propose targeted fixes. The agent analyzes failed CI runs, correlates them with recent changes, proposes fixes for flaky tests, identifies dependency conflicts, and suggests configuration improvements.
Pillar 6: Continuous Reporting
Create regular reports on repository health, activity, and trends. Pulls from issues, PRs, commits, and CI results to synthesize cross-source insights.
”The value isn’t the report itself. It’s the synthesis across multiple data sources that would otherwise require manual analysis.”
📊 Real Metrics from Peli’s Agent Factory — Documentation Pillar
| Workflow | Merged PRs | Proposed PRs | Merge Rate |
|---|---|---|---|
| Daily Documentation Updater | 57 | 59 | 96% |
| Glossary Maintainer | 10 | 10 | 100% |
| Documentation Unbloat | 88 | 103 | 85% |
| Documentation Noob Tester | 9 | 21 | 43% |
| Slide Deck Maintainer | 2 | 5 | 40% |
| Multi-device Docs Tester | 2 | 2 | 100% |
| Blog Auditor | 5 passed, 1 flagged | 6 audits | — |
Markdown-First Authoring
A fundamental shift from “programming your automation” to “describing your automation” — write your intent in Markdown, compile it to a hardened workflow.
“The concept behind GitHub Agentic Workflows is straightforward: you describe the outcomes you want in plain Markdown, add this as an automated workflow to your repository, and it executes using a coding agent in GitHub Actions.”
Workflow File Structure
📋 YAML Frontmatter
Configuration between `---` markers:
- Triggers (`on:`) — schedule, events, manual
- Permissions — read-only by default
- Tools — `github`, `browser`, `web-search`
- Safe Outputs — pre-approved write operations
- Engine configuration — `copilot`, `claude`, `codex`
📝 Markdown Instructions
Natural language task description:
- What to analyze or investigate
- Style and format guidance
- Process steps (guidance, not rigid procedures)
- Scope and specificity controls
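Putting the two halves together, a minimal file might look like this sketch. It uses only frontmatter options mentioned above; the workflow name and task are hypothetical, and the authoritative schema is the gh-aw reference:

```markdown
---
on:
  schedule: daily
permissions: read-all
engine: copilot
safe-outputs:
  create-issue:
tools:
  github:
---

# Repo Activity Digest

Summarize notable activity from the past day and open one concise
issue for maintainers, linking to the relevant items.
```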
The Compilation Flow
Both the .md source file and the .lock.yml compiled file are committed to the repository. The .md is the editable source of truth, while the .lock.yml is the compiled, security-hardened version that actually executes.
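For a hypothetical workflow named `daily-report`, the committed pair sits side by side (illustrative layout; paths may vary):

```
.github/workflows/
├── daily-report.md        # editable source of truth
└── daily-report.lock.yml  # compiled, security-hardened; runs in Actions
```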
“Your workflows can range from very general (‘Improve the software’) to very specific (‘Check that all technical documentation and error messages for this educational software are written in a style suitable for an audience of age 10 or above’).”
Agentic vs. Traditional Workflows
A side-by-side comparison across 10 dimensions showing how agentic workflows differ from — and complement — traditional GitHub Actions.
| Dimension | ⚙️ Traditional GitHub Actions | ⚡ GitHub Agentic Workflows |
|---|---|---|
| Authoring | YAML with explicit step definitions | Markdown with natural language intent |
| Logic | Deterministic if/then rules | AI-driven contextual reasoning |
| Behavior | Fixed — same input = same output | Adaptive — interprets context flexibly |
| Decision Making | Pre-programmed steps | Agent makes autonomous decisions |
| Outputs | Direct writes with granted permissions | Controlled “safe outputs” with validation |
| Security Model | Permissions granted to workflow | Read-only by default, explicit safe outputs |
| Use Cases | Builds, tests, deployments, linting | Triage, documentation, code quality, reporting |
| Execution | Steps run sequentially as coded | Agent reasons through instructions |
| Error Handling | Explicit error handlers | Agent adapts based on context |
| Review | Automated outputs applied directly | Human review required for all mutations |
“Traditional workflows execute pre-programmed steps with fixed if/then logic. They do exactly what you tell them, every time, in the same way. Agentic workflows use AI to understand context, make decisions, and generate content by interpreting natural language instructions flexibly.”
“Don’t use agentic workflows as a replacement for GitHub Actions YAML workflows for CI/CD. This approach extends continuous automation to more subjective, repetitive tasks that traditional CI/CD struggle to express.”
Supported Coding Agents
The same workflow format works across all supported engines — choose the coding agent that best fits your needs.
GitHub Copilot CLI
Default engine. GitHub’s native coding agent. Each run typically incurs two premium requests: one for agentic work, one for guardrail checks. Models can be configured to manage costs.
`engine: copilot`
Claude Code
Supported. Anthropic’s Claude agent. Requires an `ANTHROPIC_API_KEY` secret added to the repository.
`engine: claude`
OpenAI Codex
Supported. OpenAI’s coding agent. Requires an `OPENAI_API_KEY` secret added to the repository.
`engine: codex`
Tools & Model Context Protocol (MCP)
All engines interact with repository data through the Model Context Protocol (MCP) — a standardized protocol for connecting AI agents to external tools and services.
```yaml
tools:
  github:      # GitHub MCP server
  browser:     # Playwright-based browser automation
  web-search:  # Web search capabilities
```
- Repositories: code, files, branches, commits
- Issues & PRs: create, read, comment, label
- GitHub Actions: workflow runs, jobs, logs
Key Terminology
Essential terms and concepts for working with GitHub Agentic Workflows.
Core Terms
Frontmatter
Configuration between `---` markers at the top of a workflow Markdown file. Defines triggers, permissions, tools, engine, and safe outputs.
Lock File
The compiled `.lock.yml` workflow produced by `gh aw compile`. Both files are committed to the repository.
Design Patterns
ChatOps
Triggered by comment commands (`/fix`, `/review`)
DailyOps
Scheduled daily workflows (reports, cleanup)
IssueOps
Triggered by issue events
DataOps
Workflows that process and analyze data
ProjectOps
Project management automation
MultiRepoOps
Workflows spanning multiple repositories
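As a sketch, a ChatOps-style workflow could be driven from the standard GitHub Actions `issue_comment` event. The trigger syntax here is an assumption (gh-aw may provide a dedicated command trigger; check the reference), and the task is hypothetical:

```markdown
---
on:
  issue_comment:
    types: [created]
permissions: read-all
safe-outputs:
  add-comment:
tools:
  github:
---

# Review Helper

If the triggering comment starts with `/review`, read the linked pull
request and post a single comment summarizing risks and suggested
tests. Otherwise, do nothing.
```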
Complete Workflow Examples
Real workflow files from official sources — copy, adapt, and run them in your own repository.
📊 Example 1: Daily Repository Status Report
The canonical example from the official GitHub Blog — generates a daily maintainer report as a GitHub issue.
```markdown
---
on:
  schedule: daily
permissions:
  contents: read
  issues: read
  pull-requests: read
safe-outputs:
  create-issue:
    title-prefix: "[repo status] "
    labels: [report]
tools:
  github:
---

# Daily Repo Status Report

Create a daily status report for maintainers. Include

- Recent repository activity (issues, PRs, discussions, releases, code changes)
- Progress tracking, goal reminders and highlights
- Project status and recommendations
- Actionable next steps for maintainers

Keep it concise and link to the relevant issues/PRs.
```
🔍 Example 2: Issue Clarifier
Automatically asks for additional details when new issues are unclear.
```markdown
---
on:
  issues:
    types: [opened]
permissions: read-all
safe-outputs:
  add-comment:
  add-labels:
---

# Issue Clarifier

Analyze the current issue and ask for additional details if the issue is unclear.

- Read the issue's title and body.
- If the report is missing key details (reproduction steps, environment, expected/actual results), politely comment and request clarification.
- Add label `needs clarification` if applicable.
- If the issue is well-formed, add a `triaged` label and a brief acknowledgment comment.
```
📖 Example 3: Documentation Updater
From Peli’s Agent Factory — achieved a 96% merge rate across 59 proposed PRs.
```markdown
---
description: |
  Reviews and updates documentation to ensure accuracy and completeness.
  Compares docstrings with implementations and proposes fixes.
on:
  schedule:
    interval: daily
  push:
    branches: [main]
    paths: ['src/**', 'docs/**']
permissions:
  contents: read
  pull-requests: read
safe-outputs:
  create-pull-request:
    title-prefix: "[docs] "
    labels: [documentation, automated]
    protected-files: fallback-to-issue
tools:
  github:
---

# Daily Documentation Updater

Review the repository documentation and ensure it accurately reflects the current codebase.

## Tasks

1. Compare function docstrings with their implementations
2. Check that README sections match actual behavior
3. Verify code examples still work
4. Identify outdated references or deprecated features
5. Propose specific, targeted fixes via pull request

## Guidelines

- Make small, focused changes (one topic per PR)
- Preserve the original author's voice and style
- Only fix genuine inaccuracies, not stylistic preferences
- Include a clear description of what changed and why
```
📰 Example 4: Continuous AI News Report
A minimal example from the Continuous AI in Practice blog — shows how simple a workflow can be.
```markdown
---
on: daily
permissions: read
safe-outputs:
  create-issue:
    title-prefix: "[news] "
---

Analyze the recent activity in the repository and:

- create an upbeat daily status report about the activity
- provide an agentic task description to improve the project based on the activity.

Create an issue with the report.
```
Key Quotes from Official Sources
Words from the creators and leaders behind GitHub Agentic Workflows.
“Custom agents for offline tasks, that’s what Continuous AI is. Anything you couldn’t outsource before, you now can.”
“Any task that requires judgment goes beyond heuristics. Any time something can’t be expressed as a rule or a flow chart is a place where AI becomes incredibly helpful.”
“In the future, it’s not about agents running in your repositories. It’s about being able to presume you can cheaply define agents for anything you want off your plate permanently.”
“Designing for safety and control is non-negotiable. GitHub Agentic Workflows implements a defense-in-depth security architecture that protects against unintended behaviors and prompt-injection attacks.”
“The PR is the existing noun where developers expect to review work. It’s the checkpoint everyone rallies around.”
“Treat the workflow Markdown as code. Review changes, keep it small, and evolve it intentionally.”
“Documentation is where we challenged conventional wisdom. Can AI agents write good documentation?”
“AI-generated docs need human/agent review, but they’re dramatically better than no docs (which is often the alternative).”
“Test coverage went from ~5% to near 100%. 1,400+ tests were written across 45 days for about ~$80 worth of tokens.”