
# Configuration

Most users won't need this page; `vidpipe init` handles setup automatically. This is the full reference for when you want fine-grained control.


## The Simple Version

Most users just need a `.env` file with one key:

```env
OPENAI_API_KEY=sk-your-key-here
```

Everything else has sensible defaults. Run `vidpipe init` to generate a complete `.env` interactively.

> **Tip:** Configuration is resolved in priority order: CLI flags → environment variables → `.env` file → defaults.
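As a rough sketch (not vidpipe's actual code), this precedence behaves like chained shell fallbacks, where an empty higher-priority source falls through to the next:

```shell
#!/usr/bin/env bash
# Hypothetical resolution of the watch directory, highest priority first.
cli_flag=""            # value of --watch-dir, empty when the flag is not passed
WATCH_FOLDER=""        # environment (or .env) value, empty when unset
default="./watch"      # built-in default

watch_dir="${cli_flag:-${WATCH_FOLDER:-$default}}"
echo "$watch_dir"      # prints ./watch when neither flag nor env is set
```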


## CLI Parameters

### Positional Argument

| Argument | Description |
| --- | --- |
| `[video-path]` | Path to a video file to process. Implies `--once` mode. |

### Options

| Flag | Description | Default |
| --- | --- | --- |
| `--watch-dir <path>` | Folder to watch for new `.mp4` recordings | `$WATCH_FOLDER` or `./watch` |
| `--output-dir <path>` | Base directory for processed output | `$OUTPUT_DIR` or `./recordings` |
| `--openai-key <key>` | OpenAI API key (Whisper + agents) | `$OPENAI_API_KEY` |
| `--exa-key <key>` | Exa AI API key for web search in social posts | `$EXA_API_KEY` |
| `--once` | Process a single video (or next arrival) and exit | Off |
| `--brand <path>` | Path to `brand.json` config file | `$BRAND_PATH` or `./brand.json` |
| `-v, --verbose` | Enable debug-level logging | Off |
| `-V, --version` | Print version and exit | |

### Skip Flags

Disable individual pipeline stages:

| Flag | Skips |
| --- | --- |
| `--no-git` | Git commit/push after processing |
| `--no-silence-removal` | Dead-silence detection and removal |
| `--no-shorts` | Short clip extraction |
| `--no-social` | Social media post generation |
| `--no-medium-clips` | Medium clip (1–3 min) extraction |
| `--no-captions` | Caption generation and burning |
| `--no-social-publish` | Social media queue-build stage |

### Additional Flags

| Flag | Description |
| --- | --- |
| `--late-api-key <key>` | Override the Late API key |

Examples:

```bash
# Process without generating shorts or social posts
vidpipe --no-shorts --no-social /path/to/video.mp4

# Skip git (useful during testing)
vidpipe --no-git --watch-dir ./watch

# Transcription + summary only (skip everything optional)
vidpipe \
  --no-silence-removal \
  --no-shorts \
  --no-social \
  --no-captions \
  --no-git \
  /path/to/video.mp4
```

## Environment Variables

Set these in your shell or in a `.env` file in the working directory.

| Variable | Required | Description | Default |
| --- | --- | --- | --- |
| `OPENAI_API_KEY` | Yes | OpenAI API key for Whisper transcription (and agents when `LLM_PROVIDER=openai`) | |
| `LLM_PROVIDER` | No | LLM provider: `copilot`, `openai`, or `claude` | `copilot` |
| `LLM_MODEL` | No | Override the default model for the selected provider | Provider default |
| `ANTHROPIC_API_KEY` | When `LLM_PROVIDER=claude` | Anthropic API key | |
| `WATCH_FOLDER` | No | Directory to monitor for new video files | `./watch` |
| `OUTPUT_DIR` | No | Base output directory for processed videos | `./recordings` |
| `REPO_ROOT` | No | Repository root for git operations | Current working directory |
| `FFMPEG_PATH` | No | Absolute path to the `ffmpeg` binary | `ffmpeg` (from `PATH`) |
| `FFPROBE_PATH` | No | Absolute path to the `ffprobe` binary | `ffprobe` (from `PATH`) |
| `EXA_API_KEY` | No | Exa AI API key for web search in social posts | |
| `BRAND_PATH` | No | Path to `brand.json` | `./brand.json` |

### Social Publishing

| Variable | Required | Description | Default |
| --- | --- | --- | --- |
| `LATE_API_KEY` | For publishing | Late API key for social media publishing | |
| `LATE_PROFILE_ID` | No | Late profile ID (auto-detected if not set) | |
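In `.env` terms, enabling publishing might look like this (placeholder values; the profile ID can stay commented out to use auto-detection):

```env
LATE_API_KEY=your-late-api-key-here
# LATE_PROFILE_ID=your-profile-id
```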

## LLM Provider

VidPipe supports multiple LLM providers for AI agent features. Configure via environment variables:

| Variable | Required | Description | Default |
| --- | --- | --- | --- |
| `LLM_PROVIDER` | No | LLM provider to use: `copilot`, `openai`, or `claude` | `copilot` |
| `LLM_MODEL` | No | Override the default model for the selected provider | Provider default |
| `ANTHROPIC_API_KEY` | When `LLM_PROVIDER=claude` | Anthropic API key | |

### Per-Provider Setup

- **Copilot** (default): no extra configuration; requires an active GitHub Copilot subscription.
- **OpenAI**: set `LLM_PROVIDER=openai`. Uses the same `OPENAI_API_KEY` already required for Whisper transcription.
- **Claude**: set `LLM_PROVIDER=claude` and `ANTHROPIC_API_KEY=sk-ant-...`. Get a key at console.anthropic.com.
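For example, switching the agents to Claude is a two-line `.env` change (placeholder key shown):

```env
LLM_PROVIDER=claude
ANTHROPIC_API_KEY=sk-ant-your-key-here
```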

### Cost Tracking

The pipeline automatically tracks token usage and estimated cost for every LLM call. At the end of each run, a summary is printed showing total tokens, cost (USD for OpenAI/Claude, premium requests for Copilot), and breakdowns by provider, agent, and model. No configuration is needed — cost tracking is always on.


## Example .env file

```env
OPENAI_API_KEY=sk-your-api-key-here
WATCH_FOLDER=/home/you/Videos/Recordings
OUTPUT_DIR=/home/you/Content/processed
REPO_ROOT=/home/you/repos/vidpipe

# Optional: explicit FFmpeg paths
# FFMPEG_PATH=/usr/local/bin/ffmpeg
# FFPROBE_PATH=/usr/local/bin/ffprobe

# Optional: Exa AI for web search links in social posts
# EXA_API_KEY=your-exa-api-key-here
```

A `.env.example` file is included in the repository; copy it to get started:

```bash
cp .env.example .env
```

## Brand Customization

The `brand.json` file controls the visual identity and voice of all generated content: captions, social media posts, blog posts, summaries, and short clip descriptions. Customize it to match your personal or company brand.

### Location

Place `brand.json` in your project root. The tool looks for it in the current working directory by default. Override the path with:

- CLI flag: `--brand /path/to/brand.json`
- Environment variable: `BRAND_PATH=/path/to/brand.json`

If no `brand.json` exists, sensible defaults are used automatically (name: `"Creator"`, handle: `"@creator"`, neutral professional tone).

### Example brand.json

```json
{
  "name": "Your Name",
  "handle": "@yourhandle",
  "tagline": "Your tagline here",
  "voice": {
    "tone": "professional, friendly",
    "personality": "A knowledgeable content creator.",
    "style": "Clear and concise."
  },
  "advocacy": {
    "primary": ["Technology A", "Technology B"],
    "interests": ["Topic 1", "Topic 2"],
    "avoids": ["Negative comparisons", "Overly salesy language"]
  },
  "customVocabulary": [
    "ProperNoun",
    "TechTermThatWhisperMightMisspell"
  ],
  "hashtags": {
    "always": ["#AlwaysInclude"],
    "preferred": ["#Often", "#Used"],
    "platforms": {
      "tiktok": ["#TechTok"],
      "linkedin": ["#Innovation"],
      "instagram": ["#CodeLife"]
    }
  },
  "contentGuidelines": {
    "shortsFocus": "Highlight key moments and insights.",
    "blogFocus": "Educational and informative content.",
    "socialFocus": "Engaging and authentic posts."
  }
}
```

### Field Descriptions

| Field | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | string | Your display name, used in content attribution | `"Creator"` |
| `handle` | string | Social media handle, included in generated posts | `"@creator"` |
| `tagline` | string | Short bio/tagline for intros | `""` |
| `voice.tone` | string | Comma-separated tone descriptors for AI writing style | `"professional, friendly"` |
| `voice.personality` | string | Description of your public persona | `"A knowledgeable content creator."` |
| `voice.style` | string | How generated content should read | `"Clear and concise."` |
| `advocacy.primary` | string[] | Core technologies/brands you champion | `[]` |
| `advocacy.interests` | string[] | Broader topics the AI can reference | `[]` |
| `advocacy.avoids` | string[] | Things the AI should never include | `[]` |
| `customVocabulary` | string[] | Proper nouns and jargon sent to Whisper as a prompt hint to improve transcription accuracy | `[]` |
| `hashtags.always` | string[] | Included on every post, every platform | `[]` |
| `hashtags.preferred` | string[] | Commonly used; the AI picks the most relevant | `[]` |
| `hashtags.platforms` | object | Platform-specific hashtags (keys: `tiktok`, `youtube`, `instagram`, `linkedin`, `x`) | `{}` |
| `contentGuidelines.shortsFocus` | string | What moments to extract as short clips | `"Highlight key moments and insights."` |
| `contentGuidelines.blogFocus` | string | Blog post structure and angle | `"Educational and informative content."` |
| `contentGuidelines.socialFocus` | string | Social media writing strategy | `"Engaging and authentic posts."` |
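Whisper accepts a free-text prompt that biases transcription toward the terms it contains, and `customVocabulary` is passed along as such a hint. How vidpipe formats that prompt is internal, but the idea is roughly joining the terms into one string:

```shell
#!/usr/bin/env bash
# Hypothetical: join customVocabulary entries into a comma-separated prompt hint.
vocab=("ProperNoun" "TechTermThatWhisperMightMisspell")

prompt=$(printf '%s, ' "${vocab[@]}")
prompt=${prompt%, }    # strip the trailing ", "
echo "$prompt"         # ProperNoun, TechTermThatWhisperMightMisspell
```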

For full examples (developer, corporate, educator templates) and additional tips, see the Brand Customization Guide.


## Output Directory Structure

The `--output-dir` (default `./recordings`) is the base directory. Each video creates a subdirectory named after a slugified version of the original filename:

```
<output-dir>/
└── <video-slug>/
    ├── <video-slug>.mp4              # Original video copy
    ├── <video-slug>-edited.mp4       # After silence removal
    ├── <video-slug>-captioned.mp4    # With burned-in captions
    ├── README.md                     # AI summary with screenshots
    ├── transcript.json               # Word-level transcript
    ├── transcript-edited.json        # Adjusted transcript (post-silence-removal)
    ├── blog-post.md                  # Long-form blog post
    ├── thumbnails/
    │   └── snapshot-*.png            # Key-moment screenshots
    ├── shorts/
    │   ├── <short-slug>.mp4          # Extracted short clips
    │   ├── <short-slug>-captioned.mp4
    │   ├── <short-slug>-portrait.mp4 # 9:16 platform variant
    │   ├── <short-slug>.ass          # Caption file
    │   └── <short-slug>.md           # Clip description & metadata
    ├── medium-clips/
    │   ├── <clip-slug>.mp4           # 1–3 minute topic clips
    │   ├── <clip-slug>-captioned.mp4
    │   ├── <clip-slug>.ass
    │   └── <clip-slug>.md
    ├── chapters/
    │   ├── chapters.json             # Canonical chapter data
    │   ├── chapters-youtube.txt      # YouTube description timestamps
    │   ├── chapters.md               # Markdown table
    │   └── chapters.ffmetadata       # FFmpeg metadata format
    └── social-posts/
        ├── tiktok.md
        ├── youtube.md
        ├── instagram.md
        ├── linkedin.md
        └── x.md
```
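The exact slugification rules are internal to vidpipe, but the effect is roughly this sketch (assuming slugs use only lowercase letters, digits, and dashes):

```shell
#!/usr/bin/env bash
# Hypothetical slugify: drop the extension, lowercase, collapse other chars to dashes.
slugify() {
  local base="${1%.*}"                       # "My Demo Video.mp4" -> "My Demo Video"
  echo "$base" | tr '[:upper:]' '[:lower:]' \
    | sed -E 's/[^a-z0-9]+/-/g; s/^-+|-+$//g'
}

slugify "My Demo Video.mp4"   # my-demo-video
```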

## Common Configurations

### Content creator (full pipeline)

```bash
vidpipe \
  --watch-dir ~/Videos/Recordings \
  --output-dir ~/Content/processed \
  --brand ./brand.json \
  --verbose
```

### Quick transcription only

```bash
vidpipe \
  --no-silence-removal \
  --no-shorts \
  --no-social \
  --no-captions \
  --no-git \
  /path/to/meeting.mp4
```

### CI/CD or automation (non-interactive, no git)

```bash
OPENAI_API_KEY=sk-... vidpipe \
  --once \
  --no-git \
  --output-dir /tmp/output \
  /path/to/video.mp4
```

## Schedule Configuration

The `schedule.json` file defines when social media posts are published. It is generated automatically by the pipeline and can be managed via `vidpipe schedule`. For full details on scheduling and the review workflow, see the Social Publishing Guide.


## Troubleshooting

### "Missing required: OPENAI_API_KEY"

The tool requires an OpenAI API key. Provide it via:

- the `--openai-key sk-...` flag
- the `OPENAI_API_KEY` environment variable
- a `.env` file in the working directory

"ffmpeg: command not found"

FFmpeg is not on your system PATH. Either:

  • Install FFmpeg (see FFmpeg Setup)
  • Set FFMPEG_PATH and FFPROBE_PATH to the absolute paths of the binaries

### Verbose mode shows too much output

Verbose mode (`-v`) sets the log level to debug. If you only need it temporarily, pass the flag on the command line rather than setting it in `.env`.

### Watch mode doesn't detect files

Ensure the `--watch-dir` path exists and is writable. The watcher monitors for new `.mp4` files only; files that already exist when the watcher starts are not processed, as only newly created files trigger the pipeline. To process a pre-existing file, pass its path directly as the positional argument (which implies `--once`).