# piped.sh Pipeline API - AI Agent Guide

**Purpose:** This document provides concise technical documentation for AI agents to programmatically use the piped.sh pipeline API.

**Base URL:** `https://piped.sh`

**Authentication:** All requests require an API key via header:

- `X-API-Key: <api-key>` OR
- `Authorization: Bearer <api-key>`

**API Tokens:** API tokens contain configuration including quota limits, SMTP settings, and metadata.

**API Documentation:**

- [OpenAPI Specification](/sdk/openapi.yaml) - Complete API reference in OpenAPI 3.1 format (machine-readable, supports Swagger UI, Postman import, SDK generation)

**SDK Libraries:**

- [JavaScript/TypeScript SDK](/sdk/piped.js)
- [Ruby SDK](/sdk/piped.rb)
- [Python SDK](/sdk/piped.py)

## Pipeline Endpoint

**POST** `https://piped.sh/` or **GET** `https://piped.sh/?pipeline=<pipeline>`

Two ways to provide the pipeline:

1. **POST body (recommended):** `POST /` with the pipeline string in the body
   - Pipeline string goes directly in the POST body as `text/plain`
   - No query parameter needed - perfect for posting scripts!
2. **Query parameter:** `POST /?pipeline=<pipeline>` or `GET /?pipeline=<pipeline>`
   - Pipeline string is URL-encoded in the query parameter
   - Initial input (if any) goes in the POST body (GET requests require the pipeline in the query param)

## Pipeline Syntax

Commands are chained with `|` (pipe character). Each command can have:

- **Positional arguments:** Values without flags (e.g., URLs, patterns)
- **Flags:** Short (`-f`) or long (`--flag`) format
- **Key-value arguments:** `--key=value` format

## History Expansion (Bash-style `!` commands)

You can re-execute pipelines from history using bash-style `!` syntax. History expansion happens automatically before pipeline execution:

```bash
# Re-execute pipeline by ID
!123       # Execute pipeline from history entry ID 123

# Re-execute most recent pipeline
!!         # Execute last pipeline (same as !-1)

# Re-execute by relative position
!-1        # Execute last pipeline (most recent)
!-2        # Execute second-to-last pipeline
!-3        # Execute third-to-last pipeline

# Re-execute by prefix match
!echo      # Execute most recent pipeline starting with "echo"
!cat       # Execute most recent pipeline starting with "cat"
!cat /tmp  # Execute most recent pipeline starting with "cat /tmp"
```

**Expansion Rules:**

- `!<id>` - Expand to history entry with ID `<id>`
- `!!` - Expand to most recent history entry (same as `!-1`)
- `!-<N>` - Expand to Nth most recent entry (1 = most recent, 2 = second most recent, etc.)
- `!<prefix>` - Expand to most recent entry whose pipeline string starts with `<prefix>`
- Expansion only occurs if the pipeline string (after trimming whitespace) **starts** with `!`
- If the history entry is not found, returns error: `"History entry not found: !123"`
- Expansion is non-recursive (expanded pipelines don't get re-expanded)
- History must be enabled (not disabled via `history_limit: 0`)

**Examples:**

```bash
# View history to find an entry
history -n 5

# Re-execute entry 123
!123

# Re-execute the last pipeline
!-1

# Re-execute most recent pipeline starting with "cat"
!cat

# Combine with other commands
!123 | wc -w  # Re-execute !123 and count words in output
```

**Note:** History expansion only works if the pipeline string starts with `!` (after trimming leading whitespace). This simplifies the logic and improves performance.
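When using the query-parameter form, the pipeline string must be URL-encoded — including `!` history expansions, since `!`, spaces, and `|` are not safe in query strings. A minimal sketch of building such a URL (the pipeline string is illustrative; `python3` is used here only for the encoding step):

```shell
# URL-encode a pipeline string for the GET /?pipeline=<pipeline> form
pipeline='!cat | wc -w'
encoded=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$pipeline")
echo "https://piped.sh/?pipeline=${encoded}"
# → https://piped.sh/?pipeline=%21cat%20%7C%20wc%20-w
```

The POST-body form avoids this step entirely, which is why it is the recommended way to send pipelines.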
## Available Commands

**All Commands:** `ai`, `alias`, `api`, `awk`, `base64`, `cat`, `cp`, `crontab`, `csv`, `curl`, `cut`, `date`, `deno`, `diff`, `echo`, `exit`, `export`, `grep`, `head`, `history`, `html`, `jq`, `ls`, `mail`, `man`, `mcp`, `md5`, `mv`, `project`, `rm`, `save`, `sed`, `sha256`, `sleep`, `sort`, `source`, `subscription`, `tail`, `tee`, `tool`, `touch`, `test`, `tr`, `uniq`, `wc`, `xargs`, `xpath`

**Operators:** `&&` (AND), `||` (OR), `|` (pipe), `;` or newline (statement separator)

**Escaping `$`:** Dollar variables (`$NAME`, `$1`, `$@`, etc.) are expanded during pipeline execution. To use a literal `$`, escape it with a backslash: `\$NAME`, `\$1`, `\$BLAH`. Single-quoted strings also prevent expansion: `'$NAME'` stays literal.

### `alias` - Token-Scoped Command Aliases

Aliases let you define reusable command prefixes (often for HTTP headers/tokens) that expand during pipeline execution.

**Pipeline usage:**

- `alias` - list aliases (Linux-like lines: `alias name=command`)
- `alias name` - print one alias (one line)
- `alias name=command` - set/update alias
- `alias name=` - unset/delete alias

**Direct HTTP management endpoints (recommended for agents):**

- `GET /aliases` → `text/plain` list of `alias name=command`
- `POST /aliases/{alias}` with `text/plain` body (command) → `alias name=command`
- `DELETE /aliases/{alias}` → `alias name=`

**Important rules/limits:**

- Aliases are **token-scoped** (private to the API key).
- Aliases are **not available** in the public playground.
- Alias RHS must be a **single command** only:
  - Rejects `|`, `&&`, `||`, `;`, `(`, `)`, `$(`, `!`, and newlines.
  - No chaining: alias RHS cannot start with another alias.
- Positional arguments are supported with shell-like variables:
  - `$1`-`$9` refer to the first nine arguments when the alias is invoked (e.g. `thatthis a b` makes `$1=a`, `$2=b`).
  - `$@` expands to all arguments as separate words; `$*` expands to all arguments as a single space-joined string.
  - Substitution happens outside single quotes only (e.g. `'$1'` remains the literal text `$1`).
  - To use a literal `$`, escape with a backslash: `\$1` stays literal.
- Limits: **max 200 aliases per token**, and **max 2048 characters** per alias command.

**Examples:**

```bash
# Create an alias (stores command securely; use env vars like $TOKEN if desired)
curl -X POST "https://piped.sh/aliases/grep-kevin" \
  -H "X-API-Key: <api-key>" \
  -H "Content-Type: text/plain" \
  --data-binary "grep -i kevin"

# Use it in a pipeline (expands only when first token matches)
curl -X POST "https://piped.sh/" \
  -H "X-API-Key: <api-key>" \
  -H "Content-Type: text/plain" \
  --data-binary "curl https://example.com | grep-kevin"
```

### `export` - Token-Scoped Environment Variables

Exports let you define reusable environment variables that expand during pipeline execution.

**Pipeline usage:**

- `export` - list exports (Linux-like lines: `export NAME="value"`)
- `export NAME` - print one export (one line)
- `export NAME=value` - set/update export (quotes optional)
- `export NAME=` - unset/delete export

**Direct HTTP management endpoints (recommended for agents):**

- `GET /exports` → `text/plain` list of `export NAME="value"`
- `POST /exports/{name}` with `text/plain` body (value) → `export NAME="value"`
- `DELETE /exports/{name}` → `export NAME=""`

**Important rules/limits:**

- Exports are **token-scoped** (private to the API key).
- Exports are **not available** in the public playground.
- Variable names must be **uppercase** (`^[A-Z][A-Z0-9_]*$`).
- Variables are expanded as `$NAME` or `${NAME}` during pipeline execution:
  - Expanded inside **double quotes** and **unquoted text**.
  - **Not expanded** inside single quotes.
  - To use a literal `$`, escape with a backslash: `\$NAME` stays literal.
- Expansion happens **after alias expansion** (aliases can reference `$VAR`).
- Limits: **max 200 exports per token**, and **max 2048 characters** per export value.
- Exports are **encrypted at rest**.
**Examples:**

```bash
# Create an export (stores value securely)
curl -X POST "https://piped.sh/exports/API_TOKEN" \
  -H "X-API-Key: <api-key>" \
  -H "Content-Type: text/plain" \
  --data-binary "secret123"

# Use it in a pipeline (expands as $API_TOKEN or ${API_TOKEN})
curl -X POST "https://piped.sh/" \
  -H "X-API-Key: <api-key>" \
  -H "Content-Type: text/plain" \
  --data-binary 'cat --header="Authorization: Bearer $API_TOKEN" https://api.example.com'
```

### `ai` - AI-Powered Text Processing

**Usage:** `ai <prompt>` or `ai "prompt"` or `ai --prompt="prompt"`

**Parameters:**

- `prompt` - The AI prompt/question (required)
  - Can be unquoted: `ai Find the best closer pitcher`
  - Or quoted: `ai "Find the best closer pitcher"`
  - Or flag format: `ai --prompt="Find the best closer pitcher"`

**Flags:**

- `--provider=<provider>` - Select AI provider (optional)
  - Valid values: `xai`, `openai`, `google`, `gemini` (alias for google), `anthropic`, or `claude` (alias for anthropic)
  - Overrides the token's default provider setting
  - In pipeline form, parsed and sent as the `provider` query param (so the correct provider is used and recorded in history)
  - Example: `ai --provider=openai "Analyze this data"` or `ai --provider claude what's the weather?`
- `--platform=<platform>` - Select API key source (optional)
  - Valid values: `user` (use your own API keys) or `piped` (use Piped's service keys)
  - Default: `user` if you have API keys configured, otherwise `piped`
  - Example: `ai --platform=piped "Use Piped service"`

**Behavior:**

- Takes the POST body as context
- Supports multiple AI providers: xAI (Grok), OpenAI (GPT), Google (Gemini), and Anthropic (Claude)
- The AI does **not** execute pipeline commands directly
- The AI can suggest pipeline commands by reading the full language specification documentation
- Returns an AI response with analysis/results and suggested pipeline commands that can be executed separately
- Costs 1 quota per request

**API Key Selection:**

- By default, uses your own API keys if configured, otherwise uses Piped's service keys
- Use `--platform=user` to explicitly use your own API keys
- Use `--platform=piped` to explicitly use Piped's service keys (AI-as-a-Service)
- Configure your own keys via the Settings UI or `bun scripts/token.ts`

**How the AI Works:**

- The AI performs analysis on piped input data according to the provided prompt
- The AI has access to the full pipeline language specification and can suggest appropriate commands based on your prompt and the input data
- Suggested pipelines can be executed separately by the user

**Examples:**

```
# Analyze CSV data (uses default provider and key source)
cat /tmp/test-data.csv | ai "Find the best closer pitcher based on ERA, WHIP, and saves"

# Use specific provider
cat /tmp/data.csv | ai --provider=openai "Analyze this data"

# Use Anthropic Claude
cat /tmp/data.csv | ai --provider=claude "Analyze this data"

# Use Piped's service keys
cat /tmp/data.csv | ai --platform=piped "Use Piped AI service"

# Combine flags
cat /tmp/data.csv | ai --provider=google --platform=piped "Use Piped's Gemini service"

# Process text with AI
cat /tmp/logs.txt | ai "Summarize the errors found in this log file"
```

**Note:** The AI suggests pipeline commands (like `csv`, `sort`, `grep`) that you can execute separately. The POST body becomes the context/data for the AI to analyze.
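In practice, an agent sends a pipeline such as the `ai` examples above as a plain-text POST body. A minimal wrapper sketch — the `piped` function name and the `PIPED_API_KEY` variable are our own conventions, not part of the API:

```shell
# POST a pipeline string to piped.sh as text/plain (see "Pipeline Endpoint").
# Assumes PIPED_API_KEY holds a valid API key.
piped() {
  curl -sS -X POST "https://piped.sh/" \
    -H "X-API-Key: ${PIPED_API_KEY:?PIPED_API_KEY is not set}" \
    -H "Content-Type: text/plain" \
    --data-binary "$1"
}

# Usage (requires a valid key and network access):
# piped 'cat /tmp/data.csv | ai --provider=openai "Analyze this data"'
```

Single quotes around the pipeline string keep `$` variables and `!` expansions literal in your local shell so they reach the server intact.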
### `history` - Pipeline Execution History

**Usage:** `history [flags]`

**Flags:**

- `-f`, `--failed` - Show only failed pipelines
- `-s`, `--success` - Show only successful pipelines
- `-n <N>`, `--limit <N>` - Limit number of results
- `--since <timestamp>` - Show entries since timestamp (ISO 8601 or Unix timestamp)
- `--until <timestamp>` - Show entries until timestamp (ISO 8601 or Unix timestamp)
- `--order <asc|desc>` - Sort order (asc = oldest first, desc = newest first)
- `-v`, `--verbose` - Show detailed output (single-line format with ` | ` separators for grep compatibility)
- `-t`, `--time` - Include execution time in default format
- `-b`, `--network` - Include network bytes transferred in default format
- `-c`, `--count` - Show only count of matching entries

**Behavior:**

- Displays history of executed pipelines in **sequential order (oldest first)** by default
- Use query param `order=desc` for newest first (e.g. the UI History tab)
- Pipeline strings are automatically stored after each execution
- History entries are encrypted and scoped to your API key
- Default limit: 1000 entries (configurable via `history_limit` in token config)
- Entries older than 30 days are automatically cleaned up
- History can be disabled by setting `history_limit` to `0` in token config

**Default Output Format:**

```
[ID] [STATUS] [TIME] [NETWORK] [TIMESTAMP] pipeline_string
```

**Verbose Output Format (`-v`):**

```
ID: 1 | Status: success | Execution Time: 0.125s | Output Size: 11 bytes | Network Bytes: 0 bytes | Created: 2025-01-15T10:30:00Z | Pipeline: echo hello | grep hello
```

**History Expansion (Bash-style `!` commands):**

You can re-execute pipelines from history using bash-style expansion:

- `!123` - Execute pipeline from history entry ID 123
- `!!` - Execute most recent pipeline (same as `!-1`)
- `!-1` - Execute last pipeline (most recent)
- `!-2` - Execute second-to-last pipeline
- `!prefix` - Execute most recent pipeline starting with `prefix`

**Expansion Rules:**

- Expansion only occurs if the pipeline string (after trimming whitespace) starts with `!`
- If the history entry is not found, returns error: `"History entry not found: !123"`
- If history is disabled (`history_limit: 0`), expansion is skipped (returns the original string)
- Expansion preserves leading whitespace and remaining pipeline content

**Examples:**

```
# Show all history
history

# Show last 10 entries
history -n 10

# Show only failures
history -f

# Show successful pipelines with execution time
history -s -t

# Show pipelines with network bytes
history -b -n 20

# Count total pipelines
history -c

# Show failures from last hour
history -f --since $(date -u -d '1 hour ago' +%s)

# Search for pipelines containing "grep"
history | grep grep

# Show verbose output for last entry
history -v -n 1

# Re-execute pipeline by ID
!123

# Re-execute most recent pipeline
!!

# Re-execute by prefix match
!cat

# Combine expansion with other commands
!123 | wc -w
```

**Note:** History is automatically recorded after each pipeline execution. Use `grep` for searching pipeline strings in history output.

**Deleting History Entries:**

History entries can be deleted via HTTP DELETE requests:

- **Delete all history:** `DELETE /history`
  - Deletes all history entries for the authenticated user
  - Returns JSON: `{"message": "All history entries deleted", "deleted_count": <count>}`
  - Example: `curl -X DELETE https://piped.sh/history -H "X-API-Key: <api-key>"`
- **Delete single entry:** `DELETE /history/{id}`
  - Deletes a specific history entry by ID
  - Returns JSON: `{"message": "History entry deleted", "id": <id>}`
  - Example: `curl -X DELETE https://piped.sh/history/123 -H "X-API-Key: <api-key>"`

**Note:** Deletion operations cannot be undone. Use the `history` command to view entries and find IDs before deleting.

### `save` - Manage Saved Pipelines

**Usage:** `save [name]` or `echo "pipeline" | save <name>`

Without arguments: lists saved pipeline names (one per line). With a name and no stdin: shows the existing pipeline content, or saves from history if the pipeline doesn't exist.
With stdin: saves piped content as the named pipeline (overwrites if it exists).

**Saved pipeline names cannot contain** `/`, `*`, `?`, or `%` (reserved for temp path resolution and date expansion). Same auth and quota as `POST /api/saved`.

**Examples:**

```
save
save | grep my-pipeline
save my-pipeline
echo "curl https://example.com | jq .title" | save my-pipeline
save BBC News Summary
```

### `crontab` - List, Install, or Remove Cron Schedules for Named Pipelines

**Usage:** `crontab -l` | `crontab --list` | `crontab -r` | `crontab --remove` | `crontab -tz [pattern]` | `crontab --timezone [pattern]` | `cat file | crontab` | `crontab file`

Manages cron schedules for named (saved) pipelines. Piped only talks to the cron server over HTTP.

- **List:** `crontab -l` or `crontab --list` — Lists pipelines that have a cron schedule and are confirmed **active** on the cron server. Output format: one line per pipeline, `CRON_SPEC TIMEZONE PIPELINE_NAME` (usable as install input). Pipelines with cron in notes but not confirmed by the server appear as comment lines (e.g. `# Name: not active (status: needs_auth)`).
- **Remove:** `crontab -r` or `crontab --remove` — Removes cron schedules from all saved pipelines. Can be combined with install (stdin or file) to remove first, then install fresh.
- **Timezone listing:** `crontab -tz` or `crontab --timezone` or `crontab --timezones` — Lists all IANA timezone names. Optionally provide a pattern for case-insensitive filtering (e.g. `crontab -tz americas` lists all timezones containing "americas").
- **Install from stdin:** Send crontab lines in the body. Blank lines and lines starting with `#` are ignored. Each data line: `CRON_SPEC TIMEZONE PIPELINE_NAME` (5 cron fields, IANA timezone, pipeline name; quote the name if it contains spaces). An empty cron spec removes the schedule. Strict: the first error fails the whole install.
- **Install from file:** `crontab file` — `file` is a temp path (e.g. `/tmp/crontab.txt`) or a saved pipeline name (that pipeline's body is the crontab text). Same behavior as install from stdin.

**Examples:**

```
crontab -l
crontab --list
crontab -r
crontab -tz
crontab -tz americas
crontab --timezone europe
cat newcron.txt | crontab --remove
echo "5 * * * * Europe/London Prayer" | crontab
cat /tmp/my-crontab.txt | crontab
crontab MyCrontabPipeline
```

### `api` - API Key Information and Configuration

**Usage:** `api` (GET/POST to view, PATCH to update config)

**Viewing API Key Info (GET/POST):**

- Returns information about the authenticated API key
- Outputs pretty-formatted JSON (2-space indentation)
- Requires authentication via the `X-API-Key` header
- Works with both GET and POST methods

**Updating Config (PATCH):**

- Update SMTP configuration and/or xAI API key
- Send a JSON body with `smtp` and/or `xai_api_key` fields
- Merges with existing config (preserves other fields)
- Returns updated config (excluding sensitive fields like passwords)

**PATCH Request Body:**

```json
{
  "smtp": {
    "host": "smtp.example.com",
    "port": 587,
    "user": "user@example.com",
    "pass": "password",
    "from": "noreply@example.com"
  },
  "xai_api_key": "xai-..."
}
```

**Examples:**

```bash
# View API key info
curl -X GET https://piped.sh/api -H "X-API-Key: <api-key>"

# Set xAI API key
curl -X PATCH https://piped.sh/api \
  -H "X-API-Key: <api-key>" \
  -H "Content-Type: application/json" \
  -d '{"xai_api_key": "xai-..."}'

# Set SMTP config
curl -X PATCH https://piped.sh/api \
  -H "X-API-Key: <api-key>" \
  -H "Content-Type: application/json" \
  -d '{"smtp": {"host": "smtp.example.com", "port": 587, "user": "user@example.com", "pass": "password", "from": "noreply@example.com"}}'

# Update both
curl -X PATCH https://piped.sh/api \
  -H "X-API-Key: <api-key>" \
  -H "Content-Type: application/json" \
  -d '{"smtp": {"host": "smtp.example.com", "port": 587}, "xai_api_key": "xai-..."}'

# Remove xAI API key (set to empty string)
curl -X PATCH https://piped.sh/api \
  -H "X-API-Key: <api-key>" \
  -H "Content-Type: application/json" \
  -d '{"xai_api_key": ""}'
```

**Response Fields:**

- `api_key` - Truncated API key (first 12 chars + "...")
- `email` - Email address (if set)
- `label` - Label (if set)
- `quota` - Quota information object:
  - `limit` - Quota limit
  - `used` - Current usage
  - `remaining` - Remaining quota
  - `reset_at` - ISO timestamp when quota resets (only present if quota_reset_at is set)
  - `reset_at_timestamp` - Unix timestamp (only present if quota_reset_at is set)
- `created_at` - ISO timestamp when token was created
- `created_at_timestamp` - Unix timestamp
- `is_active` - Boolean indicating if token is active
- `smtp` - SMTP configuration object (if configured):
  - `host` - SMTP hostname
  - `port` - SMTP port
  - `user` - SMTP username
  - `from` - From email address
  - Note: Password is never included for security

**Examples:**

```
api
```

**Example Response:**

```json
{
  "api_key": "pk_test1234...",
  "email": "user@example.com",
  "label": "production",
  "quota": {
    "limit": 1000,
    "used": 42,
    "remaining": 958
  },
  "created_at": "2025-12-28T09:00:00.000Z",
  "created_at_timestamp": 1735380000,
  "is_active": true,
  "smtp": {
    "host": "smtp.example.com",
    "port": 587,
    "user": "user@example.com",
    "from": "noreply@example.com"
  }
}
```

**Note:** Useful for checking quota status, token metadata, and SMTP configuration. The API key is truncated for security.

### `subscription` - Subscription Status and Usage

**Usage:** `subscription` or `subscription -j` or `subscription --json`

Displays subscription status for the authenticated token: plan name, price, status, renewal date, and (for AI subscription plans) current month input/output token usage. Requires authentication.

**Flags:**

- `-j`, `--json` - Output machine-readable JSON instead of human-readable text

**Examples:**

```
subscription
subscription --json
subscription -j | jq .subscription.plan_name
```

**Note:** Returns "No active subscription" (or `{"has_subscription":false}` with `--json`) when the token has no linked subscription. Pipeline-only; use `GET /api/proxy/subscription` for the session-based UI.

### `base64` - Base64 Encode/Decode

**Usage:** `base64` or `base64 -d` or `base64 --decode`

**Flags:**

- `-d`, `--decode` - Decode base64 (default: encode)

**Examples:**

```
cat data.txt | base64
echo "hello world" | base64
echo "aGVsbG8gd29ybGQ=" | base64 -d
cat encoded.txt | base64 --decode
```

**Note:** Encodes input to base64 by default. Use `-d` or `--decode` to decode base64 back to the original text.

### `curl` - Fetch from URLs

**Usage:** `curl <url> [url2 ...] [flags]`

Fetches content from one or more HTTP(S) URLs.

**Flags:** `--header "key:value"` (or `-H "key:value"`), `--bearer <token>` (shortcut for the Authorization: Bearer header), `--cookie "value"`, `--method METHOD` (or `-X METHOD`), `--data "body"` (or `-d "body"`), `--content-type <type>`, `--retry N`, `--retry-delay MS`, `--timeout MS`, `--no-redirects`, `--modified` (append Last-Modified to output).

**Behavior:**

- When used in a pipeline with POST/PUT/PATCH/etc. methods, piped input is automatically used as the request body if no explicit `--data`/`-d` flag is provided.
- Example: `echo '{"key":"value"}' | curl https://api.com -X POST` sends the JSON as the request body.

**Examples:**

```
curl https://example.com
curl https://api.com --bearer token | jq .result
curl https://api.com --header "Authorization: Bearer token" | jq .result  # equivalent
curl https://api.com -H "Authorization: Bearer token" | jq .result        # -H alias for --header
curl https://a.com https://b.com
curl https://httpbin.org/post --method POST --data "key=value"
curl https://httpbin.org/post -X POST --data "key=value"  # -X alias for --method
curl https://httpbin.org/post -X POST -d "key=value"      # -d alias for --data
echo '{"key":"value"}' | curl https://api.com -X POST     # piped input becomes request body
curl https://example.com | cat -n                         # number lines from URL
```

### `cat` - Output/Concatenate (temp files, saved pipelines, stdin)

**Usage:** `cat [path1] [path2] ... [flags]` or `cat /tmp/<file>` or `cat /tmp/<pattern>` or `cat <saved-name>` or `cat -`

**Resolution (per argument):** 1) Temp file (wildcards and date expansion supported), 2) Saved pipeline by name (exact, case-sensitive). Paths starting with `/tmp/` are always temp-only. Saved pipeline names cannot contain `/`, `*`, `?`, or `%` (reserved); cannot start with `/tmp` (reserved).

**Stdin:** Use `-` or `stdin` as an argument to read from piped input. Example: `cat /tmp/a - /tmp/b` (stdin between two files). With no arguments or only paths, piped input is ignored unless `-` or `stdin` is given.

**Flags:** `-n`, `--number`, `-b`, `--nonblank`, `-s`, `--squeeze`, `-S`, `--silent`, `--timestamp`, `--sha256`.

**Wildcards:** `*` and `?` in `/tmp/` paths.

**Examples:**

```
cat
cat -
cat /tmp/step1
cat /tmp/step1 /tmp/step2
cat /tmp/*.txt
cat /tmp/test?
cat MySavedPipeline
cat /tmp/step1 - /tmp/step2
```

**Squeeze flag (`-s`, `--squeeze`):**

- Collapses multiple consecutive empty lines into a single empty line
- Useful for cleaning up output with excessive blank lines
- Example: `"line1\n\n\n\nline2"` becomes `"line1\n\nline2"`
- Works before line numbering (if `-n` is also used)

### `csv` - CSV Parser

**Usage:** `csv [flags]`

**Flags:**

- `--select <columns>` / `--columns <columns>` - Select columns by name, index, or range (e.g., "name,age" or "1,3" or "1-3")
- `--filter <expression>` - Filter rows with enhanced expressions (e.g., "age > 18 AND status == 'active'")
- `--to-json`, `-j` - Convert to JSON
- `--delimiter <char>` - Custom delimiter (default: comma)
- `--header-row N` - Specify which row is the header (default: 0, first row)
- `--no-header` - Treat first row as data (no header)
- `--limit N` - Maximum number of rows to return
- `--offset N` - Number of rows to skip
- `--sort <columns>` - Sort by column(s) (e.g., "age", "age:desc", "dept:asc,age:desc")
- `--aggregate <functions>` - Aggregation functions (e.g., "sum(price),avg(age),count(),median(score)")
- `--group-by <column>` - Group by column for aggregations
- `--distinct <column>` - Get unique values for the specified column
- `--rename <mapping>` - Rename columns (e.g., "old:new,col1:name")
- `--types <mapping>` - Type conversion (e.g., "age:int,price:float,active:bool")
- `--skip-empty` - Skip rows where all fields are empty
- `--pretty` - Pretty-print JSON output

**Filter Expressions:**

- Comparison operators: `>`, `<`, `>=`, `<=`, `==`, `!=`
- Logical operators: `AND`, `OR` (with parentheses for grouping)
- String functions: `contains(col, 'value')`, `startsWith(col, 'value')`, `endsWith(col, 'value')`
- Empty checks: `isEmpty(col)`, `isNotEmpty(col)`
- Regex matching: `col ~= 'pattern'`
- Case-insensitive string comparisons
- Nested expressions: `(age > 18 AND status == 'active') OR role == 'admin'`

**Aggregation Functions:**

- `sum(column)` - Sum of numeric values
- `avg(column)` or `average(column)` - Average of numeric values
- `min(column)` - Minimum value
- `max(column)` - Maximum value
- `count()` - Count of rows
- `median(column)` - Median value
- `first(column)` - First value
- `last(column)` - Last value

**Type Conversion:**

- `int` - Integer
- `float` - Floating point number
- `bool` - Boolean (true/1/yes = true)
- `string` - String (default)

**Format Options:**

- Flags can use `=` format: `--select=name,email`
- Or quoted string format: `--select "name,email"` (with space)
- Same applies to `--filter` and `--delimiter`

**Examples:**

```
csv
csv --select "name,email"
csv --select=name,email                  # Alternative format
csv --columns "1,3"                      # By index (columns is alias for select)
csv --select "1-3"                       # Range
csv --filter "age > 18" --to-json
csv --filter "age > 18 AND status == 'active'" --to-json
csv --filter "contains(name, 'John')" --to-json
csv --filter "email ~= 'example\\.com$'" --to-json  # Regex
csv --delimiter "|" --select "1,2"
csv --delimiter=| --select=1,2           # Alternative format
csv --limit 10 --offset 5                # Pagination
csv --sort "age:desc"                    # Sort descending
csv --sort "dept:asc,age:desc"           # Multi-column sort
csv --aggregate "sum(price),avg(age),median(score)" --to-json
csv --aggregate "sum(price)" --group-by "category" --to-json
csv --distinct "category"                # Get unique values
csv --rename "old:new,col1:name"
csv --types "age:int,price:float" --to-json
csv --skip-empty                         # Skip empty rows
csv --filter "isEmpty(email)"            # Filter empty values
csv --filter "isNotEmpty(phone)"         # Filter non-empty values
csv --filter "(age > 18 AND status == 'active') OR role == 'admin'"  # Nested filter
csv --to-json --pretty                   # Pretty-print JSON
csv -j                                   # Short alias for --to-json
```

### `cut` - Extract Fields/Characters

**Usage:** `cut -d <delimiter> -f <fields>` or `cut -c <range>`

**Flags:**

- `-d`, `--delimiter` - Field delimiter (default: tab)
- `-f`, `--fields` - Field numbers/ranges (e.g., "1,3" or "1-3" or "-3" or "3-")
- `-c`, `--chars` - Character range (e.g., "1-10" or "-5" or "5-")
- `--output-delimiter` - Delimiter for output (default: same as input delimiter)
- `--complement` - Output fields NOT selected (invert selection)
- `--only-delimited` - Skip lines that don't contain the delimiter

**Format Options:**

- Flags can be attached: `-d','` or `-f4` (no space)
- Or separate: `-d ','` or `-f 4` (with space)
- Open ranges: `-f "-3"` (first 3 fields), `-f "3-"` (from field 3 to end)

**Examples:**

```
cut -d "," -f 1,3
cut -d',' -f4                             # Attached format
cut -c 1-10
cut --delimiter="|" --fields="1-2"
cut -d',' -f"1,3"                         # Attached with quotes
cut -d "," -f "-2"                        # First 2 fields
cut -d "," -f "3-"                        # From field 3 to end
cut -d "," -f 1,3 --output-delimiter="|"  # Custom output delimiter
cut -d "," -f 1,3 --complement            # All fields except 1 and 3
cut -d "," -f 1 --only-delimited          # Skip lines without comma
```

### `date` - Format Date/Time

Outputs formatted date/time using Unix `date`-style format specifiers. All dates use the GMT/UTC timezone (or the token timezone when set).

**Usage:** `date [format]` or `date +<format>` or `date -d <description> [format]`

**Options:**

- `-d`, `--date` - Format a specific time by description (e.g. "1 hour ago", "yesterday", an ISO or Unix timestamp). Base time is X-User-Time if sent, else token timezone, else UTC.

**Format specifiers:**

- `%Y` - Full year (4 digits, e.g., 2024)
- `%y` - Year (2 digits, e.g., 24)
- `%m` - Month (01-12)
- `%d` - Day of month (01-31)
- `%H` - Hour (00-23)
- `%I` - Hour (01-12) for 12-hour format
- `%M` - Minute (00-59)
- `%S` - Second (00-59)
- `%j` - Day of year (001-366)
- `%w` - Day of week (0-6, Sunday=0)
- `%W` - Week number (00-53)
- `%V` - ISO week number (01-53)
- `%a` - Abbreviated weekday name (Sun, Mon, Tue, etc.)
- `%A` - Full weekday name (Sunday, Monday, Tuesday, etc.)
- `%b` - Abbreviated month name (Jan, Feb, Mar, etc.)
- `%B` - Full month name (January, February, March, etc.)
- `%z` - Timezone offset (e.g., +0500, -0800)
- `%Z` - Timezone name/abbreviation (UTC or UTC+offset)
- `%p` - AM/PM
- `%r` - 12-hour time format (%I:%M:%S %p)
- `%R` - 24-hour time format (%H:%M)
- `%T` - 24-hour time format (%H:%M:%S)
- `%D` - Date format (%m/%d/%y)
- `%F` - ISO date format (%Y-%m-%d)
- `%s` - Unix timestamp (seconds since epoch)
- `%n` - Newline character
- `%t` - Tab character
- `%%` - Literal % character

**Examples:**

```
date                     # Returns GMT date string (e.g., "Mon, 15 Jan 2024 14:30:22 GMT")
date +%Y%m%d             # Returns "20240115"
date +'%Y-%m-%d'         # Returns "2024-01-15"
date +"%d-%m-%y"         # Returns "15-01-24"
date +"%Y-%m-%d-%H%M%S"  # Returns "2024-01-15-143022"
date +"%A %B %d, %Y"     # Returns "Monday January 15, 2024"
date +"%I:%M %p"         # Returns "02:30 PM"
date +"%F %T"            # Returns "2024-01-15 14:30:45"
date +"%s"               # Returns Unix timestamp (e.g., "1705327845")

# -d / --date: format a specific time by description
date -d "1 hour ago"
date -d "yesterday" +%F
date --date="2 days ago" +%s
```

**Notes:**

- When no format is provided, returns a GMT date string using `toUTCString()` format
- Format strings can be quoted (single or double) or unquoted
- All date calculations use the GMT/UTC timezone (or token timezone) for consistency
- Same format specifiers as used in temp file path expansion
- With `-d`/`--date`, unparseable descriptions return 400

### `diff` - Compare Files

**Usage:** `diff /tmp/<file1> /tmp/<file2>` or `diff <path1> <path2>` or `diff /tmp/<file>` (compares POST body with the file)

**Flags:**

- `--context N`, `-C N` - Number of context lines (default: 3)
- `--ignore-whitespace`, `-w` - Ignore whitespace differences

**Examples:**

```
diff /tmp/file1 /tmp/file2
diff MySavedPipeline /tmp/file.txt
cat data | diff /tmp/stored  # Compare POST body with stored file
diff /tmp/file1 /tmp/file2 --context 5
diff /tmp/file1 /tmp/file2 --ignore-whitespace
```

**Note:** Outputs unified diff format. Compares temp files and saved pipelines only (use `curl` to fetch URLs first).
If 2 paths are provided, compares them. If 1 path is provided, compares the POST body with that file.

### `echo` - Output Text

**Usage:** `echo [text]`

**Query params:**

- `text` - Optional text to output (if not provided, outputs input)

**Examples:**

```
echo
echo "hello world"
echo hello world
cat | echo
```

**Note:** If a text argument is provided, outputs that text. Otherwise, outputs its input (like Unix echo when piped).

### `grep` - Filter Lines

**Usage:** `grep <pattern> [flags]` (works on stdin, no filename argument)

**Dedicated Endpoint:** `POST /grep?match=<pattern>` - Filter text directly without pipelines

**Flags:**

- `-v`, `--invert` - Invert match (show non-matching lines)
- `-i`, `--ignore` - Case-insensitive matching
- `-c`, `--count` - Return count instead of lines
- `-o`, `--only-matching` - Output only the matched part of the line (not compatible with -v or -c)
- `-n`, `--line-number` - Prefix each output line with its line number
- `-w`, `--word` - Match whole words only (word boundaries)
- `-x`, `--line` - Match whole lines only
- `-m N`, `--max-count=N` - Stop after N matches
- `-A N` - Show N lines after each match (context)
- `-B N` - Show N lines before each match (context)
- `-C N` - Show N lines before and after each match (context)

**Examples:**

```
cat file.txt | grep hello
grep "error" -i
grep "test" -v -c
cat file.txt | grep -o "https?://[^\\s]+"
```

**Direct API Example:**

```bash
curl -X POST "https://piped.sh/grep?match=hello&i=1" \
  -H "X-API-Key: <api-key>" \
  -H "Content-Type: text/plain" \
  -d "Hello World
hello there
goodbye"
```

**Note:** Works on piped input only (no filename argument). Use `cat /tmp/file | grep pattern`. The `-q`/`--quiet` flag is not supported; use `test`/`exit` with `&&`/`||` for conditional logic.
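The flag semantics are modeled on Unix `grep`, so behavior can be previewed locally before building a pipeline. Using the same input as the Direct API example above:

```shell
# Count case-insensitive matches (-i -c)
printf 'Hello World\nhello there\ngoodbye\n' | grep -i -c hello
# → 2

# Invert the match (-v) to keep only non-matching lines
printf 'Hello World\nhello there\ngoodbye\n' | grep -v -i hello
# → goodbye
```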
### `history` - View Pipeline Execution History

**Usage:** `history [flags]`

**Flags:**
- `-f`, `--failed` - Show only failed pipelines
- `-s`, `--success` - Show only successful pipelines
- `-n <N>`, `--limit <N>` - Limit number of results
- `--since <timestamp>` - Show entries since timestamp (ISO 8601 or Unix timestamp)
- `--until <timestamp>` - Show entries until timestamp (ISO 8601 or Unix timestamp)
- `--order <asc|desc>` - Sort order (asc = oldest first, desc = newest first)
- `-v`, `--verbose` - Show detailed output (single-line format with ` | ` separators)
- `-t`, `--time` - Include execution time in default format
- `-b`, `--network` - Include network bytes in default format
- `-c`, `--count` - Show only count of matching entries

**Behavior:**
- Displays history of executed pipelines in **sequential order (oldest first)** by default; use `--order desc` for newest first
- History is automatically stored when pipelines are executed
- Default format: `[ID] [STATUS] [TIME] [OUTPUT_SIZE] [TIMESTAMP] pipeline_string`
- Verbose format: single line with ` | ` separators for grep compatibility
- History entries are encrypted and scoped to your API key
- Default limit: 1000 entries (configurable via `history_limit` in token config)
- Entries older than 30 days are automatically cleaned up

**Examples:**
```
history
history -n 10
history -f
history -s --since 2025-01-15T00:00:00Z
history -v -n 1
history -t -b -n 20
history -c
history | grep "grep"
```

**Note:** History is automatically recorded for all pipeline executions. Use `history_limit: 0` in token config to disable history recording.
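The bash-style `!` history expansion described earlier resolves against entries like those `history` prints. As an illustration only, a minimal Python sketch of the resolution rules (`!<id>`, `!!`, `!-N`, `!<prefix>`; the combined `!N | cmd` form and the non-recursive expansion detail are omitted):

```python
def expand_history(pipeline, history):
    """Hypothetical helper, not part of the API. `history` is a list of
    (id, pipeline_string) tuples, oldest first."""
    s = pipeline.strip()
    if not s.startswith("!"):
        return pipeline                       # expansion only when pipeline starts with '!'
    ref = s[1:]
    if ref == "!":                            # !! -> most recent entry
        return history[-1][1]
    if ref.startswith("-") and ref[1:].isdigit():
        return history[-int(ref[1:])][1]      # !-N -> Nth most recent entry
    if ref.isdigit():                         # !<id> -> entry by ID
        for entry_id, p in history:
            if entry_id == int(ref):
                return p
        raise LookupError(f"History entry not found: !{ref}")
    for _, p in reversed(history):            # !<prefix> -> newest matching prefix
        if p.startswith(ref):
            return p
    raise LookupError(f"History entry not found: {s}")

history_entries = [(1, "echo hi"), (2, "cat /tmp/a"), (3, "echo bye")]
```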
### `head` - Output First N Lines or Bytes

**Usage:** `head [-n N]` or `head -N` or `head --lines=N` (works on stdin, no filename argument)

**Flags:**
- `-n N`, `--lines=N` - Number of lines to output (default: 10)
- `-n -N` - All but last N lines (e.g., `-n -3` excludes last 3 lines)
- `-c N`, `--bytes=N` - Number of bytes instead of lines
- `-c -N` - All but last N bytes

**Examples:**
```
cat file.txt | head
cat file.txt | head -5
cat file.txt | head -n 10
cat file.txt | head -n -3
cat file.txt | head -c 100
cat file.txt | head --lines=20
```

**Note:** Works on piped input only (no filename argument). Outputs the first N lines; default is 10 if not specified.

### `html` - HTML Parser

**Usage:** `html <selector> [flags]` or `html --text`

**Flags:**
- `--selector` - CSS selector (tag, class, id, attribute)
- `--attr <name>` - Extract attribute value instead of text content
- `--all` - Return all matches (not just first)
- `--text` - Extract all text content from HTML (removes all tags, scripts, styles)
- `--html` - Return outer HTML of matched element(s) instead of text content
- `--inner` - Return inner HTML of matched element(s) instead of text content
- `--base <url>` - Prefix relative href URLs with base URL (auto-detected from `<base>` tag if not provided)

**Supported Selectors:**
- Tag: `div`, `p`, `h1`
- Class: `.classname`
- ID: `#idname`
- Attribute: `[href]`, `[href="value"]`, `[href^="prefix"]`, `[href$="suffix"]`, `[href*="contains"]`
- Descendant: `div p`
- Child: `div > p`
- Sibling: `h1 + p` (adjacent), `h1 ~ p` (general)
- Pseudo-classes: `:first-child`, `:last-child`, `:nth-child(n)`, `:not(selector)`
- Multiple: `div, p, h1`
- Combined: `div.classname`, `div#id`, `div[attr]`

**Examples:**
```
html "div.content"
html "#main"
html "a.link" --attr href --all
html --text                                  # Extract all text from page
html "article" --html                        # Get outer HTML (includes the <article> tag)
html "article" --inner                       # Get inner HTML (excludes the <article> tag)
html "li:nth-child(2)"                       # Select second list item
html "p:not(.intro)"                         # Select paragraphs without .intro class
html "a[href$='.pdf']" --attr href --all     # All PDF links
html "a" --html --base=https://example.com/  # Extract links with absolute URLs
```

### `jq` - JSON Processor

**Usage:** `jq [filter]`

Processes JSON input using the full [jq](https://jqlang.github.io/jq/) filter language (powered by jq-web/WASM). Supports pipes, functions, conditionals, object construction, array slicing, and more. Output is pretty-printed by default.

**Examples:**
```
jq .
jq ".users[0].name"
jq ".users | length"
jq "[.[] | .name]"
jq ".items | map(select(.active))"
jq ".users | sort_by(.age) | reverse"
jq '{name: .first, age: .years}'
```

### `ls` - List Temp Files

**Usage:** `ls [pattern] [flags]`

**Flags:**
- `-l`, `--long` - Show detailed info (size, created, expires)
- `-S`, `--sort=<field>` - Sort by `name`, `size`, `time`, or `expires` (default: time)
- `-r`, `--reverse` - Reverse sort order
- `-j`, `--json` - Output as JSON array

**Behavior:**
- Lists temp files in `/tmp/` storage (scoped to your API key)
- If a pattern is provided, it is date-expanded (same as tmp/tee; use the `X-User-Time` header) then matched (supports `*` and `?` wildcards)
- If no pattern: lists all temp files
- Default: one file path per line (format: `/tmp/<filename>`)
- Long format: path, size, created, expires (tab-separated)
- JSON format: array of objects with path, size, created_at, expires_at

**Wildcards:**
- `*` - Matches any sequence of characters
- `?` - Matches a single character

**Examples:**
```
ls
ls --long
ls --sort=size --reverse
ls --json
ls "*.txt"
ls "/tmp/test?.log"
ls "/tmp/backup-*"
```

**Note:** Only works with `/tmp/` files (scoped to your API key). The pattern can include the `/tmp/` prefix (it is stripped). Wildcards apply to the filename only. Output is empty if nothing matches.
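The `*`/`?` wildcard semantics above resemble shell globbing, so they can be sketched locally with Python's `fnmatch` (note: `fnmatch` also accepts `[seq]` character classes, which `ls` is not documented to support):

```python
from fnmatch import fnmatchcase

# Hypothetical filenames; '*' matches any run of characters, '?' exactly one.
files = ["test1.log", "test22.log", "backup-jan.tar", "notes.txt"]

def ls_match(pattern):
    """Filter filenames the way an ls glob pattern would (filename part only)."""
    return [f for f in files if fnmatchcase(f, pattern)]

print(ls_match("test?.log"))  # ['test1.log'] - 'test22.log' has two chars before .log
print(ls_match("backup-*"))   # ['backup-jan.tar']
```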
### `mv` - Rename Temp File or Saved Pipeline

**Usage:** `mv <source> <dest>`

**Behavior:**
- **Resolution:** The first argument is resolved in order: 1) temp file (wildcards `*`, `?` and date expansion supported; exactly one match required for mv), 2) saved pipeline by name (exact, case-sensitive), else fail. A temp file wins when both a temp file and a saved pipeline match the same name.
- **Temp branch:** Dest = temp path (date expansion only; overwrites if it exists).
- **Saved-pipeline branch:** Dest = new pipeline name (validated like save; renames the saved pipeline).

**Examples:**
```
mv draft.txt report-%Y-%m-%d.txt
mv report-%Y-%m-%d.txt archive-%Y-%m-%d.txt
mv report-%Y-%m-%d-*.log archive-%Y-%m-%d.log
mv MyPipeline RenamedPipeline
```

**API:** `POST /mv` - dedicated endpoint (source/dest as query params; temp files and saved pipelines). For temp-only: `PATCH /tmp/:path` with a **text/plain** body = new filename (date expansion via `X-User-Time`). No body = extend expiration. Add **`?cmd=cp`** to copy instead of rename (source remains). The pipeline form (e.g. `mv foo bar` in a pipeline) supports both temp files and saved pipelines by name.

### `cp` - Copy Temp File or Saved Pipeline

**Usage:** `cp <source> <dest>`

**Behavior:** Same resolution as `mv`: temp file first (wildcards, date expansion; exactly one match), then saved pipeline by name. The source file or pipeline remains; dest is a temp path or new pipeline name. Copying a saved pipeline creates a new pipeline (new ULID) with the same content.

**Examples:**
```
cp draft.txt report-%Y-%m-%d.txt
cp report-%Y-%m-%d.txt archive-%Y-%m-%d.txt
cp report-%Y-%m-%d-*.log archive-%Y-%m-%d.log
cp MyPipeline MyPipelineBackup
```

**API:** `POST /cp` - dedicated endpoint (source/dest as query params; temp files and saved pipelines). For temp-only: `PATCH /tmp/:path?cmd=cp` with a **text/plain** body = new filename (copy; source remains). The pipeline form supports both temp files and saved pipelines by name.
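The `%Y-%m-%d`-style date expansion in these paths follows strftime conventions, so the result for a given instant can be previewed locally. A sketch with a fixed UTC timestamp (the server uses UTC, the token timezone, or `X-User-Time`):

```python
from datetime import datetime, timezone

# Fixed instant for a reproducible preview of path date expansion.
now = datetime(2024, 1, 15, 14, 30, 22, tzinfo=timezone.utc)
expanded = now.strftime("report-%Y-%m-%d.txt")
print(expanded)  # report-2024-01-15.txt
```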
### `rm` - Remove Temp File or Saved Pipeline

**Usage:** `rm <path-or-name>`

**Behavior:**
- **Resolution:** The argument is resolved in order: 1) temp file (wildcards `*`, `?` and date expansion supported; deletes all matches), 2) saved pipeline by name (exact, case-sensitive), else fail. A temp file wins when both exist.
- Returns the deleted path(s) or name as plaintext; 404 with a JSON error if neither a temp file nor a saved pipeline matches.

**Examples:**
```
rm oldfile.txt
rm backup-%Y-%m-%d.log
rm "*.tmp"
rm MySavedPipeline
```

**API:** `POST /rm?path=<path>` - dedicated endpoint (temp files and saved pipelines; supports wildcards and date expansion for temp). The pipeline form supports both. For temp-only: `DELETE /tmp/<path>`.

**Note:** The pipeline form supports both temp files and saved pipelines. For bulk delete of temp files, use `ls` + `xargs rm` or multiple `rm` commands.

### `mail` - Send Email via SMTP

**Usage:** `mail -s "subject" <recipient>` or `mail --subject "subject" --to <recipient>`

**Flags:**
- `-s`, `--subject` - Email subject
- `--to <email>` - Recipient email address (required)
- `--html` - Transform piped input to HTML (detects markdown/text/HTML and converts markdown to HTML)
- `--test` - Output the raw SMTP message (RFC 5322 format) instead of sending (for debugging)

**Behavior:**
- Takes piped input as the email body
- Sends email via SMTP (configured on the API token)
- With `--html` flag: automatically detects whether input is text, markdown, or HTML
  - If markdown: converts to HTML
  - If HTML: leaves as-is
  - If text: leaves as-is (plain text email)
- With `--test` flag: outputs the raw email message with all headers (From, To, Subject, Date, Message-ID, MIME-Version, Content-Type) without sending

**SMTP Configuration:** SMTP settings are stored on each API token, allowing each customer to configure their own email service:
- `smtp_host` - SMTP server hostname (required)
- `smtp_port` - SMTP server port (default: 25)
- `smtp_user` - SMTP username (optional)
- `smtp_pass` - SMTP password (optional)
- `smtp_from` - From email address (default: piped@localhost)

**Examples:**
```
echo "Alert: System is down" | mail -s "System Alert" admin@example.com
cat report.txt | mail --subject "Daily Report" --to team@example.com
echo "# Hello\nThis is **markdown**" | mail --to=user@example.com --subject="Report" --html
echo "Test body" | mail --to=user@example.com --subject="Test" --test
```

**Note:** Requires SMTP configuration to be set on the API token (except with the `--test` flag). The email body comes from piped input.

### `project` - Export/Import Project as YAML

**Usage:** `project [name] [--export=<sections>]` (export) or `cat project.yaml | project [flags]` (import)

**Flags:**
- `[name]` - Project name to include in export (optional)
- `--export=<sections>` - Export only specified sections. Values: `all`, `pipelines` (or `saved`), `exports`, `aliases`, `tools`, or comma-separated (e.g., `tools,exports`)
- `--replace=<sections>` - Clear before import. Values: `all`, `pipelines` (or `saved`), `exports`, `aliases`, `tools`, or comma-separated (e.g., `exports,aliases`)

**Behavior:**
- Without input: exports the current project (pipelines, exports, aliases, tools) as YAML
- With `--export`: exports only the specified sections
- Without input + `--replace`: clears the specified data (fresh blank environment)
- With YAML input: imports project data (additive by default, merges with existing)
- With YAML input + `--replace`: clears the specified data types before import

**YAML Format:**
```yaml
project:
  name: optional-name
pipelines:
  - name: my-pipeline
    pipeline: echo hello | grep h
exports:
  - name: MY_VAR
    value: my-value
aliases:
  - alias: greet
    command: echo hello
tools:
  - name: piped
    type: mcp
    spec: '{"mcpServers":{"piped":{"type":"http","url":"https://piped.sh/mcp","headers":{"X-API-Key":"$PIPED_API_KEY"}}}}'
```

**Examples:**
```
# Export project as YAML
project
project "My Project"

# Export only specific sections
project --export=tools
project --export=pipelines,exports

# Import project (additive)
cat backup.yaml | project

# Import with replace
cat backup.yaml | project --replace=all
cat backup.yaml | project --replace=pipelines
cat backup.yaml | project --replace=saved,aliases

# Clear all data (fresh blank environment)
project --replace=all

# Clear only tools
project --replace=tools

# Export to file
project | tee /tmp/backup.yaml
```

**Note:** Useful for backup/restore, sharing configurations, and programmatic project management. The `replace` and `export` options support `saved` as an alias for `pipelines`.

### `awk` - Text Processing

**Usage:** `awk [program]` or `awk -F <separator> [program]`

Process text line-by-line using awk programs. Supports field splitting, pattern matching, built-in variables (NR, NF, $0, $1, etc.), and text transformations.

**Flags:**
- `-F <separator>`, `--field-separator=<separator>` - Set field separator (default: whitespace)

**Examples:**
```
awk '{print $1}'
awk -F, '{print $2}'
awk '/^error/'
awk '{print NR, $0}'
awk -F: '{print $1, $3}'
echo "alice,30\nbob,25" | awk -F, '$2 > 27 {print $1}'
```

### `deno` - Execute TypeScript/JavaScript

**Usage:** `deno /tmp/<script>` or `deno <name>` or `deno /tmp/<script> --allow-net=<hosts>`

Execute a TypeScript or JavaScript script stored in temp file storage or as a saved pipeline using Deno. The script receives piped input via stdin and returns output via stdout. `deno.land` is always accessible so scripts can import from the Deno module registry. No filesystem access or subprocess execution.

**Parameters:**
- `/tmp/<script>` or `<name>` - Temp file path or saved pipeline name of the script to execute (required). Resolution: temp file first, then saved pipeline by name.
- `--allow-net=<hosts>` - Comma-separated additional hosts to allow outbound network (e.g. `api.openai.com`). `deno.land` is always included. Localhost and private IPs are always blocked.

**Script pattern:**
```ts
const input = await new Response(Deno.stdin.readable).text();
// transform input...
await Deno.stdout.write(new TextEncoder().encode(result));
```

**Examples:**
```
echo 'hello world' | deno /tmp/transform.ts
cat /tmp/data.json | deno /tmp/process.ts
cat /tmp/data.json | deno /tmp/fetch.ts --allow-net=api.openai.com
echo 'hello world' | deno my-transform.ts
```

**Workflow:**
```
# 1. Write your script to temp storage
echo 'const input = await new Response(Deno.stdin.readable).text(); console.log(input.toUpperCase())' | tee /tmp/upper.ts

# 2. Run it
echo "hello world" | deno /tmp/upper.ts
# Output: HELLO WORLD
```

### `sed` - Stream Editor

**Usage:** `sed [expression]` (expression required, e.g. `s/old/new/` or `/pattern/d`)

**Expressions:**
- `s/pattern/replacement/flags` - Substitution (g = global, i = case-insensitive, p = print)
- `/pattern/d` - Delete lines matching pattern
- `Nd` - Delete line N (e.g., `1d` deletes line 1)
- `N,Md` - Delete lines N through M inclusive (e.g., `1,3d` deletes lines 1-3)
- `/pattern/p` - Print lines matching pattern (use with `-n` flag)
- `Np` - Print line N
- `N,Mp` - Print lines N through M
- `Na/text` - Append text after line N
- `N,Ma/text` - Append text after lines N through M
- `Ni/text` - Insert text before line N
- `N,Mi/text` - Insert text before lines N through M
- `Nc/text` - Change/replace line N with text
- `N,Mc/text` - Change/replace lines N through M with text

**Flags:**
- `-n`, `--quiet` - Suppress default output (only print explicitly requested lines)

**Substitution Special Characters:**
- `&` - Represents the matched text
- `\1`, `\2`, etc. - Backreferences to capture groups
- `\n` - Newline
- `\t` - Tab

**Examples:**
```
sed "s/old/new/g"
sed "s/\\d+/NUMBER/g"
sed "/error/d"
sed "1d"                          # Delete first line
sed "1,3d"                        # Delete lines 1-3
sed "s/hello/[&]/g"               # Wrap matches in brackets
sed "s/(\\w+) (\\w+)/\\2, \\1/"   # Swap two words using backreferences
sed "s/old/new/g;s/foo/bar/g"     # Multiple expressions
sed "2a/INSERTED"                 # Append text after line 2
sed "2i/INSERTED"                 # Insert text before line 2
sed "2c/CHANGED"                  # Replace line 2 with text
sed "/pattern/p" --quiet          # Print only matching lines
```

### `sha256` - Compute SHA-256 Hash

**Usage:** `sha256`

**Behavior:**
- Computes the SHA-256 hash of input
- Outputs a hexadecimal hash string
- No arguments needed

**Examples:**
```
cat data.txt | sha256
echo "hello world" | sha256
cat url | sha256 | tee /tmp/hash
```

**Note:** Useful for change detection, checksums, and monitoring. Compare hashes to detect when content changes.

### `md5` - Compute MD5 Hash

**Usage:** `md5`

**Behavior:**
- Computes the MD5 hash of input
- Outputs a hexadecimal hash string
- No arguments needed

**Examples:**
```
cat data.txt | md5
echo "hello world" | md5
cat url | md5 | tee /tmp/hash
```

**Note:** Use for compatibility or speed; for integrity/security use `sha256`.

### `sleep` - Delay (Pipeline Only)

**Usage:** `sleep <seconds>`

**Arguments:**
- `<seconds>` (required): Delay in seconds. Integer or decimal (e.g. `2`, `0.5`). Maximum 10 seconds.

**Behavior:**
- Waits for the given number of seconds (non-blocking), then passes input through unchanged.
- Pipeline-only (no standalone HTTP endpoint).
- Useful for rate limiting or adding delay between stages.

**Examples:**
```
echo hello | sleep 1 | cat
cat data | sleep 0.5 | curl https://example.com/webhook --method POST --data "$(cat)"
```

### `tool` - Manage Tools (OpenAPI Specs, MCP Configs)

**Usage:** `tool [name] [--delete] [--type=<type>]`

Manage registered tools (OpenAPI specs and MCP server configs) used by the `mcp` command.
**Behavior:**
- `tool` (no args) — list all tools (tab-separated: name, type)
- `tool <name>` (no stdin) — show the tool spec
- `cat spec | tool <name>` (with stdin) — create or update the tool (auto-detects type from the spec, or use `--type=openapi` / `--type=mcp`)
- `tool <name> --delete` — delete the tool

**Flags:**
- `--type=<type>` - Explicit tool type: `openapi` or `mcp`. Auto-detected from spec content if omitted.
- `--delete` - Delete the named tool.

**Examples:**
```
tool
tool my-api
cat /tmp/spec.yaml | tool my-api
cat /tmp/spec.yaml | tool --type=openapi my-api
tool my-api --delete
```

**Note:** Tools are token-scoped and encrypted at rest. Not available in the public playground. Use the `mcp` command to call tools registered as MCP servers.

### `mcp` - Call Tools on Remote MCP Servers

**Usage:** `mcp <server> <tool> [json-args]`

**Arguments:**
- `<server>` (required): Registered tool name (e.g. `piped`)
- `<tool>` (required): MCP tool name from the server's tool list (e.g. `execute_pipeline`)
- `[json-args]` (optional): JSON object of arguments. If omitted and stdin has content, stdin is passed as the `input` argument.

**Behavior:**
- Calls a tool on a registered MCP (Model Context Protocol) server
- The tool must be registered with `type: "mcp"` via the tools API
- Auth credentials (`$EXPORT` references in the MCP server config headers) are resolved internally — they never appear in pipeline text
- Uses the JSON-RPC protocol internally (`tools/call`)
- Returns the tool's text output
- On error, returns the MCP server's error message with HTTP 502

**Registering an MCP tool:**
```
POST /tools/piped
Content-Type: application/json
X-API-Key: <your-api-key>

{
  "type": "mcp",
  "spec": "{ \"mcpServers\": { \"piped\": { \"type\": \"http\", \"url\": \"https://piped.sh/mcp\", \"headers\": { \"X-API-Key\": \"$PIPED_API_KEY\" } } } }"
}
```

**Examples:**
```
# Call execute_pipeline on the "piped" MCP server
mcp piped execute_pipeline '{"pipeline":"ls"}'

# Read a temp file
mcp piped read_file '{"path":"/tmp/data.csv"}'

# Pipe output to other commands
mcp piped execute_pipeline '{"pipeline":"echo hello"}' | grep hello

# Pass stdin as the "input" argument (when no json-args given)
echo "hello world" | mcp piped execute_pipeline
```

**Note:** Requires authentication. Not available in the public playground. The `mcp` command does not consume AI tokens — it's a direct HTTP call to the MCP server. Only HTTP-based MCP servers are supported (not stdio). Use `$EXPORT` variables for API keys in the server config headers.

### `man` - Show Command Help

**Usage:** `man [command]`

**Behavior:**
- With a command: show the manual/help for that pipeline command (content from config/man.yaml).
- Without a command: list available commands.
- **GET** `/man?cmd=<command>` or **POST** `/man` with plaintext body = command name.
**Examples:**
```
man sleep
man cat
man
```

### `sort` - Sort Lines

**Usage:** `sort [flags]`

**Flags:**
- `-n`, `--numeric` - Numeric sort
- `-r`, `--reverse` - Reverse order
- `-u`, `--unique` - Remove duplicates

**Examples:**
```
sort
sort -n
sort -r -u
```

### `source` - Execute Pipeline from Temp Storage or Saved Pipeline

**Usage:** `source /tmp/<path>` or `source <name>`

**Arguments:**
- `/tmp/<path>` or `<name>` (required): Temp file path or saved pipeline name. Resolution: temp → saved pipeline by name. Paths starting with `/tmp/` are temp-only. Saved pipeline names cannot start with `/tmp` (reserved).

**Behavior:**
- Reads the pipeline definition from a temp file or saved pipeline
- Executes the fetched pipeline with stdin as input
- Returns the output of the pipeline execution
- **JS/TS auto-delegation:** If the path ends in `.ts` or `.js`, `source` automatically delegates to the `deno` command instead of executing the content as a pipeline. This means you can `source my-script.ts` to run TypeScript/JavaScript directly — no hashbangs or special syntax needed.
- To execute a pipeline from a URL: `curl <url> | tee /tmp/pipeline.txt && source /tmp/pipeline.txt`

**Pipeline Execution:**
- The fetched content is treated exactly as if posted directly to the root `/` endpoint
- Supports all pipeline features: multiple commands chained with `|`, sequential statements, nested `source` commands
- Stdin from the previous command becomes the initial input for the fetched pipeline

**Examples:**
```
# Execute pipeline from temp storage
cat data.txt | source /tmp/my-pipeline.txt

# Execute saved pipeline by name
cat data.txt | source MyPipeline

# Run a TypeScript saved pipeline (auto-delegates to deno)
echo "hello" | source my-transform.ts

# Run a JS temp file (auto-delegates to deno)
echo "hello" | source /tmp/process.js

# Store pipeline and reuse it
echo "cat | grep pattern | sort" | tee /tmp/my-pipeline.txt
cat data.txt | source /tmp/my-pipeline.txt

# Fetch pipeline from URL, store it, then execute
curl https://example.com/pipelines/process.txt | tee /tmp/process.txt
cat data.txt | source /tmp/process.txt
```

**Use Cases:**
- **Reusable pipelines**: Store common pipelines in temp storage or as saved pipelines
- **Pipeline composition**: Build complex pipelines from simpler ones
- **Maintainability**: Update a saved pipeline in one place; all uses pick up the change
- **TypeScript/JavaScript scripts**: Save a `.ts` or `.js` script and `source` it — runs via Deno automatically

**Note:** Use `curl` to fetch pipelines from URLs, then `tee` to store them, and `source` to execute.
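The `sha256` and `md5` commands above emit standard lowercase hex digests, so their output can be cross-checked locally. A Python check, assuming the piped body is exactly `hello world` with no trailing newline (if the input carries one, hash that exact byte sequence instead):

```python
import hashlib

data = b"hello world"  # the exact bytes the server would hash
print(hashlib.sha256(data).hexdigest())
print(hashlib.md5(data).hexdigest())
```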
### `tail` - Output Last N Lines or Bytes

**Usage:** `tail [-n N]` or `tail -N` or `tail --lines=N` (works on stdin, no filename argument)

**Flags:**
- `-n N`, `--lines=N` - Number of lines to output (default: 10)
- `-n +N` - Start from line N onwards (e.g., `+5` means from line 5 to end)
- `-c N`, `--bytes=N` - Number of bytes instead of lines
- `-c +N` - Start from byte N onwards

**Examples:**
```
cat file.txt | tail
cat file.txt | tail -5
cat file.txt | tail -n 10
cat file.txt | tail +5
cat file.txt | tail -c 100
cat file.txt | tail -c +50
cat file.txt | tail --lines=20
```

**Note:** Works on piped input only (no filename argument). Outputs the last N lines; default is 10 if not specified.

### `tee` - Write to Temp Storage

**Usage:** `tee /tmp/<path> [/tmp/<path2> ...]`

**Flags:**
- `--append`, `-a` - Append to existing file (default: overwrite)
- `--quiet`, `-q` - Don't output file contents to stdout (quiet mode)
- `--expire <time>`, `-e <time>` - Set expiration time. Supports suffixes: `s` (seconds), `m` (minutes), `h` (hours), `d` (days). A bare number defaults to hours. Max: 168h (7d). Default: 36h.

**Examples:**
```
cat data | tee /tmp/step1
cat data | tee /tmp/step1 /tmp/backup
cat data | tee /tmp/step1 | grep pattern
cat data | tee /tmp/log --append
cat data | tee -e 2h /tmp/short-lived
cat data | tee --expire 7d /tmp/week-long
cat data | tee -e 30m /tmp/brief
cat data | tee -q /tmp/quiet    # Write to file without outputting to stdout
```

**Note:**
- Writes to temp storage AND outputs to stdout (like Unix `tee`)
- Use the `-q` or `--quiet` flag to suppress stdout output (the file is still written)
- Files are scoped to your API key (private to your account)
- Files are compressed (gzip) and encrypted (AES-256-GCM) before storage using your API key
- Files expire after 36 hours by default (configurable per token or per-file with `--expire`)
- Maximum expiration is 168 hours (7 days)
- Files persist even if you regenerate your API key (ULID-based storage)

### `touch` - Create or Refresh Temp File Expiry

**Usage:** `touch /tmp/<path> [path2 ...] [--expire 36]`

Like Unix `touch`: creates an empty temp file if the path does not exist, or refreshes its expiration if it does. The content of existing files is unchanged.

**Flags:**
- `--expire <time>`, `-e <time>` - Expiration time. Same format as `tee`: bare number (hours) or suffix `s`/`m`/`h`/`d`. Default: 36h. Max: 168h (7d).

**Examples:**
```
touch /tmp/step1
touch /tmp/a /tmp/b --expire 2h
touch /tmp/marker --expire 7d
touch -e 30m /tmp/brief
```

**Note:**
- Path(s) support date expansion (e.g. `/tmp/log-%Y-%m-%d.txt`); use the `X-User-Time` header for expansion.
- Default expire is 36 hours (same as `tee`). Use `--expire` to set a different TTL for new or refreshed files.

### `find` - Find Temp Files Matching Criteria

**Usage:** `find /tmp/ [-name <pattern> | -regex <pattern> | -iregex <pattern>] [-mtime