run Documentation
The run tool executes functions defined in a Runfile, exposing them both to your terminal and to AI agents via the Model Context Protocol (MCP). It is built for fast startup, polyglot scripts, and discoverable tooling.
Who this is for
- Developers who want a single, typed catalog of project tasks.
- AI users who need MCP-exposed tools without revealing implementation details.
- Teams that mix shell automation with Python/Node snippets in one file.
How this documentation is organised
- Getting started — installation and your first Runfile.
- Runfile syntax — functions, namespaces, and signatures.
- Arguments — how parameters map to shell variables and defaults.
- Variables — environment handling and scope rules.
- Attributes and interpreters — @desc, @os, @shell, and shebang precedence.
- Polyglot commands — mixing languages inside a Runfile.
- Command composition — combining tasks and propagating exit codes.
- CLI usage — flags, output formats, completions, and REPL.
- MCP integration — exposing tools to AI agents safely.
- Recipes — ready-made Runfile snippets.
- Reference — attribute and environment variable quick lookups.
- FAQ — fast answers to common questions.
Getting started
Follow these steps to install run and try your first Runfile.
Install
macOS/Linux (Homebrew)
brew tap nihilok/tap
brew install runtool
Windows (Scoop)
scoop bucket add nihilok https://github.com/nihilok/scoop-bucket
scoop install runtool
All platforms (Cargo)
cargo install run # or: cargo install runtool
Create your first Runfile
- Create a file named Runfile in your project root.
# @desc Deploy to an environment
# @arg env Target environment (staging|prod)
# @arg version Version to deploy (defaults to "latest")
deploy(env: str, version = "latest") {
echo "Deploying $version to $env..."
./scripts/deploy.sh $env $version
}
- Run it from your terminal:
run deploy staging
run deploy prod v2.1.0
- List available functions to check discovery:
run --list
- Install shell completions (auto-detects your shell):
run --install-completion
Working directory
run searches upward from the current working directory for a Runfile. To point at another project explicitly, use --working-dir /path/to/project.
Next steps
- Learn the Runfile syntax, arguments, and variables.
- Explore polyglot commands and command composition.
Runfile syntax
A Runfile is a catalog of callable functions. run looks for Runfile in the working directory (or parent directories) and injects its functions into an execution scope.
Function forms
- Inline: concise single-line commands
dev() cargo run
fmt() cargo fmt
build-release() cargo build --release
- Block: multi-line bodies without trailing backslashes
ci() {
echo "Running CI..."
cargo fmt -- --check
cargo clippy
cargo test
}
- The function keyword is optional; both build() and function build() are accepted.
Signatures
Declare parameters in the function header. They become shell variables during execution.
# @desc Deploy to an environment
deploy(env: str, version = "latest") {
echo "Deploying $version to $env"
}
- Named parameters become local shell variables inside the function. All standard bash variable features (${var:-default}, single-quote protection, etc.) work as expected.
- Parameters are positional; the order in the signature matches CLI arguments.
- Defaults make parameters optional.
- A rest parameter captures the remaining arguments:
echo_all(...args) echo "Args: $args"
- When forwarding ...args, use cmd $args to expand into separate tokens; cmd "$args" collapses them into a single argument. If you need to preserve the original token boundaries (including spaces) safely, prefer cmd "$@" instead, for example cargo test --package run "$@".
- Type annotations are optional: param: type. Supported types: str/string, int/integer, float/number, bool/boolean, object/obj/dict. Types drive MCP schema generation and, in polyglot functions, automatic value conversion.
- You can still reference legacy positional tokens ($1, $2) alongside named params.
- Named parameters work in polyglot functions too (Python, Node.js, Ruby). run auto-injects variables so you can use name directly instead of sys.argv[1]. See Polyglot commands.
See Arguments for mapping/defaults and Variables for scope and environment details.
Naming
Function names must start with a letter or _ and can contain letters, digits, _, :, or -. Hyphens are transparently mapped to underscores in the generated shell script, so build-release and build_release both work and call the same function.
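As a quick model of that mapping (run performs the rewrite internally when generating the shell script; the tr pipeline here is only an illustration, not run's actual code):

```shell
# Illustrative model of run's hyphen-to-underscore rewrite: a CLI-friendly
# name like build-release becomes a valid shell identifier.
cli_name="build-release"
shell_name=$(printf '%s' "$cli_name" | tr '-' '_')
echo "$shell_name"   # build_release
```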
Namespaces
Use colons to group related tasks. Invoke with spaces or colons.
docker:build() docker build -t app .
docker:logs(service = "app") docker compose logs -f $service
Run as run docker build or run docker:build.
Sourcing other files
The source directive merges functions from another file into the current Runfile:
source ./shared.run
source ~/.runfiles/helpers.run
- Paths can be relative (resolved from the Runfile's directory), absolute, or use ~/ for the home directory.
- Quoted paths are supported: source "path with spaces.run".
- Sourced files can themselves contain source directives (circular references are detected and skipped).
- If a sourced file doesn't exist, a warning is printed to stderr and execution continues.
- Functions defined later in the file override those from sourced files, so you can import a base set and selectively replace individual commands.
- source directives inside function bodies are passed through to the shell interpreter as normal shell source commands; only top-level directives are expanded by run.
# Import shared helpers, override one locally
source ./team-defaults.run
deploy() {
echo "Custom deploy for this project"
}
Comments and attributes
Lines beginning with # can hold human comments or attributes (e.g., # @desc). Attributes adjust behavior and metadata; see Attributes and interpreters.
Arguments
run maps CLI arguments into shell variables based on the function signature. This page covers how parameters, defaults, and rest arguments work.
Parameter mapping
- Signature parameters become shell variables with the same name.
greet(name) echo "Hello, $name!"
# run greet Alice -> prints "Hello, Alice!"
- Parameters stay positional: the first CLI token maps to the first parameter, and so on.
- Legacy $1, $2, $@ work alongside named parameters for backward compatibility.
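A minimal sketch of both access styles side by side (plain shell; the name="$1" assignment models the injection run performs):

```shell
# Named variable and legacy positional access pointing at the same token.
greet() {
  name="$1"                               # models run's named-parameter injection
  echo "Hello, $name! (also via \$1: $1)" # both spellings see the same value
}
greet Alice
```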
Defaults and required parameters
- Provide defaults in the signature to make parameters optional:
deploy(env, version = "latest") echo "Deploying $version to $env"
# run deploy staging -> uses version "latest"
# run deploy staging v1.2.3 -> overrides the default
- Calls missing a required argument fail before execution.
- Default values may be quoted; the target shell expands them.
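Inside a body you can also layer shell-level fallbacks on top of signature defaults; a small sketch (the REGION variable and its value are illustrative):

```shell
# ${var:-fallback} expands to the fallback when var is unset or empty.
REGION=""                        # simulate a value the caller never provided
region="${REGION:-us-east-1}"
echo "$region"   # us-east-1
```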
Rest parameters
Capture any remaining arguments into one variable:
echo_all(...args) echo "Args: $args"
# run echo_all foo bar -> Args: foo bar
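The difference between forwarding $args and "$@" comes down to word splitting. A plain-shell sketch (the args="$*" line models the single collapsed rest variable):

```shell
forward() {
  args="$*"                 # models the single rest variable
  printf '<%s>\n' $args     # unquoted: "one two" splits into two tokens
  printf '<%s>\n' "$@"      # "$@": original token boundaries preserved
}
forward "one two" three
```

The unquoted expansion prints three bracketed tokens; "$@" prints two, keeping the embedded space intact.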
Polyglot parameter mapping
Named parameters also work in Python, Node.js, and Ruby functions. run auto-generates variable declarations so you can use parameter names directly:
# @shell python
greet(name, greeting = "Hello") {
print(f"{greeting}, {name}!")
}
Type hints apply real conversions in polyglot scripts: int wraps in int()/parseInt()/.to_i, float wraps in float()/parseFloat()/.to_f, bool performs truth-checking, and object parses a JSON string via json.loads()/JSON.parse(). See Polyglot commands for the full mapping table.
Types in signatures
Type hints (str, int, float/number, bool, object/obj/dict) are used for MCP schema generation. In shell functions, conversion is up to your script. In polyglot functions (Python, Node.js, Ruby), typed parameters are automatically converted when injected:
- int/integer — parsed as an integer
- float/number — parsed as a floating-point number
- bool/boolean — parsed as a boolean (truthy: true, 1, yes)
- object/obj/dict — parsed from a JSON string into a native object/dict
Quoting and spaces
Arguments are passed as plain CLI tokens. Quote values containing spaces or shell-sensitive characters:
run deploy "staging us-east" "v1.2.3"
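A small sketch of why the quoting matters (print_env is a stand-in for any two-parameter function):

```shell
print_env() { echo "env=$1 version=$2"; }
print_env staging us-east v1.2.3     # unquoted: env only receives "staging"
print_env "staging us-east" v1.2.3   # quoted: the space survives as one token
```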
See Variables for environment handling and variable resolution.
Variables
Learn how variables are resolved when functions execute, including environment inheritance and Runfile-level scope.
Environment variables
Functions inherit the caller environment. Two key variables affect run itself:
- RUN_SHELL — override the default shell for execution. Defaults to bash on Unix/macOS (falls back to sh if bash is not available), pwsh (or powershell) on Windows.
- RUN_MCP_OUTPUT_DIR — directory for MCP output files when responses are truncated.
Built-in variables
run automatically injects the following variables into every function's execution scope:
- __RUNFILE_DIR__ — absolute path of the directory that contains the Runfile (or ~/.runfile) the function was defined in. Useful for constructing paths relative to the Runfile itself:
build() {
# Load config from the same directory as this Runfile
source "$__RUNFILE_DIR__/config.sh"
echo "Building from $__RUNFILE_DIR__"
}
For polyglot functions the variable is injected with the appropriate syntax for the target language (e.g. __RUNFILE_DIR__ = "/path" for Python/Ruby, const __RUNFILE_DIR__ = "/path"; for Node.js).
Runfile scope
- Top-level variables declared in a Runfile are visible to all functions.
- Sibling functions are injected into the execution scope, so you can call them by name.
Parameter variables
- Signature parameters become shell variables with matching names once the function starts.
- Legacy positional tokens ($1, $2, $@) remain available for backward compatibility.
Polyglot interpreters
When a function uses a shebang or @shell interpreter, arguments are forwarded positionally into that interpreter:
- Python: sys.argv[1], sys.argv[2], ...
- Node.js: process.argv[2], process.argv[3], ...
- Other interpreters receive the same argv array; defaults are applied before forwarding.
For interpreter selection rules, see Attributes and interpreters. For parameter behaviors, see Arguments.
Attributes and interpreters
Attributes live in comments (# @key value) and adjust how a function is exposed or executed. Interpreter selection can be declared via attributes or shebangs.
Descriptions and args
- @desc — one-line summary shown in listings and MCP tool schemas.
- @arg <name> [type] <description> — add human-readable parameter docs. Names should match the signature. Optional type keyword (string, integer, float/number, boolean, object/dict) sets the JSON schema type for MCP.
- @instructions <text> — top-level MCP guidance line appended to server initialize.instructions. This is single-line and repeatable; lines are aggregated in merged/source order.
# @desc Deploy to an environment
# @arg env Target environment (staging|prod)
# @arg version Version to deploy (defaults to "latest")
deploy(env: str, version = "latest") { ... }
Top-level MCP instruction example:
# @instructions Confirm target environment before deploy calls
# @instructions Prefer exact keywords when using memory recall queries
Platform guards
Limit a function to specific operating systems.
Attribute form (separate functions by OS):
# @os windows
clean() del /Q dist
# @os unix
clean() rm -rf dist
Interpreter selection
There are two ways to pick an interpreter for a function body:
- Shebang detection — first line inside the body
script() {
#!/usr/bin/env python3
import sys
print(sys.argv[1])
}
- @shell attribute — explicit, and takes precedence over a shebang
# @shell node
serve() {
console.log(process.argv[1] || 3000);
}
Supported interpreters include python, python3, node, ruby, pwsh, bash, and sh.
Precedence and resolution
- @shell overrides a shebang if both exist.
- If no interpreter is set, the function uses the default shell (RUN_SHELL or platform default).
For language-specific behaviors and argument forwarding, see Polyglot commands.
Polyglot commands
Mix shell, Python, Node, and other interpreters in a single Runfile.
How polyglot execution works
- Shebangs (#!/usr/bin/env python3, #!/usr/bin/env node) tell run which interpreter to use for the body.
- @shell overrides or replaces a shebang when you want explicit control.
- Named parameters declared in the function signature are auto-injected as variables in the script body. No manual unpacking of sys.argv or process.argv needed.
- Arguments are also forwarded positionally (sys.argv, process.argv, ARGV), so you can still use manual access when you prefer.
Named parameters in polyglot functions
When a polyglot function declares parameters in its signature, run generates a preamble that creates proper named variables in the target language. This matches how shell functions already get $name substitution.
# @shell python
greet(name, greeting = "Hello") {
print(f"{greeting}, {name}!")
}
run greet World -> Hello, World!
run greet World Hi -> Hi, World!
The same works for Node.js and Ruby:
# @shell node
greet(name, greeting = "Hello") {
console.log(`${greeting}, ${name}!`);
}
# @shell ruby
greet(name, greeting = "Hello") {
puts "#{greeting}, #{name}!"
}
What gets generated
For each named parameter, run prepends variable declarations before your script body:
| Feature | Python | Node.js | Ruby |
|---|---|---|---|
| Required param | name = sys.argv[1] | const name = process.argv[1]; | name = ARGV[0] |
| Default value | name = sys.argv[1] if len(sys.argv) > 1 else "default" | const name = process.argv.length > 1 ? process.argv[1] : "default"; | name = ARGV.length > 0 ? ARGV[0] : "default" |
| Rest param | args = sys.argv[2:] | const args = process.argv.slice(2); | args = ARGV[1..] |
| int type | int(sys.argv[1]) | parseInt(process.argv[1], 10) | ARGV[0].to_i |
| float type | float(sys.argv[1]) | parseFloat(process.argv[1]) | ARGV[0].to_f |
| bool type | sys.argv[1].lower() in ('true', '1', 'yes') | !['false','0',''].includes(...) | !['false','0',''].include?(...) |
| object type | json.loads(sys.argv[1]) | JSON.parse(process.argv[1]) | JSON.parse(ARGV[0]) |
When object is used, import json (Python) or require 'json' (Ruby) is added automatically. Node.js needs no extra import since JSON is a global.
Manual access still works
The $name text substitution and positional sys.argv/process.argv/ARGV access continue to work. Named parameters are additive and don't conflict with either approach.
# @desc Analyze a JSON file
analyze(file: str) {
#!/usr/bin/env python3
import sys, json
data = json.load(open(file)) # uses the auto-injected variable
# data = json.load(open(sys.argv[1])) # manual access also works
print(f"Records: {len(data)}")
}
# @desc Start a dev server
# @shell node
dev:server(port = "3000") {
const p = parseInt(port, 10); // uses the auto-injected variable
require('http').createServer((_, res) => res.end('ok')).listen(p);
console.log(`Listening on ${p}`);
}
Best practices
- Prefer named parameters over manual sys.argv/process.argv unpacking for clarity.
- Keep interpreter-specific logic inside dedicated functions; orchestrate with shell functions for portability.
- Use @desc and @arg for clearer MCP schemas.
See Attributes and interpreters for selection rules, and Command composition for combining polyglot functions with others.
Command composition
Compose functions to build richer workflows without duplicating logic.
Calling other functions
Call sibling functions directly; they are injected into the execution scope.
build() cargo build --release
test() cargo test
lint() cargo clippy
ci() {
echo "Running CI..."
lint || exit 1
test || exit 1
build
}
Key behaviors:
- Exit codes propagate; guard dependent steps with || exit 1 when needed.
- Top-level Runfile variables are visible to all functions.
Cross-language patterns
- Use shell functions to orchestrate calls into language-specific helpers.
- Combine platform guards with composition to select the right implementation per OS.
- Keep shared setup (env vars, temp dirs) in one function and reuse it across tasks.
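For example, a single setup function can own the shared environment and be called from every task that needs it (function names and the path here are illustrative):

```shell
setup() {
  : "${BUILD_DIR:=/tmp/run-demo-build}"   # default only if the caller didn't set it
  export BUILD_DIR
  mkdir -p "$BUILD_DIR"
}
package() {
  setup
  echo "Packaging into $BUILD_DIR"
}
unset BUILD_DIR   # demonstrate the default path
package
```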
For interpreter mixing patterns, see Polyglot commands.
CLI usage
Most commands follow run <function> [args...]. The CLI also offers discovery, completions, structured output, and an MCP server mode.
Common commands
- Call a function: run deploy staging v1.2.3
- List available functions: run --list
- Execute a script file directly: run ./script.run
- Start the interactive REPL (no args): run
Flags
- --list — print all callable functions in the current Runfile.
- --inspect — output the MCP JSON schema for all functions (descriptions, parameters, defaults).
- --show-script — print the generated shell script that would be executed, without running it. Useful for debugging parameter injection and transpilation.
- --serve-mcp — start the MCP server so AI agents can call your functions.
- --working-dir PATH (alias --runfile) — point run at a specific project directory.
- --output-format stream|json|markdown — choose how results are emitted; json/markdown use structured output when supported by the function.
- --install-completion [SHELL] — install shell completions (auto-detects if omitted).
- --generate-completion SHELL — print completion script without installing.
Output formats
- stream (default): stream stdout/stderr directly.
- json: emit structured results when a function returns them (falls back to streamed output otherwise).
- markdown: format structured results for MCP/AI-friendly rendering.
Completions
run --install-completion # detects shell
run --generate-completion zsh # print script for manual install
Supports bash, zsh, fish, and powershell.
Working with multiple Runfiles
run searches upward from the current directory. Use --working-dir to target a different project, or create a ~/.runfile for global utilities that are searched after the local Runfile.
MCP integration
run ships with a Model Context Protocol (MCP) server so AI agents can discover and invoke your Runfile functions as tools.
Why it works well
- Functions stay in your Runfile; only names, descriptions, and parameters are exposed to the agent.
- Typed signatures produce JSON schemas automatically.
- Output files are saved when responses are large, preventing context overrun while keeping full data accessible.
Start the MCP server
run --serve-mcp
This searches for the nearest Runfile (or ~/.runfile fallback) and exposes its functions.
Inspect the tool schema
run --inspect
Outputs the JSON schema that agents receive—useful for debugging descriptions, parameter types, and defaults. Supported types: str/string, int/integer, float/number, bool/boolean, object/obj/dict.
Configure MCP servers for your agents
Add an entry to your MCP config:
{
"mcpServers": {
"runtool": {
"command": "run",
"args": ["--serve-mcp"]
}
}
}
Runfile-provided server instructions
You can append project-specific guidance to MCP initialize.instructions using top-level @instructions lines:
# @instructions Confirm the active environment before running deploy tools
# @instructions Ask before any destructive action
Rules:
- Use top-level comment lines only (# @instructions ... or #@instructions ...).
- Directives inside function bodies are ignored.
- Lines are appended in merged/source order (global ~/.runfile first, then project Runfile, with sourced content in place).
Built-in MCP tools
Alongside your Runfile functions, three helpers are always available:
- set_cwd(path: string) — change the working directory for subsequent calls.
- get_cwd() — report the current working directory.
- run_docs(topic?: string) — fetch embedded Runfile/run documentation. Call with no arguments (or "index") to list available topics, or pass a topic slug such as "runfile-syntax" or "attributes-and-interpreters" to retrieve the relevant docs.
Built-in timeout parameter
Every Runfile-derived tool automatically receives an optional timeout parameter (integer, seconds):
{
"name": "deploy",
"inputSchema": {
"properties": {
"env": { "type": "string" },
"timeout": { "type": "integer", "description": "Optional timeout in seconds..." }
},
"required": ["env"]
}
}
- Omit it (or pass null) for no time limit; previous behaviour is unchanged.
- If the command exceeds the limit, the process is killed and a JSON-RPC error is returned.
- timeout is never forwarded to the shell function as a positional argument.
- If your Runfile already defines a parameter named timeout, that function will not be exposed via MCP. Rename the parameter to resolve the conflict.
Output files and truncation
- Long outputs are truncated in the MCP response to ~1200 characters (~300 tokens); the full text is saved to .run-output/ next to your Runfile.
- Override the output location with RUN_MCP_OUTPUT_DIR if you need a different directory.
Describing tools for agents
- Always include @desc and @arg comments so the schema is clear.
- Keep function names action-oriented (e.g., deploy, db:query, docs:build).
- Use defaults for optional inputs so agents can call tools with fewer arguments.
Security notes
- Agents see only the schema, never the function body. Secrets embedded in functions are not exposed via MCP.
- Use platform guards (@os) to avoid serving tools that cannot run on the host, or use polyglot node/python scripts (@shell).
Recipes
Copy-pasteable Runfile snippets for common workflows. Adjust names and paths to fit your project.
Docker workflows
# @desc Build the Docker image
docker:build() {
docker build -t myapp:latest .
}
# @desc Start services
docker:up() {
docker compose up -d
}
# @desc Tail logs for one or more services (defaults to app)
# @arg services Service names (optional)
docker:logs(...services) {
if [ $# -eq 0 ]; then
set -- app
fi
docker compose logs -f "$@"
}
Run with run docker build, run docker up, or run docker logs api web.
CI pipeline
# @desc Lint the project
lint() cargo clippy -- -D warnings
# @desc Run tests
test() cargo test
# @desc Build release binary
build() cargo build --release
# @desc Full CI
ci() {
echo "Running CI..."
lint || exit 1
test || exit 1
build
}
Polyglot data helpers
# @desc Analyze a JSON file
# @arg file Path to the JSON file
analyze(file: str) {
#!/usr/bin/env python3
import sys, json
data = json.load(open(sys.argv[1]))
print(f"Total records: {len(data)}")
}
# @desc Convert CSV to JSON
# @arg input Input CSV file
# @arg output Output JSON file
csv_to_json(input: str, output: str) {
#!/usr/bin/env python3
import sys, csv, json
rows = list(csv.DictReader(open(sys.argv[1])))
json.dump(rows, open(sys.argv[2], 'w'), indent=2)
print(f"Converted {len(rows)} rows -> {sys.argv[2]}")
}
Platform-specific commands
# @desc Clean build artifacts
# @os windows
clean() {
del /Q /S target\*
}
# @desc Clean build artifacts
# @os unix
clean() {
rm -rf target/
}
# @desc Open the project (portable)
open() {
case "$(uname -s)" in
Darwin) command open . ;; # "command" bypasses this function's own name
Linux) xdg-open . ;;
MINGW*|MSYS*|CYGWIN*) start . ;;
*) echo "Unsupported OS" >&2; return 1 ;;
esac
}
Deploy with dependencies
build() cargo build --release
# @desc Build and deploy
deploy(env: str, version = "latest") {
build || exit 1
echo "Deploying $version to $env"
./scripts/deploy.sh $env $version
}
Ad-hoc memory (SQLite recipe)
If you want memory-like behavior, you can do it with plain Runfile functions and an sqlite3 database. The top-level @instructions lines in this example are appended to MCP initialize.instructions, so the agent gets usage guidance
at session start.
# @instructions Use this SQLite recipe for facts discovered during a session that need to survive context compaction or carry across sessions (for example: resolved environment details, confirmed decisions).
# @instructions Prefer the host's own auto-memory (for example: MEMORY.md) for user preferences and workflow instructions that should be read at conversation start.
# @desc Create ad-hoc memory tables (idempotent)
memory:init(db = ".run-memory.db") {
sqlite3 "$db" "CREATE TABLE IF NOT EXISTS memories (id TEXT PRIMARY KEY, scope TEXT NOT NULL DEFAULT 'session', content TEXT NOT NULL, updated TEXT NOT NULL)"
sqlite3 "$db" "CREATE TABLE IF NOT EXISTS tags (memory_id TEXT NOT NULL, tag TEXT NOT NULL, PRIMARY KEY (memory_id, tag))"
sqlite3 "$db" "CREATE INDEX IF NOT EXISTS idx_memories_scope ON memories(scope)"
sqlite3 "$db" "CREATE INDEX IF NOT EXISTS idx_tags_tag ON tags(tag)"
}
# @desc Store or update a memory note
# @arg content Note to remember
# @arg scope session|project|global (default: session)
# @arg tags Comma-separated tags (optional)
# @arg id Optional existing ID to upsert
memory:store(content: str, scope = "session", tags = "", id = "", db = ".run-memory.db") {
memory:init "$db"
entry_id="${id:-m-$(date +%s)-$RANDOM}"
esc_content=$(printf "%s" "$content" | sed "s/'/''/g")
esc_scope=$(printf "%s" "$scope" | sed "s/'/''/g")
sqlite3 "$db" "INSERT INTO memories (id, scope, content, updated) VALUES ('$entry_id', '$esc_scope', '$esc_content', datetime('now'))
ON CONFLICT(id) DO UPDATE SET scope = excluded.scope, content = excluded.content, updated = datetime('now');"
sqlite3 "$db" "DELETE FROM tags WHERE memory_id = '$entry_id';"
IFS=',' read -ra parts <<< "$tags"
for tag in "${parts[@]}"; do
clean_tag=$(printf "%s" "$tag" | xargs)
[ -z "$clean_tag" ] && continue
esc_tag=$(printf "%s" "$clean_tag" | sed "s/'/''/g")
sqlite3 "$db" "INSERT OR IGNORE INTO tags (memory_id, tag) VALUES ('$entry_id', '$esc_tag');"
done
echo "$entry_id"
}
# @desc Recall notes by substring/scope/id
memory:recall(query = "", scope = "", limit = 20, id = "", db = ".run-memory.db") {
memory:init "$db"
where="1=1"
if [ -n "$query" ]; then
esc_query=$(printf "%s" "$query" | sed "s/'/''/g")
where="$where AND content LIKE '%$esc_query%'"
fi
if [ -n "$scope" ]; then
esc_scope=$(printf "%s" "$scope" | sed "s/'/''/g")
where="$where AND scope = '$esc_scope'"
fi
if [ -n "$id" ]; then
esc_id=$(printf "%s" "$id" | sed "s/'/''/g")
where="$where AND id = '$esc_id'"
fi
match_count=$(sqlite3 "$db" "SELECT COUNT(*) FROM memories WHERE $where;")
if [ "$match_count" = "0" ]; then
echo "No memories found for the provided filters." >&2
return 1
fi
sqlite3 -header -column "$db" "SELECT id, scope, content, updated
FROM memories
WHERE $where
ORDER BY updated DESC
LIMIT $limit;"
echo "Found $match_count matching memory note(s)." >&2
}
# @desc Forget a note by id
memory:forget(id: str, db = ".run-memory.db") {
memory:init "$db"
esc_id=$(printf "%s" "$id" | sed "s/'/''/g")
tags_deleted=$(sqlite3 "$db" "DELETE FROM tags WHERE memory_id = '$esc_id'; SELECT changes();")
memories_deleted=$(sqlite3 "$db" "DELETE FROM memories WHERE id = '$esc_id'; SELECT changes();")
if [ "$memories_deleted" = "0" ]; then
echo "No memory found for id '$esc_id'." >&2
return 1
fi
echo "Deleted memory '$esc_id' (tags removed: $tags_deleted)." >&2
}
# @desc Run store/recall/forget in one verification flow
# @arg content Note to round-trip verify
# @arg scope session|project|global (default: session)
# @arg tags Comma-separated tags (optional)
# @arg id Optional existing ID to upsert
memory:roundtrip(content: str, scope = "session", tags = "", id = "", db = ".run-memory.db") {
memory:init "$db"
stored_id=$(memory:store "$content" "$scope" "$tags" "$id" "$db")
esc_stored_id=$(printf "%s" "$stored_id" | sed "s/'/''/g")
esc_scope=$(printf "%s" "$scope" | sed "s/'/''/g")
esc_content=$(printf "%s" "$content" | sed "s/'/''/g")
recalled_count=$(sqlite3 "$db" "SELECT COUNT(*) FROM memories WHERE id = '$esc_stored_id' AND scope = '$esc_scope' AND content = '$esc_content';")
if [ "$recalled_count" = "0" ]; then
echo "Round-trip recall verification failed for '$stored_id'." >&2
return 1
fi
memory:forget "$stored_id" "$db"
echo "$stored_id"
echo "Round-trip verification succeeded for '$stored_id'." >&2
}
For more patterns, combine these with the guidance in Polyglot commands and Command composition.
Reference
Quick lookups for attributes, environment variables, and discovery rules.
Attribute summary
- @desc <text> — short description for listings and MCP tools.
- @arg <name> [type] <description> — document parameters (names should match the signature). Optional type can be string, integer, float/number, boolean, or object/dict.
- @os <unix|windows|macos|linux> — restrict a function to a platform.
- Platform branching: use separate # @os variants or branch inside the shell body (inline @macos {} style guards are not supported).
- @shell <interpreter> — force an interpreter (python3, node, pwsh, bash, sh, etc.). Overrides any shebang.
Source directive
- source <path> — merge functions from another file into the current Runfile. Paths are relative to the Runfile's directory, absolute, or ~/-prefixed. Only recognised at the top level (not inside function bodies). See Runfile syntax.
Runfile discovery and precedence
- --working-dir / --runfile if provided (no merging).
- Otherwise, project Runfile is merged with global ~/.runfile (project definitions override globals).
- Set RUN_NO_GLOBAL_MERGE=1 to disable merging and use only the project Runfile (or fall back to global if no project file exists).
Environment variables
- RUN_SHELL — default shell when no interpreter is specified. Defaults to bash on Unix/macOS (falls back to sh if bash is unavailable), pwsh (or powershell) on Windows.
- RUN_MCP_OUTPUT_DIR — directory for MCP output files when responses are truncated.
- RUN_NO_GLOBAL_MERGE — skip merging ~/.runfile into the project Runfile (useful for isolation/tests).
Output handling
- --output-format stream|json|markdown controls how results are emitted.
- Structured output is used when a function returns it (e.g., MCP-aware functions); otherwise output is streamed.
Interpreters
Supported interpreters include python, python3, node, ruby, pwsh, bash, and sh. Use a shebang or @shell to select one; @shell wins if both are present.
FAQ
How are arguments passed?
They follow the order in the function signature and become shell variables with matching names. Legacy $1, $2, $@ still work. See Arguments and Variables.
How do I choose an interpreter?
Add a shebang as the first line of the body or set # @shell <interpreter>. The attribute wins if both are present. Details in Attributes and interpreters.
Can I mix languages in one Runfile?
Yes. Each function can use its own interpreter. Use shell functions to orchestrate calls between Python/Node helpers; see Polyglot commands and Command composition.
How do I document tools for AI agents?
Add @desc and @arg comments. Run run --inspect to confirm the generated schema. More in MCP integration.
Where are Runfiles discovered?
run walks up from the current directory, then falls back to ~/.runfile. Override with --working-dir. See the reference.
What if my output is very large?
In MCP mode, long output is truncated in the response and the full text is written to .run-output/ (configurable via RUN_MCP_OUTPUT_DIR).
How do I install shell completions?
Run run --install-completion (auto-detects shell) or run --generate-completion SHELL for manual setup. See CLI usage.
How do I run tasks from another project?
Use run --working-dir /path/to/project <function> ... so discovery happens in that directory.
Are type hints enforced at runtime?
In shell functions, no — types power documentation and MCP schemas only. In polyglot functions (Python, Node.js, Ruby), int, float, bool, and object types trigger automatic conversion of CLI string arguments. Validate inside your function for anything beyond that.
Is it safe to keep secrets in functions when using MCP?
Theoretically, yes. MCP exposes only the schema (names, descriptions, parameter shapes). Function bodies are not sent to the agent. However, if secrets are in your project Runfile and agents have other tools to read files, there is nothing to stop them from reading the Runfile (as with a plaintext .env file). See MCP integration for security notes.