TL;DR Fern's CLI generator produces a fully functional CLI from the same API spec you already use for Fern. It's available in early access today — reach out or read our docs if you'd like to ship one alongside your Docs and SDKs.
Why a CLI and why now
Coding agents like Claude Code, Codex, and Cursor already work the way developers do: they run tests, edit files, and invoke git from the shell. The command line is the native interface for that environment.
MCP solves a different problem: it gives shell-less clients like Claude.ai or ChatGPT a way to reach external systems. For agents that already have a terminal, a CLI is the path of least resistance.
The properties that make CLIs work for humans also make them work for agents. --help at every level lets an agent discover capabilities incrementally instead of loading hundreds of tool definitions upfront. Models trained on billions of shell scripts already know the conventions. And piping output through jq or grep keeps irrelevant data out of the context window, which MCP servers can't do as effectively.
Most API companies don't have a CLI because building a good one is a multi-quarter project. You define commands, wire up auth, handle pagination and retries, format output for both pipes and humans, build for five platforms, generate shell completions, and keep all of that in sync as the API evolves. Few teams have the headcount to staff that against the rest of their roadmap.
That's exactly the kind of work our customers don't want to staff, so we built it.
What it is
fernapi/fern-cli is a Fern generator. You add it to your generators.yml the same way you'd add any other Fern SDK generator:
# generators.yml
groups:
  cli:
    generators:
      - name: fernapi/fern-cli
        config:
          cli-name: acme # Top-level command
          display-name: "Acme CLI" # Human-readable name
When your spec changes, Fern opens a PR against your CLI repo with the generated source. On release, Fern automatically publishes the CLI to npm, Homebrew, and GitHub Releases so your users can install it with their package manager of choice. The CLI stays in sync with your Docs and SDKs, and you never need to write CLI code by hand.
The generator works against the same set of API specs Fern's SDK generators already support. OpenAPI works today; GraphQL, AsyncAPI, and gRPC are in active development for the CLI.
The architecture underneath is what matters. Most generators on the market are OpenAPI-only — they assume a single protocol and bake that assumption into the command tree. Ours doesn't, which is why adding GraphQL or gRPC is a matter of teaching the generator a new spec format rather than rewriting the runtime.
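To make that concrete, here is a minimal sketch of what a protocol-agnostic command tree can look like. The names (`Backend`, `CommandTree`) are hypothetical, not Fern's internal types; the point is that each command path maps to an abstract operation, so supporting a new protocol means adding a variant, not rewriting the tree or the runtime.

```rust
use std::collections::BTreeMap;

// Hypothetical sketch: a command tree keyed by command path, where the
// wire protocol is a detail of the resolved operation.
#[derive(Debug, Clone, PartialEq)]
enum Backend {
    Http { method: &'static str, path: &'static str },
    Grpc { service: &'static str, rpc: &'static str },
}

struct CommandTree {
    ops: BTreeMap<Vec<&'static str>, Backend>,
}

impl CommandTree {
    fn new() -> Self {
        Self { ops: BTreeMap::new() }
    }

    // Register a command path like ["users", "list"] against a backend.
    fn register(&mut self, path: &[&'static str], backend: Backend) {
        self.ops.insert(path.to_vec(), backend);
    }

    // Resolve a parsed command line to its abstract operation.
    fn resolve(&self, path: &[&'static str]) -> Option<&Backend> {
        self.ops.get(path)
    }
}

fn main() {
    let mut tree = CommandTree::new();
    tree.register(&["users", "list"], Backend::Http { method: "GET", path: "/users" });
    tree.register(&["users", "watch"], Backend::Grpc { service: "Users", rpc: "Watch" });

    // `acme users list` hits HTTP; `acme users watch` hits gRPC —
    // same tree, different backend variants.
    println!("{:?}", tree.resolve(&["users", "list"]));
}
```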
What your end users get is a single binary that installs through whichever channel they prefer:
npm install -g @acme/cli
export ACME_API_KEY=sk-...
eval "$(acme completions zsh)" # Tab completion
acme users list --format table
acme users create --json '{"name":"Alice"}' --dry-run
acme users create --format json --help # JSON schema for the command
Built with Rust
The CLI generator ships through the standard Fern pipeline — regenerated on every spec change — alongside our other SDK generators. The output is a single, statically linked Rust binary. Users drop it onto their PATH and run it: there’s no language runtime to install and no dependencies to resolve.
We picked Rust for two properties that matter for CLI tools: a small static binary that runs anywhere, and a library ecosystem we can expose as a crate.
Single binary. One statically linked file per platform. Our reference build for the Box Platform API — 279 methods, 1.5 MB OpenAPI spec embedded — is 8.1 MB. We produce macOS, Linux, and Windows builds for x86_64 and ARM64, distributed through npm, Homebrew, GitHub Releases, a curl-pipe installer, and cargo install. WinGet, Chocolatey, and APT are coming soon.
Library crate. The same engine that powers the generator is exposed as a Rust crate. A thin main.rs on top of the library gets you the same runtime behavior:
use fern_cli_sdk::CliApp;

fn main() {
    CliApp::new("acme")
        .spec(include_str!("openapi.yaml"))
        .auth_env("ACME_API_TOKEN")
        .run()
}
Use the generator if you want a CLI you don't have to think about. Use the library if you want custom commands, custom auth flows, or custom output — all sharing the same runtime.
Designed for agents
Humans and agents both use CLIs to do their work. They differ in how they figure out what to run, how they recover from mistakes, and how they're invoked.
Self-describing at runtime. Beyond standard --help, the CLI exposes machine-readable schemas: --help --format json returns per-endpoint JSON schemas, and a skill.md ships per command as a drop-in context bundle, the same way our SDKs ship reference.md.
Hardened against agent failure modes. Agents make mistakes that don't look like human typos, and the failures are predictable: double-encoded URLs, query parameters embedded in IDs, malformed path segments. The CLI normalizes and validates these inputs before sending a request, making it a safer interface for agent-driven workflows.
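As an illustration of that normalization step (a sketch under assumed behavior, not Fern's actual implementation), the following decodes an over-encoded ASCII ID to a fixed point and strips an accidentally embedded query string:

```rust
// Sketch: percent-decode one pass over an ASCII string.
fn percent_decode(s: &str) -> String {
    let bytes = s.as_bytes();
    let mut out = String::new();
    let mut i = 0;
    while i < bytes.len() {
        if bytes[i] == b'%' && i + 2 < bytes.len() {
            if let Ok(v) = u8::from_str_radix(&s[i + 1..i + 3], 16) {
                out.push(v as char);
                i += 3;
                continue;
            }
        }
        out.push(bytes[i] as char);
        i += 1;
    }
    out
}

// Undo double (or triple) encoding by decoding to a fixed point,
// then drop a query string an agent embedded in a path segment.
fn normalize_id(raw: &str) -> String {
    let mut s = raw.to_string();
    loop {
        let decoded = percent_decode(&s);
        if decoded == s {
            break;
        }
        s = decoded;
    }
    s.split('?').next().unwrap_or("").to_string()
}

fn main() {
    // "%2520" is "%20" encoded a second time; both layers come off.
    println!("{}", normalize_id("usr%2520one"));
    println!("{}", normalize_id("usr_42?limit=10"));
}
```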
CLI and MCP server in one. The same binary covers both interfaces: it runs as a CLI for terminal-based agents and as an MCP server over stdio or streamable HTTP for shell-less clients like Claude.ai and ChatGPT. Both modes share the same command engine and are generated from the same API spec, so the interfaces never drift. You can run it locally as a CLI, configure it as an MCP server in your agent tooling, or deploy it remotely for shared use.
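A minimal sketch of that dispatch (hypothetical: the real binary's flag or subcommand for MCP mode may differ, and `mode` is an illustrative helper):

```rust
use std::env;

// Hypothetical sketch: one binary, two entry modes, one command engine.
fn mode(args: &[String]) -> &'static str {
    match args.first().map(String::as_str) {
        Some("mcp") => "mcp-stdio", // speak MCP (JSON-RPC) over stdin/stdout
        _ => "cli",                 // parse args, call the API, print output
    }
}

fn main() {
    let args: Vec<String> = env::args().skip(1).collect();
    println!("running in {} mode", mode(&args));
}
```

Because both modes resolve commands through the same engine, an operation exposed to a terminal agent is by construction also exposed to MCP clients.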
None of this trades against human ergonomics. The same binary still produces colored help, tabular output, shell completions, and --dry-run previews when a person is running it interactively.
Request early access
If you're shipping an API that customers (or their agents) want to script against, reach out or read the docs. We're onboarding early access customers now.