Testing Node.js CLI Tools with oclif and Jest

Building a CLI tool with oclif is pleasantly structured. Each subcommand is a class with a run() method. Flags and args are declared statically. The framework handles parsing, help generation, and error formatting. The problem is that most teams treat this structure as incidental and test nothing until something breaks in production.

oclif is testable by design. Because commands are classes, you can instantiate them directly, mock their dependencies, and assert on their output without spawning a subprocess. This guide covers the full testing pyramid for oclif CLIs: unit tests for individual commands, integration tests with execa, and Jest setup to make it all work cleanly.

oclif Architecture Recap

An oclif command looks like this:

// src/commands/upload.ts
import { Command, Flags } from "@oclif/core";
import * as fs from "fs";
import { uploadFile } from "../services/uploader";

export default class Upload extends Command {
  static description = "Upload a file to the server";

  static flags = {
    file: Flags.string({
      char: "f",
      description: "Path to file",
      required: true,
    }),
    bucket: Flags.string({
      char: "b",
      description: "Target bucket",
      default: "default",
    }),
    "dry-run": Flags.boolean({
      description: "Print what would be uploaded without uploading",
      default: false,
    }),
  };

  async run(): Promise<void> {
    const { flags } = await this.parse(Upload);

    if (!fs.existsSync(flags.file)) {
      this.error(`File not found: ${flags.file}`, { exit: 1 });
    }

    if (flags["dry-run"]) {
      this.log(`DRY RUN: Would upload ${flags.file} to ${flags.bucket}`);
      return;
    }

    this.log(`Uploading ${flags.file} to ${flags.bucket}...`);
    await uploadFile(flags.file, flags.bucket);
    this.log("Upload complete.");
  }
}

The run() method is where all the work happens. It calls this.parse() to get validated flags, this.log() to write to stdout, this.error() to write to stderr and exit, and can call any service or module. The testability comes from being able to override these methods.
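Before wiring this up for a real oclif command, the core idea can be shown on a plain class: replace the instance's log method with a closure that appends to an array, then assert on the array. Greet here is a hypothetical stand-in for a command class, not part of oclif:

```typescript
// Sketch of the output-capture pattern on a plain class (Greet is hypothetical)
class Greet {
  log(msg: string): void {
    console.log(msg);
  }

  async run(name: string): Promise<void> {
    this.log(`Hello, ${name}!`);
  }
}

const captured: string[] = [];
const cmd = new Greet();

// Redirect output: messages land in `captured` instead of stdout
cmd.log = (msg: string) => {
  captured.push(msg);
};

cmd.run("world").then(() => {
  console.log(JSON.stringify(captured)); // → ["Hello, world!"]
});
```

This is exactly what the runCommand helper in the next section does, plus the oclif-specific constructor arguments and error handling.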

Unit Testing a Command Directly

You can instantiate a command class and call run() without spawning a process. The trick is redirecting this.log and this.error to capture output:

// src/commands/upload.test.ts
import { expect } from "@jest/globals";
import Upload from "./upload";

jest.mock("../services/uploader", () => ({
  uploadFile: jest.fn().mockResolvedValue({ url: "https://example.com/file" }),
}));

// Helper: run a command and capture output
async function runCommand(
  CommandClass: typeof Upload,
  argv: string[]
): Promise<{ stdout: string[]; stderr: string[]; exitCode: number }> {
  const stdout: string[] = [];
  const stderr: string[] = [];
  let exitCode = 0;

  // Create instance (oclif v3+: the constructor takes argv and a Config;
  // a bare object is enough for simple flag parsing in tests)
  const cmd = new CommandClass(argv, {} as any);

  // Override log/error methods to capture output
  cmd.log = (...args: any[]) => {
    stdout.push(args.join(" "));
  };
  cmd.warn = (input: string | Error) => {
    stderr.push(String(input));
  };
  cmd.error = (input: string | Error, options?: { exit?: number }) => {
    stderr.push(String(input));
    exitCode = options?.exit ?? 2;
    throw new Error(String(input)); // mimic oclif behavior
  };

  try {
    await cmd.run();
  } catch (err) {
    // Errors thrown by cmd.error are expected
  }

  return { stdout, stderr, exitCode };
}

describe("Upload command", () => {
  const testFile = "/tmp/test-upload-file.txt";

  beforeEach(() => {
    require("fs").writeFileSync(testFile, "test content");
  });

  afterEach(() => {
    require("fs").unlinkSync(testFile);
    jest.clearAllMocks();
  });

  it("uploads a file successfully", async () => {
    const { stdout, exitCode } = await runCommand(Upload, [
      "--file",
      testFile,
      "--bucket",
      "my-bucket",
    ]);

    expect(exitCode).toBe(0);
    expect(stdout).toContain("Uploading /tmp/test-upload-file.txt to my-bucket...");
    expect(stdout).toContain("Upload complete.");
  });

  it("prints dry run message without uploading", async () => {
    const { uploadFile } = require("../services/uploader");
    const { stdout, exitCode } = await runCommand(Upload, [
      "--file",
      testFile,
      "--dry-run",
    ]);

    expect(exitCode).toBe(0);
    expect(stdout[0]).toMatch(/DRY RUN/);
    expect(uploadFile).not.toHaveBeenCalled();
  });

  it("errors on missing file", async () => {
    const { stderr, exitCode } = await runCommand(Upload, [
      "--file",
      "/nonexistent/file.txt",
    ]);

    expect(exitCode).toBe(1);
    expect(stderr[0]).toMatch(/File not found/);
  });

  it("uses default bucket when not specified", async () => {
    const { uploadFile } = require("../services/uploader");
    await runCommand(Upload, ["--file", testFile]);

    expect(uploadFile).toHaveBeenCalledWith(testFile, "default");
  });
});

Using @oclif/test

@oclif/test provides a chainable interface built on fancy-test that handles stdout/stderr capture automatically:

npm install --save-dev @oclif/test

// src/commands/version.test.ts
import { test, expect } from "@oclif/test";

describe("version command", () => {
  test
    .stdout()
    .command(["version"])
    .it("prints the version", (ctx) => {
      expect(ctx.stdout).to.match(/\d+\.\d+\.\d+/);
    });

  test
    .stdout()
    .stderr()
    .command(["version", "--json"])
    .it("prints JSON output", (ctx) => {
      const parsed = JSON.parse(ctx.stdout);
      expect(parsed).to.have.property("version");
    });
});

Note: @oclif/test's chainable interface (v3 and earlier) uses Chai assertions (expect(...).to.match()), not Jest's expect(...).toMatch(). In @oclif/test v4 the chainable interface was replaced by runCommand and captureOutput helpers; the examples here assume v3. If you are using Jest as your test runner, you can still use @oclif/test for output capture but switch to Jest assertions by aliasing:

import { test } from "@oclif/test";
import { expect as jestExpect } from "@jest/globals";

test
  .stdout()
  .command(["version"])
  .it("prints the version", (ctx) => {
    jestExpect(ctx.stdout).toMatch(/\d+\.\d+\.\d+/);
  });

HTTP Mocking with nock

Commands that make HTTP requests need their network calls intercepted in tests:

npm install --save-dev nock

// src/commands/deploy.test.ts
import { test } from "@oclif/test";
import { expect } from "@jest/globals";
import nock from "nock";

describe("deploy command", () => {
  test
    .stdout()
    .nock("https://api.example.com", (api) =>
      api
        .post("/deployments", {
          environment: "staging",
          version: "1.2.3",
        })
        .reply(200, { deploymentId: "dep_abc123", status: "queued" })
    )
    .command(["deploy", "--env", "staging", "--version", "1.2.3"])
    .it("queues a deployment and prints the ID", (ctx) => {
      expect(ctx.stdout).toContain("dep_abc123");
    });

  test
    .stdout()
    .stderr()
    .nock("https://api.example.com", (api) =>
      api.post("/deployments").reply(401, { error: "Unauthorized" })
    )
    .command(["deploy", "--env", "staging", "--version", "1.2.3"])
    .catch(/Unauthorized/)
    .it("prints error on 401 response", (ctx) => {
      expect(ctx.stderr).toContain("Unauthorized");
    });
});

nock intercepts Node's http and https modules, so it works with axios, got, node-fetch, and any other client built on them. One caveat: Node's built-in fetch is implemented on undici and bypasses those modules, so it is only intercepted from nock v14 onward, which added native fetch support.
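When using nock directly with Jest rather than through the .nock() helper, two hygiene rules prevent flaky tests: block real network access, and fail tests that leave interceptors unused. One way to enforce both is a shared Jest setup file; this is a sketch, and the test/setup.ts path is an assumption, not from the project above:

```typescript
// test/setup.ts — nock hygiene applied to every test file (sketch)
import nock from "nock";

beforeAll(() => {
  // Any test that hits a real endpoint now fails fast instead of hanging
  nock.disableNetConnect();
});

afterEach(() => {
  // An interceptor that was declared but never matched usually means the
  // request under test silently went somewhere else
  const pending = nock.pendingMocks();
  nock.cleanAll();
  if (pending.length > 0) {
    throw new Error(`Unused nock interceptors: ${pending.join(", ")}`);
  }
});

afterAll(() => {
  nock.enableNetConnect();
});
```

Register it with setupFilesAfterEnv: ["<rootDir>/test/setup.ts"] in jest.config.js so it runs in every test file.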

Testing Flag and Argument Validation

oclif validates flags before run() is called. You can test validation errors by catching the thrown error:

describe("Upload flag validation", () => {
  it("requires --file flag", async () => {
    await expect(
      Upload.run(["--bucket", "my-bucket"])
    ).rejects.toThrow(/Missing required flag/);
  });

  it("rejects unknown flags", async () => {
    await expect(
      Upload.run(["--file", "/tmp/test.txt", "--unknown-flag", "value"])
    ).rejects.toThrow(/Nonexistent flag/);
  });
});

For flag type validation (integer ranges, enum options), test with values that should pass and values that should fail:

// Command with enum flag
static flags = {
  format: Flags.string({
    options: ["json", "yaml", "table"],
    description: "Output format",
    default: "table",
  }),
};

// Test
it("rejects invalid format option", async () => {
  await expect(
    MyCommand.run(["--format", "xml"])
  ).rejects.toThrow(/Expected --format=xml to be one of/);
});

Testing Error Handling with CLIError

oclif has a CLIError class for user-facing errors. These display cleanly with an Error: prefix rather than a stack trace:

import { CLIError } from "@oclif/core/lib/errors";

export default class Deploy extends Command {
  async run(): Promise<void> {
    const { flags } = await this.parse(Deploy);

    try {
      await deployService(flags.env, flags.version);
    } catch (error) {
      if (error instanceof NetworkError) {
        throw new CLIError(`Deployment failed: ${error.message}`, { exit: 1 });
      }
      throw error; // Re-throw unexpected errors
    }
  }
}

Testing CLIError:

import { CLIError } from "@oclif/core/lib/errors";

it("throws CLIError on network failure", async () => {
  const { deployService } = require("../services/deploy");
  deployService.mockRejectedValue(new NetworkError("Connection refused"));

  // expect().rejects replaces fail(), which Jest's default jest-circus
  // runner does not provide
  const run = Deploy.run(["--env", "staging", "--version", "1.0.0"]);
  await expect(run).rejects.toBeInstanceOf(CLIError);
  await expect(run).rejects.toMatchObject({
    message: expect.stringMatching(/Connection refused/),
    oclif: { exit: 1 },
  });
});

Integration Testing with execa

For true end-to-end testing where you need to test the full CLI invocation including oclif's own error formatting, help generation, and shell behavior, use execa to spawn the actual process:

npm install --save-dev execa

// test/integration/cli.test.ts
import { execa, ExecaError } from "execa";
import * as path from "path";
import * as fs from "fs";
import * as os from "os";

const CLI_PATH = path.resolve(__dirname, "../../bin/run");

async function runCLI(
  args: string[],
  options: { env?: Record<string, string>; cwd?: string } = {}
): Promise<{ stdout: string; stderr: string; exitCode: number }> {
  try {
    const result = await execa("node", [CLI_PATH, ...args], {
      env: { ...process.env, ...options.env },
      cwd: options.cwd,
      reject: false, // don't throw on non-zero exit
    });
    return {
      stdout: result.stdout,
      stderr: result.stderr,
      // exitCode is undefined when the process fails to spawn; treat as failure
      exitCode: result.exitCode ?? 1,
    };
  } catch (err) {
    const execaErr = err as ExecaError;
    return {
      stdout: execaErr.stdout ?? "",
      stderr: execaErr.stderr ?? "",
      exitCode: execaErr.exitCode ?? 1,
    };
  }
}

describe("CLI integration tests", () => {
  let tmpDir: string;

  beforeEach(() => {
    tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), "cli-test-"));
  });

  afterEach(() => {
    fs.rmSync(tmpDir, { recursive: true });
  });

  it("shows help with --help flag", async () => {
    const { stdout, exitCode } = await runCLI(["--help"]);
    expect(exitCode).toBe(0);
    expect(stdout).toContain("USAGE");
    expect(stdout).toContain("COMMANDS");
  });

  it("upload command succeeds with valid file", async () => {
    const testFile = path.join(tmpDir, "data.json");
    fs.writeFileSync(testFile, JSON.stringify({ key: "value" }));

    const { stdout, exitCode } = await runCLI(
      ["upload", "--file", testFile, "--bucket", "test"],
      { env: { API_URL: "https://api-mock.example.com" } }
    );

    expect(exitCode).toBe(0);
    expect(stdout).toContain("Upload complete");
  });

  it("exits with code 1 on missing file", async () => {
    const { stderr, exitCode } = await runCLI([
      "upload",
      "--file",
      "/nonexistent/file.txt",
    ]);

    expect(exitCode).toBe(1);
    expect(stderr).toContain("File not found");
  });

  it("respects MYAPP_CONFIG_DIR environment variable", async () => {
    fs.writeFileSync(
      path.join(tmpDir, "config.yaml"),
      "default_bucket: custom-bucket\n"
    );

    const testFile = path.join(tmpDir, "test.txt");
    fs.writeFileSync(testFile, "hello");

    const { stdout } = await runCLI(["upload", "--file", testFile, "--dry-run"], {
      env: { MYAPP_CONFIG_DIR: tmpDir },
    });

    expect(stdout).toContain("custom-bucket");
  });
});

Jest Setup for oclif Projects

// jest.config.js
module.exports = {
  preset: "ts-jest",
  testEnvironment: "node",
  testMatch: ["**/*.test.ts"], // matches tests under both src/ and test/
  collectCoverageFrom: ["src/**/*.ts", "!src/**/*.d.ts"],
  coverageReporters: ["text", "lcov"],
  // Increase timeout for integration tests
  testTimeout: 30000,
  // Clear mocks between tests
  clearMocks: true,
  restoreMocks: true,
};
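An alternative to filtering by test path pattern is Jest's projects option, which gives each tier its own name and settings and lets you pick one with jest --selectProjects. A sketch along the lines of the layout used in this guide (the exact option split is an assumption about your project):

```javascript
// jest.config.js — split unit and integration tiers with Jest projects (sketch)
module.exports = {
  projects: [
    {
      displayName: "unit",
      preset: "ts-jest",
      testEnvironment: "node",
      testMatch: ["<rootDir>/src/**/*.test.ts"],
      clearMocks: true,
    },
    {
      displayName: "integration",
      preset: "ts-jest",
      testEnvironment: "node",
      testMatch: ["<rootDir>/test/integration/**/*.test.ts"],
      testTimeout: 30000, // spawned CLI processes are slower than unit tests
    },
  ],
};
```

With this layout, jest --selectProjects unit replaces the --testPathPattern='src/' script, and test output is labeled by tier.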

Separate unit and integration test scripts in package.json:

{
  "scripts": {
    "test": "jest --testPathPattern='src/'",
    "test:integration": "jest --testPathPattern='test/integration/'",
    "test:all": "jest",
    "test:coverage": "jest --coverage --testPathPattern='src/'"
  }
}

GitHub Actions CI

# .github/workflows/test.yml
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18, 20, 22]

    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: "npm"

      - name: Install dependencies
        run: npm ci

      - name: Run unit tests
        run: npm test

      - name: Build
        run: npm run build

      - name: Run integration tests
        run: npm run test:integration

      - name: Upload coverage
        if: matrix.node-version == 20
        uses: codecov/codecov-action@v4

A Complete Test Suite for a File Upload Command

Putting it all together, a production-ready test suite covers:

  1. Unit tests — happy path, each error branch, each flag combination, mocked services
  2. Validation tests — missing required flags, invalid flag values, type mismatches
  3. Integration tests — full binary invocation, environment variable handling, real filesystem
  4. Error handling tests — network failures, auth errors, malformed responses

The goal is that when you run npm test, every code path in your CLI has been exercised and every user-facing behavior has been asserted. The release checklist becomes: "did npm test pass?" — nothing more.

oclif's class-based command model makes this achievable without heroic mocking effort. The run() method is just an async function. Mock its dependencies with Jest, capture its output, check the results. Do it for every command, and you ship CLIs that actually work.

HelpMeTest extends CLI testing with 24/7 endpoint monitoring and AI-powered test suites — start free at helpmetest.com
