# Get Your Preferred Secrets Manager in a Secure Cross-Platform CLI Toolkit
Steve Swoyer | 18 March 2025

Why store credentials locally when you can fetch them at runtime using your preferred secrets manager? Better yet, why not run the same environment everywhere, from local development to CI to production?
Persisting credentials in unencrypted files or system keyrings is a common pattern in local development, but it doesn’t scale to CI or production. Unencrypted storage is a known security risk, while keyrings are tied to specific platforms (like Keychain on macOS) or depend on desktop user sessions, making them a non-starter for headless environments like CI and prod.
A better pattern is to use a third-party secrets manager to retrieve credentials ephemerally at runtime, injecting them only where needed so they aren’t persisted beyond the process using them.
This article walks through a cross-platform workflow spanning local dev, GitHub, and AWS to show how you can create a portable environment with built-in, transparent auth via a third-party secrets manager. This gives you CLI workflows that work the same way everywhere—whether you're running locally, in CI, or in prod.
## A better, more secure pattern for managing secrets
If you’re working on your local system with tools like Git (`git`), GitHub CLI (`gh`), and AWS CLI v2 (`aws`), there’s a good chance at least one of them is persisting credentials to unencrypted files, exporting them as env vars, or otherwise handling them in ways that would cause a CISO to cringe.

The AWS CLI and GitHub CLI do this by default, storing unencrypted tokens in `~/.aws/credentials` and `~/.config/gh/hosts.yml`, respectively. If credentials are sitting in unencrypted files, it’s just a matter of time before they end up in a repo, a log, or a build artifact—propagating silently from there.
So for GitHub, AWS, and other services, I use a hardened secrets manager like 1Password, HashiCorp Vault, or Doppler to fetch and inject credentials at runtime—without storing them on disk or exposing them as env vars. The good news is that each of these CLI tools is available in the Flox Catalog!
The following walk-through showcases 1Password as an exemplar, because this is what we use internally at Flox. But you can easily build an equivalent Flox environment that works with any other secrets manager that supports a comparable fetch-and-inject pattern. These include:

- **1Password CLI.** Uses `op run` to inject secrets into CLI commands;
- **HashiCorp Vault CLI.** `vault exec` runs a command with secrets injected at runtime;
- **Doppler.** Like 1Password, uses `doppler run` to inject secrets into CLI commands.
## How this pattern works
With 1Password CLI, `op run` pulls secrets at runtime and injects them into CLI commands like `gh`.
The overall pattern looks like this:
- The `op run` command executes in an ephemeral subshell;
- Secrets are retrieved directly from 1Password and exported as environment variables;
- They’re available in the ephemeral subshell for as long as it exists;
- When the command finishes executing, the subshell itself is destroyed.
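The key property is in the last two steps: secrets live and die with the subshell. You can see the same scoping with plain shell, no `op` or 1Password account required (the token value here is a made-up placeholder):

```shell
# A plain-shell illustration of the scoping `op run` relies on: the
# variable exists only in the child process's environment, and is gone
# once that process exits. It never leaks into the parent shell.
GITHUB_TOKEN="example-token" sh -c 'echo "child sees: ${GITHUB_TOKEN}"'
echo "parent sees: ${GITHUB_TOKEN:-<unset>}"
```

The first line prints the token from inside the child process; the second shows the parent shell never had it.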
I built a proof-of-concept Flox environment that fetches GitHub, AWS, and other tokens from my 1Password vault and makes them available to `op run` as environment variables, which the GitHub CLI uses for authentication. Secrets in this Flox environment are available only for the duration of the `op run` subshell—and this subshell itself runs as a child process of the Flox environment.
This environment wraps `gh`, `git`, and `aws` like so:
```bash
# gh wrapper function
gh() {
    op run --session "$OP_SESSION_TOKEN" \
        --env-file <(echo "GITHUB_TOKEN=op://$OP_GITHUB_VAULT/$OP_GITHUB_TOKEN_ITEM/$OP_GITHUB_TOKEN_FIELD") \
        -- gh "$@"
}

# git wrapper function
git() {
    if [[ "$1" =~ ^(push|pull|fetch|clone|remote)$ ]] && \
       token=$(op read "op://$OP_GITHUB_VAULT/$OP_GITHUB_TOKEN_ITEM/$OP_GITHUB_TOKEN_FIELD" --session "$OP_SESSION_TOKEN" 2>/dev/null); then
        askpass=$(mktemp)
        echo -e "#!/bin/sh\necho $token" > "$askpass"
        chmod +x "$askpass"
        GIT_ASKPASS="$askpass" GIT_TERMINAL_PROMPT=0 command git -c credential.helper= "$@"
        rm -f "$askpass"
    else
        command git "$@"
    fi
}

# aws wrapper function
aws() {
    op run --session "$OP_SESSION_TOKEN" \
        --env-file <(echo -e "AWS_ACCESS_KEY_ID=op://$OP_AWS_VAULT/$OP_AWS_CREDENTIALS_ITEM/$OP_AWS_USERNAME_FIELD\nAWS_SECRET_ACCESS_KEY=op://$OP_AWS_VAULT/$OP_AWS_CREDENTIALS_ITEM/$OP_AWS_CREDENTIALS_FIELD") \
        -- aws "$@"
}
```
The `git` function is larger because `git` normally prompts for credentials interactively or relies on a credential helper. We work around this by retrieving the token dynamically, generating a temporary script that outputs the token, and telling `git` to use that script via `GIT_ASKPASS`.
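Stripped of the 1Password call, the askpass trick reduces to this sketch (the token value here is a placeholder; in the wrapper it comes from `op read`):

```shell
# Minimal demo of the GIT_ASKPASS pattern: write a throwaway script that
# prints the token, so git invokes it instead of prompting interactively.
token="example-token"          # placeholder; the real wrapper fetches this at runtime
askpass=$(mktemp)
printf '#!/bin/sh\nprintf %%s "%s"\n' "$token" > "$askpass"
chmod +x "$askpass"
"$askpass"                     # git runs this whenever it needs a credential
rm -f "$askpass"               # the script outlives the command by milliseconds, not sessions
```

Note that quoting the token inside the generated script, as `printf` does here, is slightly safer than an unquoted `echo $token` if the token ever contains shell metacharacters.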
You could easily wrap other CLI tools in a Flox environment—like the Databricks CLI:

```bash
databricks() {
    op run --session "$OP_SESSION_TOKEN" \
        --env-file <(echo -e "DATABRICKS_HOST=op://$OP_DATABRICKS_VAULT/$OP_DATABRICKS_ITEM/$OP_DATABRICKS_HOST_FIELD\nDATABRICKS_TOKEN=op://$OP_DATABRICKS_VAULT/$OP_DATABRICKS_ITEM/$OP_DATABRICKS_TOKEN_FIELD") \
        -- databricks "$@"
}
```
These tools, along with dozens of others—like the Snowflake CLI, Azure CLI, Google Cloud Platform SDK, Terraform CLI, OpenStack CLI, and so on—are available via the Flox Catalog.
## Transparent auth for cross-platform workflows
Now let’s see how this works in a cross-platform workflow. We’ll use a Flox environment that gives us `aws`, `gh`, Node.js, and other tools along with integrated 1Password secrets management. With it, we’ll build, test, and deploy an AWS Lambda function—without having to worry about debugging missing dependencies or wrestling with auth headaches.
Imagine your team manages infrastructure, deploys applications, and runs data pipelines across GitHub, AWS, Databricks, and other cloud services. You use CLI tools to pull data, deploy code, and manage infra, all of which require secure auth and secrets management.
No problem! Your platform team built a Flox environment with everything you need, including automated auth via 1Password for `git`, `gh`, `aws`, and `databricks`. The only thing you care about is that this environment Just Works everywhere, regardless of which platform it’s running on. It even works across contexts—locally, in CI, and in prod. And because Flox is based on Nix, it enables reproducible builds at scale. You can get this environment by running:

```bash
flox pull --copy barstoolbluz/xplatform-cli-tools
```
Running `flox pull --copy` gives you a way to customize your platform team’s Flox environment. This is useful if you need to install packages, define env vars and services, or author custom logic. For your org as a whole, FloxHub enables simplified sharing via its intuitive `flox push` and `flox pull` syntax.
You can also get this environment from GitHub. Just run the following command:

```bash
git clone https://github.com/barstoolbluz/xplatform-cli-tools
```
OK, let’s dig into a plausible sample workflow that showcases this environment at work.
## The workflow
In this example workflow, we’ll spin up a Git repo, throw together a basic Node.js Lambda, and test it locally before pushing it to GitHub. Deployment runs through the AWS CLI, which is wrapped to handle auth via 1Password and `op run`—which means no exposed creds, no manual token management. Once our Lambda is live, we’ll hit it with a test payload, verify the output, and log the results as a GitHub issue. Auth and secrets management in this workflow are completely transparent. Best of all, everything—CLI tools, env vars, auth wrappers—lives inside a Flox environment, so it’s portable and reproducible.
First, let’s set up our repository. We’ll create our repo directory, change into it, and `git init` a new repo:

```console
$ mkdir mars-needs-lambdas && cd mars-needs-lambdas
$ git init
…
Initialized empty Git repository in /home/daedalus/dev/lambda-demo/mars-needs-lambdas/.git/
```
Next we’ll create our Lambda project’s `package.json`. This is a proof-of-concept only, so let’s keep it basic:
```console
$ cat > package.json << 'EOF'
{
  "name": "mars-needs-lambdas",
  "version": "1.0.0",
  "description": "a simple lambda function for demonstration purposes",
  "main": "index.js",
  "scripts": {
    "test": "node -e \"const handler = require('./index').handler; handler({test: 'local-execution'}).then(console.log)\""
  },
  "author": "",
  "license": "MIT",
  "dependencies": {}
}
EOF
```
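The `index.js` itself isn’t reproduced in this post, but a minimal handler consistent with the test output shown below would look something like this, written with the same heredoc pattern (the exact logging format is an assumption):

```shell
cat > index.js << 'EOF'
// Minimal Lambda handler sketch: echo the event back with some metadata.
exports.handler = async (event) => {
    console.log("event received:", JSON.stringify(event, null, 2));
    const response = {
        receivedEvent: event,
        timestamp: new Date().toISOString(),
        message: "the last thing mars needs is human settlement",
        environment: process.env.ENVIRONMENT || "development"
    };
    console.log("response:", JSON.stringify(response, null, 2));
    return response;
};
EOF
```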
Now let’s test the function locally with Node.js before deploying to AWS. We can `flox install nodejs` if any version will do, or `flox install` a specific version—e.g., `flox install nodejs@20.14.0`.

```console
$ flox install nodejs@20.14.0
✅ 'nodejs' installed to environment 'aws-ghcli'
```
```console
$ node -e "const handler = require('./index').handler; handler({test: 'sample-event'}).then(console.log)"
event received: {
  "test": "sample-event"
}
response: {
  "receivedEvent": {
    "test": "sample-event"
  },
  "timestamp": "2025-03-16T18:39:31.738Z",
  "message": "the last thing mars needs is human settlement",
  "environment": "development"
}
{
  receivedEvent: { test: 'sample-event' },
  timestamp: '2025-03-16T18:39:31.738Z',
  message: 'the last thing mars needs is human settlement',
  environment: 'development'
}
```
It works! Next we’ll add `index.js` and other required artifacts to the repo:

```console
$ git add index.js package.json
$ git commit -m "mars needs lambdas function implementation"
[master (root-commit) 5636714] mars needs lambdas function implementation
 2 files changed, 27 insertions(+)
 create mode 100644 index.js
 create mode 100644 package.json
```
Now let’s (re)create our repo on GitHub:
```console
$ gh repo create mars-needs-lambdas --private --source=. --remote=origin
https://github.com/barstoolbluz/mars-needs-lambdas
```
And push it to master:
```console
$ git push -u origin master # or 'main' depending on your default branch
Enumerating objects: 4, done.
Counting objects: 100% (4/4), done.
Delta compression using up to 16 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 807 bytes | 807.00 KiB/s, done.
Total 4 (delta 0), reused 0 (delta 0), pack-reused 0 (from 0)
To https://github.com/barstoolbluz/mars-needs-lambdas.git
 * [new branch]      master -> master
branch 'master' set up to track 'origin/master'.
```
In the background, I ran a script that lets me define and invoke IAM roles as needed. This script also deploys my Lambda functions; here’s some partial output from its run:

```console
creating lambda function 'mars-needs-lambdas'...
{
    "FunctionName": "mars-needs-lambdas",
    "FunctionArn": "arn:aws:lambda:us-east-1:140439111649:function:mars-needs-lambdas",
    "Runtime": "nodejs18.x",
    "Role": "arn:aws:iam::140439111649:role/mars-lambdas-role",
    "Handler": "index.handler",
    ...
}
```
In the following code block, I’ll create a dummy JSON object that serves as the event input passed to my Lambda function when I invoke it. When my Lambda executes, this data appears in the `event` parameter of my handler function—the entrypoint for my Lambda, defined in `index.js`—giving me a way to test how my function processes different inputs. To do all of this, I’m invoking my Lambda function using the AWS CLI, which is wrapped by my Flox environment to transparently handle authentication via 1Password and `op run`. This both avoids credential leakage and keeps my workflow simple:
```console
$ echo '{"key": "demo value", "source": "aws-cli"}' > payload.json
$ aws lambda invoke \
    --function-name mars-needs-lambdas \
    --payload fileb://payload.json \
    --cli-binary-format raw-in-base64-out \
    response.json
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}
```
```console
$ cat response.json
{
    "receivedEvent": {
        "key": "demo value",
        "source": "aws-cli"
    },
    "timestamp": "2025-03-16T19:09:48.097Z",
    "message": "the last thing mars needs is human settlement",
    "environment": "development"
}
```
Now I’ll use `gh` to create a GitHub issue that includes the results:

```console
$ gh issue create --title "Lambda function test results" --body "$(cat response.json)"
https://github.com/barstoolbluz/mars-needs-lambdas/issues/1
```
Success!
## Try it for yourself
All of the CLI tools used in this example workflow—`gh`, `git`, `aws`, and `op` itself—are wrapped to use either `op run` (`gh` and `aws`) or `op read` (`git`), so credentials aren’t exported as env vars or persisted to disk. And because it’s defined as a Flox environment, it runs the same everywhere.
If a CLI tool needs credentials, wrapping it is just a matter of knowing:
- Where your secrets live;
- How to fetch them; and
- How to pass them in.
Every secrets manager—whether it’s 1Password, HashiCorp Vault, or Doppler—has a structure: vaults, paths, projects, configs, whatever you want to call them. You just need to map these fields to the specific secrets and keys you care about. Tools like `aws` and `gh` work cleanly with env vars, so you can use `op run`, `vault exec`, or `doppler run` to inject them when the command runs. Other tools, like `git`, might need a different approach, like `op read` or dynamically setting `GIT_ASKPASS`.
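As a sketch, any such wrapper reduces to the same shape. Everything here is hypothetical: `fetch_secret` stands in for `op read`, `vault kv get`, or `doppler secrets get`, and the `sh -c` stands in for the real CLI invocation (`command mycli "$@"`):

```shell
# Generic fetch-and-inject wrapper (all names are hypothetical stand-ins).
fetch_secret() { printf 'example-token'; }    # stand-in for your secrets manager's read command

mycli() {
    # The token is injected only into the wrapped command's environment;
    # it never becomes a variable in the interactive shell itself.
    MYCLI_TOKEN="$(fetch_secret)" sh -c 'echo "mycli sees: ${MYCLI_TOKEN}"'
}

mycli
echo "parent sees: ${MYCLI_TOKEN:-<unset>}"
```

If the fetch fails, a production-grade wrapper would also bail out with an error instead of running the tool unauthenticated.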
The Flox environment used in this walk-through is built around 1Password, but the pattern applies to any secrets manager—fetch secrets at runtime, inject them where appropriate, and don’t leave them behind.
## Workflows that Just Work anywhere
Managing auth, secrets, and tooling across the SDLC has got to be one of the biggest challenges in DevOps. Platform teams want to give engineers a reliable, reproducible way to handle all three, but spend too much time debugging works-on-my-machine issues. Managing secrets works differently in local dev than in CI and prod, so everyone—from individual contributors, to SREs, to platform teams—falls back on a least-common-denominator approach … because it’s the only thing that works everywhere.
Flox is a cross-platform package and environment manager you can use to create portable environments that include everything you need at runtime—like CLI tools, infrastructure services, dependencies, and application frameworks—so your workflows behave the same across local dev, CI, and prod.
Intrigued by what you’ve read? Skeptical? Flox is free—so why not download it and try it out? Put it to the test yourself!