A Turnkey Toolkit for Agentic Development with Flox

Steve Swoyer | 17 December 2025

Scheduled some time off to close out the year? Have we at Flox got a modest proposal for you!

Thirty-plus AI assistants / MCP servers / spec-driven-AI tools. Two weeks—from today until New Year’s. Can you test drive all of them? On your own machine? At once? Without breaking anything? (Spoiler alert: Yes, you can!) What will you build? How quickly will you build it? With whom will you share it? How will they run it on their machines? And how much fun do you think you’ll all have along the way?

Oodles—assuming you’re pairing with Flox, the cross-platform, cross-language package manager powered by Nix.

It’s like this: Claude Code, OpenAI Codex, GitHub Copilot, Google Gemini CLI, and other AI agents/coding assistants can use Flox’s MCP server to discover, find specific versions of, and install just the right dependencies, searching among millions of historical packages. Flox is also a cross-platform, cross-language, cross-architecture virtual environment manager, so agentic tools won’t squander tokens resolving and debugging dependency conflicts across languages, toolchains, OSes, or CPU architectures. It’s even possible to run conflicting versions of dependencies in the same environment, on the same system, at the same time.

Best of all, Flox runs directly on your system—no containers or VMs required.

And when you’re ready to share what you’ve created, the same Flox environments you built and perfected locally—perhaps on that new MacBook Pro or Framework 16 laptop you got for the holidays—Just Work when friends, coworkers, or amazed/grateful/suppliant users pull and run them on their own machines.

Read on to find out why Flox and FloxHub are perfect for vibe coding, prototyping, or agentic development.

AI Coding Agents & CLIs

This repo consolidates 30+ tools into a one-stop resource for agentic development. Each folder is its own turnkey Flox environment, complete with a README.md that explains what it is, how it works, and what features / affordances it offers.

After downloading and installing Flox, you start Flox environments like so:

$ flox activate

If an environment contains services—like MCP servers, databases, model servers, workflow scheduling engines (like Airflow), and so on—you can start them when you activate the environment or from within an already-activated session.

$ flox activate -s

# or, from inside an activated session:

$ flox services start <optional_service_name>    # name the service if more than one is defined
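
For reference, services are declared in an environment’s manifest.toml. Here’s a minimal sketch, loosely modeled on an Ollama-style service—the environments in this repo may define theirs differently:

[services.ollama]
command = "ollama serve"

With a definition like that in place, flox activate -s brings the service up alongside your shell, and flox services status lets you check on it from inside the session.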

This is all you need to start working with any of these tools. As for the AI agents/assistants included here, many if not most will get the knowledge + context they need from the Flox MCP server; others will benefit from reading and referring back to the FLOX.md file that’s included with this repo.

Of course, these tools don’t need to know anything about Flox. Flox just lets them run anytime/anywhere. How they run and what you do with them is up to you—or (Skynet isn’t a thing yet, right?) them.

Agentic Development with Flox

The repo is a work in progress: more projects will likely be added as they’re identified. Beaucoups of thanks go out to the folks at Numtide, who inspired this with their seminal curated repo of Nix AI tools. Some of the tools collected here still haven’t been picked up by upstream Nixpkgs, so I’ve used Numtide’s Nix expressions, along with Claude Code-concocted Nix expressions (generated at my own prompting), to build, package, and publish these with Flox.

For dynamic projects like Crush, nanocoder, and ollama, I closely track upstream and try to package + publish new versions as soon as they’re available. So for these and a few other tools, the versions you pull from the Flox Catalog will typically be newer than those in Nixpkgs. Nix users can use these same Nix expressions to build and/or run these tools without Flox.

You can fork this repo and do whatever you want with it. You can explore each environment to crib examples for creating, customizing, or improving your own Flox environments. To copy a Flox manifest.toml declarative configuration file is to copy a version of that environment. (To copy both manifest.toml and manifest.lock is to create an exact replica of that environment.) To copy the [install], [vars], [hook], [profile], [services], or [build] sections of a Flox manifest is to copy the dependencies, variables, setup tasks, shell functions and aliases, services, or build recipes associated with them. (Just be sure to look for deps in other parts of the manifest. For example, stuff in [services] usually depends on packages defined under [install].) The world is, as it were, yours.
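
If you’ve never read one, here’s a heavily abridged sketch of what those sections look like in practice. The package names, variables, and commands are illustrative placeholders, not copied from any particular environment in this repo; [services] looks like the earlier example, and [build] defines how to build artifacts from the environment.

[install]
python3.pkg-path = "python313"
uv.pkg-path = "uv"

[vars]
APP_PORT = "8000"

[hook]
on-activate = '''
  # runs every time the environment is activated
  test -d .venv || uv venv .venv
'''

[profile]
common = '''
  # sourced into every activated shell
  source .venv/bin/activate
'''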

AI Agents/CLI Assistants

aichat

LLM CLI tool providing an interactive REPL and command execution across multiple AI providers (OpenAI, Anthropic, local models with Ollama). Features role/session management, shell command execution (-e flag), code generation (-c flag), and RAG support for document context. Supports adding files to conversation context and persisting sessions for context management.

amp

Sourcegraph's coding agent with interactive setup. Supports encrypted secrets storage (system keyring) and works with multiple providers (including Anthropic, OpenAI, Gemini, Groq, DeepSeek, and Ollama). Can use Sourcegraph’s proprietary models, too. “Smart” and “Rush” modes offer paid access to Sourcegraph’s models; “Free” mode lets you use basic models at no cost.

claude-code

Anthropic's official Claude Code CLI bundled with the Flox MCP server. Automatically registers the Flox MCP server with Claude Code on activation, enabling Claude to manage Flox environments and packages directly. Creates a turnkey Claude Code development environment with built-in Flox integration. Defines optional GitHub and GitLab MCP servers, too; run flox edit and uncomment these to use them.

claude-code-acp

Agent Client Protocol adapter that bridges Claude Code with various editors (Zed, Neovim, Emacs, marimo) via npm package auto-installation. Runs an ACP server that acts as a protocol translation layer that lets Claude Code work with these editors/IDEs.

code

Officially known as “Every Code,” nicknamed “Code” by its maintainers. (The better to defy grep, find, and search indexers?) Fork of OpenAI’s Codex (see below) that expects to track/stay compatible with that project. Extends Codex with browser/Chrome DevTools Protocol (CDP) integration, plus support for third-party agents and MCP servers. Recent versions add a verification-first loop: automatically re-reviewing code changes in parallel (in a separate worktree) and streaming runtime signals from the app/browser back into the agent.

coderabbit-cli

CodeRabbit is a proprietary code-review service that embeds in pull requests and analyzes changes in the context of the overall repo, then synthesizes analyzer output into review comments and one-click fixes. CodeRabbit CLI uses that service’s free tier (which supports access for a large number of public repos), although workflows still run through the CodeRabbit SaaS platform. Free or paid, CodeRabbit learns team preferences and generates finishing work (e.g., tests/docs) across major Git hosts and IDEs/editors.

codex

OpenAI's local AI coding agent with ChatGPT account integration and approval-based command execution. (Supports a YOLO mode to bypass approval requirements: codex --sandbox danger-full-access). This environment includes a pre-integrated Flox MCP server.

copilot-cli

Conversational CLI tool with GitHub-specific context and support for multiple providers, including Anthropic’s Claude and OpenAI’s GPT models. This environment is pre-integrated with the Flox MCP server, so it’s the same Copilot CLI you know and love—just kitted out to work copacetically with Flox.

cursor-agent

Cursor's autonomous AI coding agent, exposed via an interactive TUI. Capable of planning and implementing project changes, manipulating files, running shell commands, etc. Note: Cursor Agent requires a valid Cursor subscription and can use any model that’s available via that subscription. This Flox environment is pre-integrated with the Flox MCP server for a first-class Flox experience.

crush

Charm's glamorous, stylish, and surprisingly powerful AI coding agent front-end. (Full disclosure: We here at Flox are fans.) Ships with a self-bootstrapping wizard that encrypts/stores secrets using system keyring (if available). Supports multiple providers—including Ollama! Pre-integrated with the Flox MCP server. Arguably the most versatile agentic coding assistant out there.

droid

Droid is Factory’s terminal-based development agent: a CLI that runs interactive sessions in your repos to plan, implement, test, refactor, or review code. Droid supports human-in-the-loop controls (e.g., diff viewing, approvals, autonomy and reasoning settings) and integrates with an organization’s shared tools and workflows (e.g., Jira/Notion/Slack). IDE integration automatically passes your current file/selection and IDE diagnostics (lint/typecheck/compiler errors) into the running Droid session. Supports an optional “cloud sync” function that lets you mirror local IDE sessions to Factory’s web service.

ECA

ECA, or Editor Code Assistant, is an open-source, editor-agnostic system that defines a JSON-RPC protocol (based on the Language Server Protocol) for connecting editors/IDEs to LLM-backed chat, rewrite, and inline completion features. Editors can start ECA as a subprocess and communicate over stdin and stdout, with ECA handling model/provider routing; prompt/context assembly (files, cursor, repo signals, MCP resources); and running tools. ECA is instrumented with OpenTelemetry and supports a standard config across editors and projects. Provider support includes OpenAI, Anthropic, GitHub Copilot, Ollama, and custom endpoints.

forge

TUI coding agent that runs in any repo to explain projects, plan changes, implement features, review diffs, and help debug. Can connect to one or more LLM providers and uses a tool-driven workflow to read projects, propose edits, run commands, and provide guidance on changes. (Includes an optional restricted shell mode.) You can drive Forge interactively or non-interactively by feeding it prompts, command files, workflows, events, or saved conversations. You tune Forge’s behavior with forge.yaml, which is useful for defining rules, shortcuts, a default model, traversal depth, temperature, and retry limits. MCP lets you attach external tools/services via local or user-level configs + use them with multi-agent workflows.

gemini-cli

Google's Gemini AI agent for the terminal, with authentication via OAuth, API key, or Vertex AI, plus MCP server integration with Flox. Features JSON output for scripting and automation, a VS Code extension for editor integration, and Google Cloud integration.

goose-cli

The CLI for Goose, an open-source, machine-local AI agent that plans and completes tasks by running commands, editing files, and testing code. Goose works with a staggering array of providers/models: Amazon Bedrock, Amazon SageMaker TGI, Anthropic, Azure OpenAI, Databricks, Docker Model Runner (local OpenAI-compatible), Google Gemini, GCP Vertex AI, GitHub Copilot, Groq, LiteLLM, Mistral AI, Ollama (local), Ramalama (local), OpenAI-compatible endpoints (like vLLM/KServe), OpenRouter, Snowflake, Tetrate Agent Router Service, Venice AI, and xAI. Supports sessions plus configuration and extension management from the CLI, so you can reuse the same workflows across projects and environments. Extensible via MCP-based add-ons.

kilocode-cli

Terminal CLI for Kilo.ai’s open-source agent. Kilocode CLI runs the same workflow as Kilo.ai’s VS Code extension, but gives you a way to plan/debug/implement changes from the terminal. Kilocode CLI works directly with third-party providers (using API keys) or routes requests through Kilo Gateway, Kilo.ai’s proprietary hosted access layer. The gateway exposes a large catalog of models (via OpenRouter) and is notionally useful if you need centralized billing, access, and control as an alternative to managing multiple providers.

kiro

Kiro is a desktop IDE that embeds a coding agent. Instead of a typical/ad-hoc chat-driven UX, Kiro incorporates first-class workflow primitives—specs, steering files, and hooks—to formalize how work gets planned, constrained, and automated. It turns prompts into “specs,” extracting requirements and producing an implementation plan. It uses markdown “steering” files to apply persistent guidance. Kiro “codebase-indexes” your workspace by scanning + tracking code, configs, docs, and dependencies. The IDE runs locally, but AWS operates the backend service and powers model inference via Amazon Bedrock.

nanocoder

A community-driven, local-first CLI coding agent that runs an interactive TUI chat (but also supports a non-interactive run mode) and can read/write files + execute shell commands as tools. Nanocoder connects to any OpenAI-compatible endpoint, so you can use local providers (Ollama, LM Studio, vLLM, llama.cpp, etc.) or hosted gateways (OpenRouter/OpenAI), with per-project or user-level config. It can also load extra capabilities via MCP servers (plus optional tool-approval controls, custom slash commands, and a VS Code extension for previewing diffs). Nanocoder is to working with local providers/models what Charm’s Crush is to working with commercial providers: an attractively styled, versatile, powerful tool.

opencode

Open-source AI coding agent built for the terminal. Its TUI lets you switch between “build” (for interactive development) and “plan” (for analysis + code exploration) agents. In “plan” mode, OpenCode defaults to read-only behavior, asks before running shell commands, and is geared toward planning changes. You can point OpenCode at Anthropic’s Claude, OpenAI’s GPT models, or Google Gemini, along with local providers (like Ollama). Uses a client/server architecture so the TUI is just one possible client. (You can run the agent on one machine and drive it from another client). OpenCode offers an optional (beta) desktop app, plus an internal “@general” subagent that can perform complex searches + multi-step tasks.

qwen-code

An open-source AI coding CLI adapted from Gemini CLI and tuned for Qwen3-Coder models. Chat in a TUI to analyze, edit, and automate work across a repo. Built-in authentication via Qwen OAuth for free quota-limited access, or run against any OpenAI-compatible endpoint by setting standard env vars: API key, base URL, model. (Also works with local Ollama models.) qwen-code adds Qwen-specific parsing tweaks + session/token controls. Supports optional vision auto-switching when pasting images.

Infrastructure & Services

amazon-q-cli

AWS's now-abandoned AI-powered terminal assistant with AWS Builder ID or IAM authentication and natural-language-to-command translation. Integrates with AWS ecosystem and provides enterprise auth options. The folks at AWS recently replaced q with a closed-source tool (kiro-cli), but (touch wood) it works—for now. This environment includes a pre-integrated Flox MCP server.

backlog-md

Markdown-native project board for any Git repo: Backlog.md manages tasks/docs/decisions as plain .md files inside your repos, renders an interactive terminal Kanban (backlog board), and can launch a web UI (backlog browser). Supports fuzzy search and exporting backlog board snapshots, stays private/offline by default, and integrates with AI assistants via MCP (or legacy CLI mode).

catnip

Containerized runner for long-lived, parallel Claude Code sessions. Spins up isolated workspaces via git worktrees inside a container; exposes a webUI, APIs, and mobile access. Catnip works with Codespaces/devcontainers or local runs; this Flox environment bundles Colima as a container runtime. (In case you don’t have one.) It auto-forwards ports for live previews + sets up SSH for remote-IDE access + can mount containers for container-based workflows. When Claude edits code, Catnip auto-commits to refs/catnip/... and keeps a branch in sync for direct review outside the container.

claude-code-router

A wrapper that runs Claude Code inside a “workspace,” giving you a live, paneled terminal view so you can watch what the agent is doing as it works. Instead of dropping you into a single chat-driven UX, it spins up a dedicated session environment (with your repo mounted into it); launches Claude Code there; and uses a tmux-style UI to show commands, output, and progress in real time. You can switch layouts (e.g., side-by-side or stacked panes), keep runs organized, and review what happened as the agent edits files and runs commands. If you want to use a different model—e.g., a local model served via Ollama—you do that by pointing Claude Code at a local proxy/router that speaks the right API. The wrapper itself should still work the same, because it’s just supervising the session and displaying what’s happening.

claudebox

Claudebox runs Claude Code inside a bubblewrap sandbox and opens a two-pane tmux session so you can watch what the agent does while it works. It shadows your $HOME so credentials stay out of reach (exception: ~/.claude); likewise, it mounts the project’s parent directory read-only and launches Claude Code with --dangerously-skip-permissions (YOLO mode) inside that sandbox. Claudebox also intercepts and logs every command Claude executes, streaming that live into a side pane (and saving a log under /tmp). You can disable the monitor pane, choose vertical vs horizontal splits, and optionally ignore your personal tmux config.

codex-acp

An ACP adapter that wraps the OpenAI Codex CLI (see above) so ACP-compatible clients like Zed can run Codex as an external agent. It supports context @-mentions, images, tool calls with permission prompts, follow-mode, edit review, TODO lists, and a set of slash commands (e.g., /review, /review-branch, /review-commit, /init, /compact, /logout), plus custom prompts and client-side MCP servers. For auth, it can use a ChatGPT subscription only when you run it locally (it won’t work for remote projects); alternatively, it can use API-key auth via CODEX_API_KEY or OPENAI_API_KEY for local and remote setups.

comfyui

An open-source, node-based UI for generating images using Stable Diffusion, Flux, and related models. Build workflows by wiring nodes together (load model → prompt/conditioning → sampler → decode → save/output), then generate images. ComfyUI emphasizes composable, inspectable pipelines: tweak a node, swap components, and reuse or share complete workflows. You supply the model files, often by downloading checkpoints/LoRAs from HuggingFace, CivitAI, or other open source providers.

gpt4all

A “run LLMs on your own machine” toolkit. GPT4All ships a desktop chat UI (for Windows / macOS / Linux) and a Python SDK you can use to download + run open models locally. Bundles an inference engine built around llama.cpp-style GGUF workflows. Adds useful patterns on top of ad hoc/interactive inference: e.g., a pre-built RAG-like pattern (LocalDocs), which you can use to index local folders and answer questions by pulling relevant snippets from indexed documents. If you want other tools (e.g., editors, scripts, agents) to talk to it, GPT4All can run a local OpenAI-compatible HTTP server on localhost so clients can call /v1/chat/completions and similar endpoints.

ollama

Local model runtime provider. Pull, run, and work with models. Ollama also runs a local HTTP service (default is localhost:11434) so other apps can talk to it instead of embedding their own model runners. Support for OpenAI APIs unlocks interoperability with almost all coding agents, coding assistants/CLI tools, and many editors. This Flox environment runs GPU-accelerated on supported platforms (Nvidia CUDA and Apple Metal/MPS), falling back to CPU otherwise. Can be configured to support AMD ROCm—just edit (flox edit) the Flox manifest to replace the declaratively defined ollama-cuda package with ollama-rocm.
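
Once the service is running, anything that speaks the OpenAI chat-completions API can point at it. A quick smoke test from the shell—the model name here is just an example; use whatever you’ve pulled with ollama pull:

$ curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "qwen2.5-coder:7b", "messages": [{"role": "user", "content": "Say hello"}]}'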

Open-WebUI

A self-hosted, browser-based chat UI for interacting with LLMs. Pair it with Ollama for running local models; can also connect to other providers and non-OpenAI APIs via “functions/pipelines” adapters. Open WebUI adds chat UX on top of model selection + optional knowledge/RAG-style features, so you can run + manage an AI chat experience locally. This Flox environment includes an integrated Ollama service. Note: You’ll first want to pull Ollama models to use with it.

vllm

An open-source inference and serving engine for running large language models at scale. vLLM serves open-source models faster and at lower cost, without locking you into a single provider’s API. Its métier is high throughput + low latency: vLLM optimizes by managing KV-cache memory and batching incoming requests. This keeps GPUs (and other available accelerators) busy. Use it as a Python library or run it as an OpenAI-compatible API server. vLLM supports a broad range of Hugging Face models (including MoE, embeddings, and multimodal models) and offers pragmatic deployment features—like streaming responses, prefix caching, Multi-LoRA, distributed inference (tensor/pipeline/data/expert parallelism). Supports multiple quantization options (INT4/INT8/FP8 via common toolchains).
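
As a rough sketch of the server path—the model name and port are illustrative, and you’ll need an accelerator with enough memory for whatever you load:

$ vllm serve Qwen/Qwen2.5-Coder-7B-Instruct --port 8000
$ curl http://localhost:8000/v1/models

From there, any OpenAI-compatible client (including most of the agents above) can target http://localhost:8000/v1.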

MCP Servers, Spec Tools, and Other Goodies

The following are most useful as demonstrations of which MCP tools are available from the Flox Catalog, plus as canonical examples of how to use MCP servers in Flox environments. Some environments (Claude Code, Codex, Gemini CLI, Crush) define one or more MCP servers; you can refer to these, too, for additional examples.

Tools like OpenSpec and GitHub spec-kit make AI-assisted coding more predictable by turning requirements into versioned, reviewable spec artifacts that you can diff, test against, and hold the implementation accountable to. Spec tools aim to reduce rework and scope drift by forcing alignment on intent before code generation. They provide an auditable trail from spec → tasks → changes.

flox-mcp-server

Model Context Protocol server for Flox package management that integrates with AI assistants via stdio. Features Claude Code and VS Code integration and operates as a standalone MCP server for Flox operations. Provides Flox package management capabilities + context for AI coding workflows.
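
If you’d rather wire it up yourself than rely on one of the pre-integrated environments, registration is typically a one-liner in whatever MCP client you use—for example, with Claude Code (assuming the server’s executable is named flox-mcp-server; check the environment’s README for the exact command):

$ claude mcp add flox -- flox-mcp-server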

MCPHost

Command-line app that lets you chat with AI providers (Anthropic Claude, OpenAI GPT, Google Gemini, or local providers like Ollama) while giving them access to external tools via MCP servers. Point MCPHost at the tool servers you want—built-in ones (filesystem, bash, todo, HTTP); local MCP processes you run on your machine; or remote MCP endpoints over HTTP—and it connects to all of them, discovers their tools, and presents them as a single, consistent toolset. Use it as a central place for managing tool connections, permissions, and policy hooks (logging, blocking risky commands, custom rules), or to switch models/providers without reconfiguring MCP tools across separate clients. Also supports non-interactive prompts and script-style runs, so you can reuse the same setup for automation + repeatable workflows.

OpenSkills

CLI utility that brings Claude Code–style “skills” to any coding assistant via a shared on-disk skills folder + a standard AGENTS.md index. Users run openskills install <command> to pull skill bundles from GitHub/local repos into .claude/skills/ (or .agent/skills/ in universal mode); openskills sync writes an <available_skills> list into AGENTS.md so your agent can discover what’s available. When working with other CLI terminal agents (OpenAI Codex CLI, Google Gemini CLI, Charm Crush), the agent reads AGENTS.md. Skill content loads only when needed, i.e., without requiring a dedicated MCP server or plugin. Whenever an agent needs a skill, it runs openskills read <skill>; this prints that skill’s SKILL.md content into the session.

OpenSpec

A spec workflow you add to your existing repos so you + your AI coding assistant can align on intended behavior. Run openspec init to create an openspec/ sub-folder structure: openspec/specs/ stores the current source-of-truth specs; openspec/changes/ stores per-feature change folders, each bundling a proposal, a task checklist, and spec deltas (i.e., the “patch” to the specs). Some assistants surface OpenSpec’s steps as /openspec <verb> slash commands after running openspec init, because OpenSpec writes the instruction/config files they read at startup. But these commands are mere shortcuts: the “real” workflow lives in the repo as an openspec/ folder. With tools that don’t support custom slash commands, you run the same spec-first loop by posing requests in plain language and having the assistant create/review/apply/archive OpenSpec files.
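
In other words, after openspec init your repo gains a structure along these lines (the change name and file names are illustrative):

openspec/
  specs/              # current source-of-truth specs
  changes/
    add-login/        # one folder per proposed change
      proposal.md
      tasks.md
      specs/          # spec deltas: the “patch” to apply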

spec-kit

GitHub’s open-source toolkit for “spec-driven development.” It pushes you to define requirements + outcomes first, then turns your spec into a plan → task list → code. Run the specify CLI (with commands like specify init and specify check) to scaffold or retrofit projects. spec-kit drops a small set of files into your repo and sets up your preferred agent so that when you type /speckit.<command>, the agent runs spec-kit’s scripts and uses the generated artifacts. Supports a number of tools, including Claude Code, OpenAI Codex CLI, Gemini CLI, Cursor, Copilot, Kilo Code, opencode, and Qwen Code.

>190,000 Other Packages, Millions of Historical Versions

The Flox Catalog contains just about every dependency you (or your AI agent minions) will need to build, test, and ship software. The Flox environments collected in this repo offer ready-to-run implementations showcasing some of these packages; even so, what you see here doesn’t come close to exhausting what’s available. Here’s a sampling of the MCP packages available in the Flox Catalog:

  • github-mcp-server. Exposes GitHub repo/issue/PR operations via the GitHub API.
  • gitea-mcp-server. Exposes Gitea repo/issue/PR operations via the Gitea API.
  • terraform-mcp-server. Lets agents inspect/plan/apply Terraform workflows.
  • mcp-k8s-go. Go-based MCP tool server that exposes Kubernetes cluster operations (via kubectl-style reads/actions) as callable tools.
  • aks-mcp-server. MCP server for Microsoft’s Azure Kubernetes Service (AKS); supports cluster/resource management + other operations; surfaced as MCP tools.
  • playwright-mcp. Exposes Playwright browser automations (navigate/click/extract/screenshot) as MCP tools.
  • mcp-grafana. Exposes Grafana entities (dashboards/panels/alerts/queries) as tools.
  • mcp-proxy. Forwards/bridges MCP connections so clients can reach tool servers through a single hop.
  • pythonXXXPackages.mcp (where XXX is the version). Python MCP library/SDK used to build MCP clients/servers and define tool schemas in Python. Packages available for Python versions 3.10–3.13.
  • toolhive. Installs/runs/manages MCP servers as “tool bundles” so agents can consume them.
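
Adding any of these to an environment works the same as adding any other package, for example:

$ flox search mcp
$ flox install github-mcp-server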

A Holiday Gift to You from the Folks at Flox

How about another serving of turnkey environments to build with as you ramp up your vibe-coding vacation? If you or your AI agent minion need a database, scheduler, workflow management engine, httpd/reverse proxy, or other tools, services, or runtimes, check out the following repos:

  • flox/floxenvs. A collection of ready-to-run Flox environment examples (via flake.nix) for language toolchains like Go, Python (pip/poetry/uv), JavaScript (node/bun/deno), Ruby, and Rust, plus local service stacks like Postgres, Redis, MySQL, MongoDB, Cassandra, and Elasticsearch. This repo also includes environment templates for tools/apps such as nginx, mkcert, direnv/dotenv, 1Password, Dagger, Podman/Colima, LocalStack, JupyterLab, and Ollama.
  • floxrox/floxenvs. Another collection of ready-to-run Flox environments, including: Airflow, AWS CLI, Colima, ComfyUI, Dagster, GitHub CLI, Jenkins, JupyterLab, Kafka (plus Karapace, a schema registry), kind, MariaDB/MySQL, n8n, Neo4j, nginx, Node-RED, Ollama (plus Open WebUI), Postgres (including a Metabase combo), Prefect, Redis, Spark, Temporal (plus a temporal-ui env), and multiple Python dev environments (3.10–3.13).

You and your AI assistants can combine these environments in two complementary ways:

  • Composition. Compose multiple modular environments into a single, declarative “stack” that resolves and locks everything up front. Composition is great when you need reproducible, shareable setups.
  • Layering. Layer environments one on top of another at runtime. Useful for adding tools/services when you need them in the moment: e.g., layering Python debugging tools on top of core Python dev tools.

In practice you’ll often use both: composition to create rich “stack” environments, and layering to add extras on top when you need them.
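
Here’s a rough sketch of each; the environment names are placeholders, and the exact [include] syntax is worth double-checking against the Flox docs:

# composition: declared in your environment's manifest.toml
[include]
environments = [
  { remote = "your-org/postgres" },
  { dir = "../redis" },
]

# layering: done at runtime, from an already-activated shell
$ flox activate -r your-org/python-dev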

What else? How about Flox-and-AI guides to get you started?