Anthropic's Claude Sonnet 4 is widely regarded as the best model for coding (as of July 2025). It's my first choice for most programming tasks, but it's far from perfect, and I often need to seek help from other models.

It's always good to seek a second opinion, especially with the non-deterministic nature of LLMs. For me, that usually means copy/pasting from Claude into ChatGPT or Gemini, then copy/pasting the response back into Claude Code. But this can disrupt your flow and lead to vibe-oscillations, or even worse, a complete collapse of your vibe-field.

Fortunately, Anthropic just released a feature that can help keep your vibes intact and flowing without interruption: Claude Code Hooks.

In this guide, I'll show you how to use Claude Code Hooks to ask Gemini CLI for help from directly within a Claude Code chat. No need to copy/pasta and break your flow. Just start your prompt with ask gemini: and Claude will call for backup in the background, then show you the reply.

This guide will cover:

  • Installing Claude Code and Gemini CLI
  • Running prompts from the terminal (non-interactive mode/headless)
  • Creating a Python script to call Gemini
  • Creating a Claude Code Hook to call the script

Let's get to it!

Installing Claude Code and Gemini CLI

Prerequisites

Make sure you have Node v20 or newer installed first.

Install Claude Code

I'll be using macOS for this guide, but the instructions should be similar for Windows or Linux. Start by installing Claude Code from the terminal:

npm install -g @anthropic-ai/claude-code

Then cd into a safe directory to run Claude Code. You can use an existing project or a blank folder, but avoid running it from your main user directory, where it would have access to everything.

Next, start it by running claude. The first time you run it in a folder, it should ask for permission to access it. Once approved, you can begin chatting with Claude Code.

This is known as interactive mode, where you can chat instead of typing terminal commands.

Try it out once, then type /exit to get back to the main terminal.

Installing Gemini CLI

Next, install the Gemini CLI from the terminal:

npm install -g @google/gemini-cli

Then type gemini to run it and test it out. Just like with Claude Code, you can chat with Gemini in the terminal. You can also drag in an image or other file to insert its path and reference it in your prompt, with Gemini or Claude Code. But what if you want to stay in the terminal where you can run other commands, and chain them together?

Interactive mode is for chatting with the LLM or generating code; you can't run terminal commands until you exit it. Luckily, there's a way to stay in the terminal and send a prompt without entering interactive mode. But first, exit Gemini by typing /quit.

Running prompts from the terminal (non-interactive mode/headless)

With Claude Code, you can send a prompt without leaving the terminal by passing it with the -p flag:

claude -p "What is an LLM?"

This keeps your terminal session active and allows you to pipe the LLM response to another command, like saving it to a file.

claude -p "what is an LLM?" > ~/Desktop/response.txt

You can do the same thing with Gemini:

gemini -p "What is an LLM?"

OK, so you can prompt either CLI tool while staying in the terminal, and you can chain commands together. This, combined with Claude Code Hooks, will let us call Gemini without ever leaving a Claude Code interactive session.

Creating a Python script to call Gemini

Next, we'll create a Python script to handle calling Gemini from Claude Code, parse the response, and return it to Claude.

Create a new Python script called gemini-context.py:

#!/usr/bin/env python3
import json
import sys
import subprocess

input_data = json.load(sys.stdin)
prompt = input_data.get("prompt", "")

# Check if user wants to ask Gemini
if prompt.lower().startswith("ask gemini:"):
    question = prompt[11:].strip()
    
    try:
        # Call Gemini
        result = subprocess.run(
            ['gemini', '-p', question],
            capture_output=True,
            text=True,
            check=True
        )
        
        # Add Gemini's response as context
        context = f"Gemini's response to '{question}':\n{result.stdout}"
        
        output = {
            "hookSpecificOutput": {
                "hookEventName": "UserPromptSubmit",
                "additionalContext": context
            }
        }
        print(json.dumps(output))
    except subprocess.CalledProcessError as e:
        print(f"Error calling Gemini: {e}", file=sys.stderr)
        sys.exit(1)

sys.exit(0)

Note the if block that checks for prompts starting with ask gemini:. And in the output, we're using UserPromptSubmit as the hook event. This means the hook runs as soon as you submit the prompt, before Claude begins generating a response. Running it first lets Gemini respond and have its answer included as additional context alongside the prompt to Claude.
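To make the data flow concrete, here's a minimal sketch of the round trip, reusing the field names from the script above. (The real stdin payload from Claude Code carries additional fields, like session metadata; the prompt field is the one we rely on.)

```python
import json

# Simulated input: what Claude Code pipes to the hook's stdin on prompt submit
incoming = json.loads('{"prompt": "ask gemini: What is an LLM?"}')

prompt = incoming.get("prompt", "")
question = prompt[len("ask gemini:"):].strip()  # "What is an LLM?"

# Simulated output: what the script prints to stdout for Claude Code to consume
outgoing = {
    "hookSpecificOutput": {
        "hookEventName": "UserPromptSubmit",
        "additionalContext": f"Gemini's response to '{question}':\n<Gemini's answer here>",
    }
}
print(json.dumps(outgoing, indent=2))
```

Anything the hook emits in additionalContext gets prepended to the conversation context, which is how Gemini's answer reaches Claude.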

Save the script in ~/.claude/hooks, then make it executable:

chmod +x ~/.claude/hooks/gemini-context.py

OK, the script is ready to run, but it's not connected to Claude yet. Next, we need to configure the hook.

Creating a Claude Code Hook to call the script

Navigate to ~/.claude and create a settings.json file if it doesn't exist yet. Set the content to:

{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "$HOME/.claude/hooks/gemini-context.py"
          }
        ]
      }
    ]
  }
}

Adjust the command path as needed for your script location. You can also save the settings file in different locations depending on where you want it to apply:

  • ~/.claude/settings.json - User settings
  • .claude/settings.json - Project settings
  • .claude/settings.local.json - Local project settings (not committed)
  • Enterprise managed policy settings (applied at team level)

Time to test it out!

Save the settings file, then restart claude from the terminal in interactive mode. Now try asking it a question, but start with ask gemini:.

> ask gemini: what model are you?

⏺ Based on the hook response, Gemini identifies itself as "a large language model, trained by Google" without specifying a particular model version or name.

Look at that! I didn't even mention using a Hook, but the prompt started with the right key phrase to trigger it. And Claude's reply began with "Based on the hook response...", showing that it called Gemini to answer the prompt.

Now whenever Claude is blocked, you can call for backup without breaking your flow state. Keep those vibes rolling by just starting the next prompt with ask gemini: without leaving Claude Code.

Conclusion

Claude Code Hooks are a great way to extend Anthropic's CLI tool. You can even use them to interact with other models through their own CLI tools, with a bit of Python to handle routing the response. This can save you from copy/pasting between multiple tools, allowing you to maintain your vibe-levels and code uninterrupted.

What's Next?

If you wanted to take it a step further, you could create a local MCP server so that Claude Code can decide to call Gemini whenever it wants, without needing a hook! Or create different keyword triggers to ask different models, or include a different system prompt for each.
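As a rough sketch of the multi-trigger idea, the single prefix check in the script above could become a lookup table. (Only the gemini command comes from this guide; the second entry is a purely illustrative placeholder.)

```python
# Hypothetical sketch: route prompts to different CLI tools by keyword trigger.
# "gemini -p" is from this guide; "some-other-cli --prompt" is a placeholder.
ROUTES = {
    "ask gemini:": ["gemini", "-p"],
    "ask other:": ["some-other-cli", "--prompt"],
}

def match_route(prompt: str):
    """Return (base_command, question) if a trigger matches, else (None, prompt)."""
    for trigger, base_cmd in ROUTES.items():
        if prompt.lower().startswith(trigger):
            return base_cmd, prompt[len(trigger):].strip()
    return None, prompt

cmd, question = match_route("ask gemini: What is an LLM?")
# cmd + [question] would then be passed to subprocess.run, as in the script above
```

Prompts with no matching trigger fall through untouched, so the hook stays invisible for normal Claude Code usage.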