GC-201c · Module 2
Multi-Turn Automation
3 min read
Some automation tasks require multiple turns of interaction — analyze, then decide, then execute, then validate. Headless mode with -p handles single-turn tasks. For multi-turn automation, you have two approaches: chained headless calls (each invocation passes the output of the previous as input) or the Jules extension for fully autonomous multi-step workflows.
Chained headless calls work like Unix pipes for AI. The first call analyzes the problem and outputs a plan as JSON. The second call takes that plan and generates code. The third call takes the code and generates tests. Each step is a separate gemini -p invocation, and the output of one feeds into the input of the next. This approach gives you checkpoints between steps — you can inspect intermediate results, add human judgment, or abort if the analysis is wrong.
#!/bin/bash
# Multi-turn automation: Analyze → Plan → Execute → Validate
# Step 1: Analyze the problem
ANALYSIS=$(gemini -p "Analyze the failing test in src/api/auth.test.ts. Identify the root cause. Output JSON: {cause, affected_files[], fix_approach}" --output-format json)
# Step 2: Generate the fix based on analysis
FIX=$(gemini -p "Based on this analysis, generate the code fix. Output only the file patches.

Analysis: $ANALYSIS")
# Step 3: Validate the fix
VALIDATION=$(gemini -p "Review this proposed fix for the auth test failure. Does it address the root cause without introducing new issues? Output JSON: {approved: true|false, concerns: []}

Fix: $FIX
Original Analysis: $ANALYSIS" --output-format json)
# Step 4: Apply or flag for human review
APPROVED=$(echo "$VALIDATION" | jq -r '.response.approved')
if [ "$APPROVED" = "true" ]; then
  echo "Fix approved — applying..."
  # Apply the fix
else
  echo "Fix flagged for human review"
  echo "$VALIDATION" | jq '.response.concerns'
fi
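The "validate between steps" rule can be enforced mechanically before any model judgment is involved. Here is a minimal sketch of a structural checkpoint: a helper that verifies a step's JSON output actually contains the fields the next step depends on, halting the chain instead of piping malformed output forward. The `require_field` function name and the hardcoded sample analysis are illustrative, not part of the Gemini CLI.

```shell
#!/bin/bash
# Checkpoint between chained steps: abort early if a step's JSON
# output is missing a field the next step depends on.
require_field() {
  local json="$1" field="$2"
  if ! echo "$json" | jq -e "has(\"$field\")" > /dev/null 2>&1; then
    echo "checkpoint failed: missing '$field' — stopping chain" >&2
    return 1
  fi
}

# Hardcoded stand-in for a real analysis step's output:
ANALYSIS='{"cause": "stale mock", "affected_files": ["src/api/auth.test.ts"], "fix_approach": "refresh mock"}'

require_field "$ANALYSIS" "fix_approach" && echo "ok: proceeding to fix step"
require_field "$ANALYSIS" "patch" || echo "chain halted before fix step"
```

A check like this is cheap insurance: if the model returns prose instead of JSON, the chain stops at the boundary rather than feeding garbage into the next prompt.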
Do This
- Use chained headless calls for multi-step workflows that benefit from checkpoints
- Use Jules for well-defined, self-contained tasks suitable for full delegation
- Validate AI output between steps — do not blindly pipe analysis into execution
Avoid This
- Build complex multi-step automation without intermediate validation
- Use Jules for ambiguous tasks that require human judgment at multiple points
- Chain 10 headless calls without checking intermediate results
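One way to avoid chaining many calls blind is to persist every intermediate result to disk so each checkpoint can be inspected, diffed, or resumed from. The sketch below shows the structure only; `run_model` is a stub standing in for `gemini -p "$prompt"`, and the `run_chain` helper and `/tmp/chain-demo` path are illustrative, not part of the Gemini CLI.

```shell
#!/bin/bash
# Run a chain of prompts, saving each step's output to a numbered
# file so intermediate results can be checked before continuing.
run_model() {
  # Stub for `gemini -p "$1"` so the structure runs anywhere.
  echo "output for: $1"
}

run_chain() {
  local workdir="$1"; shift
  mkdir -p "$workdir"
  local prev="" i=1
  for prompt in "$@"; do
    local out_file="$workdir/step-$i.txt"
    run_model "$prompt"$'\n\n'"Previous step: $prev" > "$out_file"
    prev=$(cat "$out_file")
    echo "checkpoint: $out_file"   # inspect or diff before the next step
    i=$((i + 1))
  done
}

run_chain /tmp/chain-demo "Analyze the failure" "Generate a fix"
```

With real `gemini` calls substituted in, the saved files double as an audit trail: if step 7 of 10 goes wrong, you can see exactly which intermediate output derailed the chain and restart from there.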