GC-201b · Module 2
Database Access Patterns
3 min read
Database MCP servers give Gemini CLI direct access to your data layer. The official @modelcontextprotocol/server-postgres handles PostgreSQL, and the reference mcp-server-sqlite covers SQLite; community servers cover MySQL, MongoDB, and Redis. Once configured, Gemini can query your database, inspect schemas, generate migration files, and debug data issues — all without leaving the terminal session.
The practical workflow changes when your AI agent can see your data. Instead of describing a bug verbally — "the user count seems wrong on the dashboard" — you can say "query the users table and compare the count to what the /api/users endpoint returns." Gemini runs the database query via the MCP server, fetches the API response via web_fetch or run_shell_command, and identifies the discrepancy. Data-driven debugging replaces guesswork.
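A minimal sketch of that cross-check, with a hypothetical `users` table and a stand-in API count (in practice, the query runs through the database MCP server and the API count comes from web_fetch or run_shell_command):

```python
import sqlite3

# Hypothetical data layer: an in-memory SQLite table standing in for
# the real users table reached via the MCP server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [("a@example.com",), ("b@example.com",), ("c@example.com",)],
)

db_count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
api_count = 2  # stand-in for what a hypothetical /api/users endpoint reports

# The actual debugging step: compare the two sources and surface the gap.
if db_count != api_count:
    print(f"Discrepancy: DB has {db_count} users, API reports {api_count}")
```

The point is not the script itself but the shape of the workflow: two independent sources of truth, one concrete comparison.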
```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "${DATABASE_URL}"
      ]
    },
    "sqlite": {
      "command": "uvx",
      "args": ["mcp-server-sqlite", "--db-path", "./data/app.sqlite"]
    }
  }
}
```
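Gemini CLI resolves `${DATABASE_URL}` from the environment when it loads settings, so the connection string never has to live in the config file. A sketch of the launch step — the credentials and host below are placeholders:

```shell
# Placeholder connection string; use a read-only role in practice.
export DATABASE_URL="postgresql://readonly_user:secret@localhost:5432/app_dev"
gemini
```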
Do This
- Use read-only database credentials for MCP server connections
- Connect to development or staging databases for interactive debugging
- Let Gemini query schemas to generate accurate migration files and model definitions
Avoid This
- Connect to production with write-enabled credentials through an MCP server
- Use the database MCP server as a replacement for proper migration tooling
- Ignore connection security because "it is just a local dev database"
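A read-only role is straightforward to set up. The sketch below assumes PostgreSQL; the role, database, and schema names are placeholders. Point DATABASE_URL at this role and the MCP server can SELECT but never INSERT, UPDATE, or DELETE:

```sql
-- Hypothetical read-only role for the MCP server connection.
CREATE ROLE mcp_readonly LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE app_dev TO mcp_readonly;
GRANT USAGE ON SCHEMA public TO mcp_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO mcp_readonly;
-- Cover tables created after the role, so new migrations stay readable.
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT ON TABLES TO mcp_readonly;
```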