One WisGate API key and a single cURL call is all it takes to generate a Nano Banana 2 image. Test the exact call interactively in AI Studio before integrating with your app.
AI Image API for Developers: What You Need to Know First
Before you write a single line of code, confirm these four critical facts about the Nano Banana 2 API on WisGate:
Model ID: gemini-3.1-flash-image-preview
API Endpoint: https://wisgate.ai/v1beta/models/gemini-3.1-flash-image-preview:generateContent
Authentication Header: x-goog-api-key: $WISDOM_GATE_KEY — not Authorization: Bearer. This is the Gemini-native approach, different from OpenAI-compatible endpoints.
Output Format: Images arrive as Base64-encoded inline data embedded in the JSON response, not as URLs. You decode them locally to save as PNG, JPEG, or other formats.
These details matter because they differ from what you might expect if you've worked with other image APIs. The WisGate endpoint is Gemini-native, the auth header is specific to Google's API structure, and the Base64 inline data requires an explicit decode step that many tutorials skip or assume you already know.
Pricing on WisGate is $0.058 per image, compared to Google's official rate of $0.068 per image, a savings of $0.010 per call. Generation time is a consistent 20 seconds across all supported resolutions, from 0.5K to 4K.
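To put those rates in context, here is a quick back-of-the-envelope sketch. The prices are the per-image rates quoted above; the `monthly_savings` helper is purely illustrative arithmetic, not part of any SDK:

```python
# Per-image rates quoted above, in USD.
WISGATE_PRICE = 0.058   # WisGate rate
GOOGLE_PRICE = 0.068    # Google's official rate

def monthly_savings(images_per_month: int) -> float:
    """Dollars saved per month by calling through WisGate instead of Google."""
    return round((GOOGLE_PRICE - WISGATE_PRICE) * images_per_month, 2)

print(monthly_savings(1000))  # 10.0 -> $10/month at 1,000 images
```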
Nano Banana 2 on WisGate: The Minimal Working cURL Call
Here is the complete, runnable cURL command to generate an image with Nano Banana 2 on WisGate:
```bash
curl -s -X POST \
  "https://wisgate.ai/v1beta/models/gemini-3.1-flash-image-preview:generateContent" \
  -H "x-goog-api-key: $WISDOM_GATE_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [{
      "parts": [{
        "text": "Da Vinci style anatomical sketch of a dissected Monarch butterfly. Detailed drawings of the head, wings, and legs on textured parchment with notes in English."
      }]
    }],
    "tools": [{"google_search": {}}],
    "generationConfig": {
      "responseModalities": ["TEXT", "IMAGE"],
      "imageConfig": {
        "aspectRatio": "1:1",
        "imageSize": "2K"
      }
    }
  }' | jq -r '.candidates[0].content.parts[] | select(.inlineData) | .inlineData.data' | head -1 | base64 --decode > butterfly.png
```
Here is what each piece of that command does:
- Endpoint: `https://wisgate.ai/v1beta/models/gemini-3.1-flash-image-preview:generateContent` — the WisGate Gemini-native API base URL with the model ID and the `:generateContent` action appended.
- Auth header: `x-goog-api-key: $WISDOM_GATE_KEY` — replace `$WISDOM_GATE_KEY` with your actual API key from https://wisgate.ai/hall/tokens. This header is required and must be exact.
- Prompt: The `text` field inside `contents[0].parts[0]` contains your image generation prompt. Replace the butterfly example with any prompt you want.
- responseModalities: Set to `["TEXT", "IMAGE"]` to receive both text and image in the response. This is critical: if you omit it or set it to `["IMAGE"]` only, the response structure changes and the extraction command may fail.
- imageConfig: Specifies `aspectRatio` (here `"1:1"` for square) and `imageSize` (here `"2K"` for 2K resolution). Both are optional; defaults are `"1:1"` and `"1K"` respectively.
- jq and base64 decode: The pipe chain extracts the Base64 data from the JSON response, decodes it, and saves it as `butterfly.png`. Without this decode step, you have only the Base64 string, not a usable image file.
Run this command in your terminal with your API key set as an environment variable:
```bash
export WISDOM_GATE_KEY="your-api-key-here"
```
Then paste the cURL command. In under 20 seconds, butterfly.png appears in your current directory. That's the full cycle: prompt → API call → Base64 response → decoded image file.
responseModalities: The Parameter That Trips Most Integrations
The responseModalities parameter controls what the API returns. This single setting is responsible for more integration failures than any other parameter because its behavior is not always obvious.
responseModalities: ["IMAGE"] — Returns image data only. The response contains only the image in Base64 format, no text. Use this if you want image-only output and want to skip text extraction logic.
responseModalities: ["TEXT", "IMAGE"] — Returns both text and image. The response includes a text part (often a caption or description) and an image part. Both are in the candidates[0].content.parts array. You must extract both separately if you need both.
Omitting responseModalities entirely — The API defaults to text-only output. No image is generated. This is the most common mistake: developers assume the API will return an image by default, but it does not. You must explicitly set responseModalities to include "IMAGE".
If you see a response with no image data, check responseModalities first. If it's missing or set to ["TEXT"] only, add or correct it to ["TEXT", "IMAGE"] or ["IMAGE"].
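Because this failure mode is silent (a successful response with no image), it can pay to validate the payload before sending it. A minimal sketch; `build_payload` and its defaults are illustrative helpers, not part of the WisGate API or any SDK:

```python
def build_payload(prompt: str, modalities=("TEXT", "IMAGE")) -> dict:
    """Build a generateContent payload, rejecting configs that cannot return an image.

    Illustrative helper only; it simply mirrors the JSON structure used above.
    """
    if "IMAGE" not in modalities:
        raise ValueError('responseModalities must include "IMAGE", '
                         "or the API returns text only and no image")
    return {
        "contents": [{"parts": [{"text": prompt}]}],
        "generationConfig": {"responseModalities": list(modalities)},
    }

payload = build_payload("Minimalist poster for a tech conference")
print(payload["generationConfig"]["responseModalities"])  # ['TEXT', 'IMAGE']
```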
Nano Banana 2 API Integration: imageConfig Parameter Reference
The imageConfig object inside generationConfig controls image dimensions and aspect ratio. Both parameters are optional, but understanding them prevents unexpected output sizes.
aspectRatio: Controls the shape of the generated image. Supported values are:
"1:1"— Square (default)"4:3"— Landscape"3:4"— Portrait"16:9"— Widescreen landscape"9:16"— Widescreen portrait"1:4"— Ultra-tall portrait (Nano Banana 2 extension)"4:1"— Ultra-wide landscape (Nano Banana 2 extension)"1:8"— Extreme tall portrait (Nano Banana 2 extension)"8:1"— Extreme wide landscape (Nano Banana 2 extension)
The last four ratios are unique to Nano Banana 2 and not available on standard Google Gemini. Use them for specialized layouts like vertical story formats or panoramic compositions.
imageSize: Controls the resolution of the Base64 output. Supported values are:
"0.5K"— 512×512 pixels (or proportional to aspect ratio)"1K"— 1024×1024 pixels (default)"2K"— 2048×2048 pixels"4K"— 4096×4096 pixels
Generation time is consistent at 20 seconds regardless of which imageSize you choose. Larger sizes produce higher-quality Base64 output but do not increase latency. Pricing is $0.058 per image across all sizes on WisGate, compared to $0.068 on Google's official API.
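The exact pixel dimensions for non-square ratios are not specified above beyond "proportional to aspect ratio". One plausible convention, and it is only an assumption here, is that the longer edge stays at the nominal size. A sketch under that assumption (verify against a real decoded image before relying on it):

```python
# Nominal edge sizes from the list above; the long-edge mapping is an assumption.
SIZES = {"0.5K": 512, "1K": 1024, "2K": 2048, "4K": 4096}

def approx_dimensions(image_size: str, aspect_ratio: str) -> tuple:
    """Estimate (width, height), assuming the longer edge equals the nominal size."""
    long_edge = SIZES[image_size]
    w, h = (int(n) for n in aspect_ratio.split(":"))
    if w >= h:
        return long_edge, round(long_edge * h / w)
    return round(long_edge * w / h), long_edge

print(approx_dimensions("4K", "16:9"))  # (4096, 2304)
print(approx_dimensions("2K", "9:16"))  # (1152, 2048)
```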
Here is a complete example with all parameters set:
```json
{
  "contents": [{
    "parts": [{
      "text": "Minimalist poster design for a tech conference, 16:9 aspect ratio, modern sans-serif typography, blue and white color scheme"
    }]
  }],
  "generationConfig": {
    "responseModalities": ["TEXT", "IMAGE"],
    "imageConfig": {
      "aspectRatio": "16:9",
      "imageSize": "4K"
    }
  }
}
```
This generates a 4K widescreen image (4096×2304 pixels at 16:9 ratio) in 20 seconds. The Base64 output is larger but decodes to a high-resolution PNG suitable for printing or large displays.
For most use cases, "1K" and "1:1" are sufficient. Use "2K" or "4K" only if you need high-resolution output for specific applications. Use the extended aspect ratios ("1:4", "4:1", "1:8", "8:1") only when your design requires extreme proportions.
Using responseModalities: ["TEXT", "IMAGE"] for Combined Output
When you set responseModalities to ["TEXT", "IMAGE"], the API returns both a text description and an image. Extracting both requires understanding the response structure.
The response looks like this:
```json
{
  "candidates": [{
    "content": {
      "parts": [
        {
          "text": "A detailed anatomical sketch of a Monarch butterfly..."
        },
        {
          "inlineData": {
            "mimeType": "image/png",
            "data": "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNk+M9QDwADhgGAWjR9awAAAABJRU5ErkJggg=="
          }
        }
      ]
    }
  }]
}
```
The parts array contains two objects: one with text and one with inlineData. To extract both:
Extract text:
```bash
curl ... | jq -r '.candidates[0].content.parts[] | select(.text) | .text'
```
Extract and decode image:
```bash
curl ... | jq -r '.candidates[0].content.parts[] | select(.inlineData) | .inlineData.data' | base64 --decode > output.png
```
Extract both in one command:
```bash
curl ... | jq -r '.candidates[0].content.parts[] | if .text then .text else .inlineData.data end' | while read line; do if [[ $line == iVBORw* ]]; then echo "$line" | base64 --decode > output.png; else echo "$line"; fi; done
```
The simpler approach is to extract them separately using two jq commands. The text part is useful for captions, alt text, or metadata. The image part is the Base64 data you decode to a file.
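In Python, the same two-step extraction reduces to one pass over the parts array. A sketch that walks a parsed response dict shaped like the example above (`sample` reuses the 1×1-pixel PNG shown there):

```python
import base64

def split_parts(data: dict):
    """Return (text, image_bytes) from a TEXT+IMAGE generateContent response."""
    text, image = None, None
    for part in data["candidates"][0]["content"]["parts"]:
        if "text" in part:
            text = part["text"]
        elif "inlineData" in part:
            image = base64.b64decode(part["inlineData"]["data"])
    return text, image

# Reuses the 1x1-pixel PNG from the sample response above.
sample = {"candidates": [{"content": {"parts": [
    {"text": "A detailed anatomical sketch of a Monarch butterfly..."},
    {"inlineData": {
        "mimeType": "image/png",
        "data": "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNk+M9QDwADhgGAWjR9awAAAABJRU5ErkJggg=="}},
]}}]}

caption, png = split_parts(sample)
print(caption)
print(png[:4])  # the PNG magic bytes
```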
If you only want the image and don't need the text, use responseModalities: ["IMAGE"] instead. This simplifies the response and removes the text part entirely, making extraction faster.
Nano Banana 2 API Integration: Python and Node.js Equivalents
If you prefer Python or Node.js over cURL, here are minimal working examples that replicate the cURL call.
Python:
```python
import requests
import base64
import os
import time

api_key = os.getenv("WISDOM_GATE_KEY")
endpoint = "https://wisgate.ai/v1beta/models/gemini-3.1-flash-image-preview:generateContent"

payload = {
    "contents": [{
        "parts": [{
            "text": "Da Vinci style anatomical sketch of a dissected Monarch butterfly. Detailed drawings of the head, wings, and legs on textured parchment with notes in English."
        }]
    }],
    "generationConfig": {
        "responseModalities": ["TEXT", "IMAGE"],
        "imageConfig": {
            "aspectRatio": "1:1",
            "imageSize": "2K"
        }
    }
}

headers = {
    "x-goog-api-key": api_key,
    "Content-Type": "application/json"
}

start = time.time()
response = requests.post(endpoint, json=payload, headers=headers, timeout=30)
response.raise_for_status()
data = response.json()

for part in data["candidates"][0]["content"]["parts"]:
    if "inlineData" in part:
        image_data = base64.b64decode(part["inlineData"]["data"])
        with open("butterfly.png", "wb") as f:
            f.write(image_data)

print(f"Image saved as butterfly.png in {time.time() - start:.1f}s")
```
Key points:
- Set `timeout=30` to allow 20 seconds for generation plus buffer.
- Extract the Base64 data from `inlineData.data` and decode it with `base64.b64decode()`.
- Write the decoded bytes directly to a file in binary mode (`"wb"`).
Node.js:
```javascript
const https = require("https");
const fs = require("fs");

const apiKey = process.env.WISDOM_GATE_KEY;
const endpoint = "https://wisgate.ai/v1beta/models/gemini-3.1-flash-image-preview:generateContent";

const payload = {
  contents: [{
    parts: [{
      text: "Da Vinci style anatomical sketch of a dissected Monarch butterfly. Detailed drawings of the head, wings, and legs on textured parchment with notes in English."
    }]
  }],
  generationConfig: {
    responseModalities: ["TEXT", "IMAGE"],
    imageConfig: {
      aspectRatio: "1:1",
      imageSize: "2K"
    }
  }
};

const options = {
  hostname: "wisgate.ai",
  path: "/v1beta/models/gemini-3.1-flash-image-preview:generateContent",
  method: "POST",
  headers: {
    "x-goog-api-key": apiKey,
    "Content-Type": "application/json"
  },
  timeout: 30000
};

const req = https.request(options, (res) => {
  let body = "";
  res.on("data", (chunk) => (body += chunk));
  res.on("end", () => {
    const data = JSON.parse(body);
    for (const part of data.candidates[0].content.parts) {
      if (part.inlineData) {
        const imageBuffer = Buffer.from(part.inlineData.data, "base64");
        fs.writeFileSync("butterfly.png", imageBuffer);
        console.log("Image saved as butterfly.png");
      }
    }
  });
});

req.on("error", (e) => console.error(e));
req.write(JSON.stringify(payload));
req.end();
```
Key points:
- Set `timeout: 30000` (30 seconds) to accommodate the 20-second generation time.
- Extract Base64 data and decode it with `Buffer.from(data, "base64")`.
- Write the buffer to a file using `fs.writeFileSync()`.
Both examples follow the same structure as the cURL call: set headers, send payload, extract Base64 image data, decode, and save to file. Adjust the prompt and imageConfig parameters as needed for your use case.
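One thing neither example handles is transient failure: rate limits and network blips surface as exceptions. A small retry wrapper, sketched in Python against a generic callable so it is not tied to requests or any other HTTP client (`flaky` is a stand-in for the real request function):

```python
import time

def with_retries(fn, attempts=3, base_delay=2.0):
    """Call fn(), retrying with exponential backoff on any exception."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: let the last error propagate
            time.sleep(base_delay * (2 ** attempt))  # 2s, 4s, ... by default

# Stand-in for the real request call: fails once, then succeeds.
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 2:
        raise ConnectionError("transient network error")
    return "ok"

print(with_retries(flaky, base_delay=0.01))  # ok
```

In production you would likely retry only on specific status codes (429, 5xx) rather than every exception, but the backoff shape is the same.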
Common Integration Errors and Fixes
These are the most frequent issues developers encounter when integrating Nano Banana 2 on WisGate:
| Error Symptom | Likely Cause | Fix |
|---|---|---|
| `401 Unauthorized` | Missing or invalid `x-goog-api-key` header | Verify your API key at https://wisgate.ai/hall/tokens and ensure the header is exactly `x-goog-api-key: $WISDOM_GATE_KEY` |
| Response contains no image data | `responseModalities` is missing or set to `["TEXT"]` only | Add `"responseModalities": ["TEXT", "IMAGE"]` or `["IMAGE"]` to `generationConfig` |
| `jq: error (at <stdin>:1): Cannot index null with string "candidates"` | API returned an error response, not a valid image response | Inspect the full response by piping it through `jq .` with no filter |
| Request times out after 10 seconds | Timeout is too low; generation takes 20 seconds | Increase the timeout to at least 30 seconds (Python: `timeout=30`, Node.js: `timeout: 30000`, cURL: `--max-time 30`) |
| `Invalid value for imageSize` | `imageSize` is set to an unsupported value like "3K" or "8K" | Use only `"0.5K"`, `"1K"`, `"2K"`, or `"4K"` |
| `Invalid value for aspectRatio` | `aspectRatio` is set to an unsupported value | Use only `"1:1"`, `"4:3"`, `"3:4"`, `"16:9"`, `"9:16"`, `"1:4"`, `"4:1"`, `"1:8"`, or `"8:1"` |
| Base64 decode produces corrupted image | Base64 string is truncated or incomplete | Ensure you're extracting the full `inlineData.data` value; use `head -1` in cURL to get only the first line if multiple are returned |
If you encounter an error not listed here, pipe the raw cURL response through `jq .` to see the full API response. The error message from WisGate will indicate which parameter or header is incorrect.
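The same triage can be automated before you ever reach the decode step. The sketch below assumes error bodies follow the standard Gemini `{"error": {"code", "message"}}` envelope, which is an assumption about WisGate's error format, not something confirmed above:

```python
def check_response(data: dict) -> str:
    """Return the Base64 image payload, or raise with a readable error."""
    if "error" in data:  # assumed error envelope: {"error": {"code", "message"}}
        err = data["error"]
        raise RuntimeError(f"API error {err.get('code')}: {err.get('message')}")
    parts = (data.get("candidates") or [{}])[0].get("content", {}).get("parts", [])
    for part in parts:
        if "inlineData" in part:
            return part["inlineData"]["data"]
    raise RuntimeError("No image in response; check responseModalities")

try:
    check_response({"error": {"code": 401, "message": "API key not valid"}})
except RuntimeError as e:
    print(e)  # API error 401: API key not valid
```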
Nano Banana 2 API Integration: What to Build Next
You now have everything needed to integrate Nano Banana 2 image generation into your application:
- The exact endpoint: `https://wisgate.ai/v1beta/models/gemini-3.1-flash-image-preview:generateContent`
- The authentication header: `x-goog-api-key: $WISDOM_GATE_KEY`
- The critical `responseModalities` parameter to ensure image output
- The `imageConfig` options for aspect ratio and resolution
- Working cURL, Python, and Node.js code
- Base64 decode steps to convert inline data to image files
- Common error fixes
The next step is to generate your WisGate API key at https://wisgate.ai/hall/tokens and run your first call. Test it interactively in AI Studio at https://wisgate.ai/studio/image before integrating into your codebase. Once you confirm the API works, adapt the code examples to your application's language and framework.
For pricing details, model specifications, and additional models available on WisGate, visit https://wisgate.ai/models. The working code, decode steps, and parameter references above cover everything else. Start building now.