GPT Image 2 1.0

Drop a prompt below or pick a starting point. Every submission runs live against the model.


Runs against fal-ai/gpt-image-2 via server proxy

Integration

Ship GPT Image 2 into your own stack.

The playground above calls the same endpoint you'll use in production. Below are three canonical code paths and the full input reference; copy whichever matches your stack.

Endpoint: fal-ai/gpt-image-2 (image)
TypeScript · @fal-ai/client
TS
import { fal } from "@fal-ai/client";

fal.config({ credentials: process.env.FAL_KEY });

const { data } = await fal.subscribe("fal-ai/gpt-image-2", {
  input: {
    prompt: "A dense East-Asian bodega storefront at dusk, hand-painted signs with the exact...",
    safety_tolerance: 2,
    size: "1024x1024",
    quality: "medium",
    num_images: 1,
  },
  logs: true,
});

console.log(data);
Python · fal-client
PYTHON
import fal_client

result = fal_client.subscribe(
    "fal-ai/gpt-image-2",
    arguments={
        "prompt": "A dense East-Asian bodega storefront at dusk, hand-painted signs with the exact...",
        "safety_tolerance": 2,
        "size": "1024x1024",
        "quality": "medium",
        "num_images": 1,
    },
    with_logs=True,
)

print(result)
HTTP · queue.fal.run
BASH
curl -X POST "https://queue.fal.run/fal-ai/gpt-image-2" \
  -H "Authorization: Key $FAL_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "A dense East-Asian bodega storefront at dusk, hand-painted signs with the exact...",
    "safety_tolerance": 2,
    "size": "1024x1024",
    "quality": "medium",
    "num_images": 1
  }'
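Note that queue.fal.run is an asynchronous queue: the POST above enqueues the request rather than returning the image directly. A minimal polling sketch in Python, assuming the queue response carries `status_url` and `response_url` fields and an `IN_QUEUE` → `IN_PROGRESS` → `COMPLETED` status progression (these field and status names follow fal's queue conventions and are not confirmed by this page):

```python
import json
import os
import time
import urllib.request

ENDPOINT = "https://queue.fal.run/fal-ai/gpt-image-2"


def is_terminal(status: str) -> bool:
    # Assumed status progression: IN_QUEUE -> IN_PROGRESS -> COMPLETED.
    return status == "COMPLETED"


def _get_json(url, headers, body=None):
    # Small wrapper: POST when a body is given, GET otherwise; parse JSON.
    req = urllib.request.Request(url, data=body, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


def generate(payload, poll_seconds=1.0):
    headers = {
        "Authorization": f"Key {os.environ['FAL_KEY']}",
        "Content-Type": "application/json",
    }
    # Submission returns a queued request; status_url / response_url are
    # assumed field names in the queue's JSON response.
    queued = _get_json(ENDPOINT, headers, json.dumps(payload).encode())
    while not is_terminal(_get_json(queued["status_url"], headers).get("status", "")):
        time.sleep(poll_seconds)
    return _get_json(queued["response_url"], headers)


if __name__ == "__main__":
    print(generate({
        "prompt": "A dense East-Asian bodega storefront at dusk",
        "size": "1024x1024",
        "quality": "medium",
        "num_images": 1,
    }))
```

The `fal.subscribe` / `fal_client.subscribe` calls in the snippets above handle this submit-and-poll loop for you; reach for raw HTTP only when a client library isn't an option.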

Input parameters

| Key | Kind | Default | Options / notes |
| --- | --- | --- | --- |
| prompt | text (required) | — | Primary generation input |
| size | select | 1024x1024 | Square 1024, Landscape 1536, Portrait 1536, Square 2048, Wide 16:9 (2048), Tall 9:16 (2048) |
| quality | select | medium | Low, Medium, High |
| num_images | select | 1 | 1 image, 2 images, 4 images |
| safety_tolerance | default | 2 | Sent with every request unless overridden |
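The reference above can be condensed into a small payload builder — a hedged sketch, not part of any official fal client. Parameter names come from the code examples above; the quality values are assumed to be lowercase (matching the `medium` default), and the exact enum strings for the non-default sizes aren't shown on this page, so only the default size is supplied unvalidated.

```python
# Hypothetical helper mirroring the input reference — not official client code.

ALLOWED_QUALITY = {"low", "medium", "high"}  # Options column, assumed lowercase
ALLOWED_NUM_IMAGES = {1, 2, 4}               # "1 image, 2 images, 4 images"


def build_payload(prompt, *, size="1024x1024", quality="medium",
                  num_images=1, safety_tolerance=2):
    if not prompt:
        raise ValueError("prompt is required")
    if quality not in ALLOWED_QUALITY:
        raise ValueError(f"quality must be one of {sorted(ALLOWED_QUALITY)}")
    if num_images not in ALLOWED_NUM_IMAGES:
        raise ValueError("num_images must be 1, 2, or 4")
    return {
        "prompt": prompt,
        "size": size,
        "quality": quality,
        "num_images": num_images,
        # Sent with every request unless overridden (see the table above).
        "safety_tolerance": safety_tolerance,
    }
```

The returned dict drops straight into `input` (TypeScript), `arguments` (Python), or the curl `-d` body.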

Full, authoritative schema at fal.ai/models/fal-ai/gpt-image-2/llms.txt.
