Generate AI images with the Google Gemini model.
Next.js quickstart for generating and editing images with Google Gemini 2.0 Flash. Users can generate images from text prompts or edit existing images through natural-language instructions, and the app maintains conversation context for iterative refinements. Try out the hosted demo at Hugging Face Spaces.
https://github.com/user-attachments/assets/8ffa5ee3-1b06-46a9-8b5e-761edb0e00c3
Get your GEMINI_API_KEY from Google AI Studio and start building.
How It Works:
For developers who want to call the Gemini API directly, you can use the Google Generative AI JavaScript SDK:
```javascript
const { GoogleGenerativeAI } = require("@google/generative-ai");
const fs = require("fs");

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

async function generateImage() {
  const contents =
    "Hi, can you create a 3d rendered image of a pig " +
    "with wings and a top hat flying over a happy " +
    "futuristic scifi city with lots of greenery?";

  // Set responseModalities to include "Image" so the model can generate images
  const model = genAI.getGenerativeModel({
    model: "gemini-2.0-flash-exp",
    generationConfig: { responseModalities: ["Text", "Image"] },
  });

  try {
    const response = await model.generateContent(contents);
    for (const part of response.response.candidates[0].content.parts) {
      // Based on the part type, either show the text or save the image
      if (part.text) {
        console.log(part.text);
      } else if (part.inlineData) {
        const imageData = part.inlineData.data;
        const buffer = Buffer.from(imageData, "base64");
        fs.writeFileSync("gemini-native-image.png", buffer);
        console.log("Image saved as gemini-native-image.png");
      }
    }
  } catch (error) {
    console.error("Error generating content:", error);
  }
}
```
First, set up your environment variables:
cp .env.example .env
Add your Google AI Studio API key to the .env file:
GEMINI_API_KEY=your_google_api_key
Then, install dependencies and run the development server:
npm install
npm run dev
Open http://localhost:3000 with your browser to see the application.
To run with Docker instead, build the image and start a container:
docker build -t nextjs-gemini-image-editing .
docker run -p 3000:3000 -e GEMINI_API_KEY=your_google_api_key nextjs-gemini-image-editing
Or using an environment file:
# Run container with env file
docker run -p 3000:3000 --env-file .env nextjs-gemini-image-editing
Open http://localhost:3000 with your browser to see the application.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.