Imagine you have an almost perfect image
Imagine you just generated an image with AI. A portrait in a picturesque garden, warm evening light, everything feels right. But the background doesn't quite fit. Or you think: what would this look like as a watercolor? Or you need the same subject in three different moods because you want to run A/B tests on your website. A year ago, you would have had to open Photoshop, spend hours editing layers, creating masks, and applying effects. Or you would have generated three completely new images and hoped that at least one would work.
The good news: today, all of this takes just a few clicks. AI-powered image editing has advanced at a breathtaking pace. You can modify existing images, extend them, transfer them into new styles, swap backgrounds, improve quality, and create countless variations. All without ever having learned Photoshop.
In this article, you'll learn the most important techniques of AI image editing. By the end, you'll know how to turn a single image into an entire portfolio. And since this is the final article in Module 7, we'll also look back at everything you've learned about AI and visual content in this module.
Modifying and extending existing images: Inpainting and Outpainting
Perhaps the most exciting feature of AI image editing is called inpainting. You mark an area within an existing image and describe what the AI should place there instead. You essentially paint a hole in your image and tell the AI: "Fill this with something new."
A concrete example: you have an AI-generated image of a cafe. The table in the foreground is empty, but you'd like to see a cup of coffee and an open book there. With inpainting, you mark the table, enter the prompt "a steaming cup of coffee and an open book on the wooden table," and the AI adds exactly that. It pays attention to the perspective, lighting, and style of the surrounding image so that the addition looks seamless.
Inpainting is excellent for removing or replacing distracting elements. An object in the background is bothersome? Mark it and have it replaced with something more fitting. A person has closed eyes in a photo? Mark and correct. A logo or watermark needs to go? Inpainting can fill the area as if nothing was ever there.
The counterpart to inpainting is outpainting. Here you extend an existing image beyond its original boundaries. Imagine you have a portrait in vertical format and need it in landscape format for a website header. With outpainting, the AI examines what's happening at the edges of the image and paints logically outward. The result: your image gets larger without you having to paint a single pixel yourself.
Outpainting is particularly useful when you need images in different formats. An Instagram post as a square, a story in portrait format, a website banner in extreme landscape format. Instead of generating three different images, you take one good image and extend it in different directions. The AI ensures that the extension matches the original style.
Both inpainting and outpainting are now available in many tools. DALL-E offers these features directly in ChatGPT. Midjourney has its own editing capabilities. And Stable Diffusion offers extremely flexible inpainting options through interfaces like ComfyUI or Automatic1111. Canva and Adobe Firefly have also integrated such features. You're not locked into a single tool.
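Under the hood, most of these tools represent your marked area as a simple black-and-white mask image, and outpainting as a larger canvas with empty space around the original. Here's a minimal sketch using Pillow (the sizes and the marked "table" region are arbitrary example values, not anything a specific tool prescribes); the convention in most inpainting interfaces is that white pixels get regenerated and black pixels stay untouched:

```python
from PIL import Image, ImageDraw

# Inpainting: a mask image marks which pixels the AI should repaint.
# Common convention: white = regenerate, black = keep as-is.
image = Image.new("RGB", (512, 512), "gray")   # stand-in for your photo
mask = Image.new("L", image.size, 0)           # all-black mask: keep everything
ImageDraw.Draw(mask).rectangle((150, 300, 360, 460), fill=255)  # mark the "table"

# Outpainting: paste the original onto a larger canvas; the empty area
# is what the AI fills in. Here: square (512x512) -> landscape (912x512).
canvas = Image.new("RGB", (912, 512), "black")
canvas.paste(image, ((912 - 512) // 2, 0))     # center the original horizontally

print(canvas.size)                 # (912, 512)
print(mask.getpixel((200, 350)))   # 255: this pixel will be regenerated
```

The mask (or the enlarged canvas) is then handed to the model together with your prompt; interfaces like ComfyUI or Automatic1111 do essentially this behind their drawing tools.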
Transferring styles: When a photo becomes a painting
One of the most fascinating capabilities of AI image editing is style transfer. You take an existing image and apply a completely different visual style to it. Your vacation photo suddenly becomes an impressionist painting. Your product image transforms into a pencil sketch. Your selfie looks like a painting by Frida Kahlo.
Let me explain with an analogy. Imagine you hand a painter a photo and say: "Paint this, but in your own style." The painter keeps the subject, meaning the composition, shapes, and arrangement, but changes everything else: the color palette, brushstrokes, texture, and atmosphere. That's exactly what AI does with style transfer, only in seconds instead of hours.
There are several ways to transfer a style:
Text-based style transfer: You upload an image and describe the desired style in your prompt. For example: "Convert this photo into a watercolor in the style of Claude Monet" or "Turn this portrait into a pixel art illustration." This works in DALL-E, Midjourney, and many other tools. Results vary depending on the tool and prompt, but the basic idea is always the same.
Image-to-image transfer: Here you provide not just text but also a reference image for the desired style. You show the AI an image in Van Gogh's style and say: "My photo should look like this." This method is often more precise because the AI can "read" the style directly from the reference image rather than relying on a text description. Tools like Stable Diffusion with ControlNet or Midjourney with the style reference feature offer this capability.
Preset style filters: Some tools offer ready-made style options, similar to Instagram filters but significantly more powerful. You select "Anime," "Oil Painting," "Photorealism," or "3D Render" from a list, and the AI adjusts your image accordingly. Canva, Adobe Firefly, and many mobile apps use this approach. It's the easiest to use but offers less creative control.
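The text-based approach scales nicely when you want one subject in many styles. A small sketch in plain Python (the style names and wordings below are my own illustrative examples, not canonical terms any tool requires):

```python
# Combine one subject description with interchangeable style phrases.
# These phrasings are examples; adapt them to your tool and taste.
styles = {
    "watercolor": "a soft watercolor painting with visible paper texture",
    "pixel art": "a retro pixel-art illustration in 16-bit style",
    "monet": "an impressionist oil painting in the style of Claude Monet",
}

def style_prompt(subject: str, style: str) -> str:
    """Build a style-transfer prompt for a given subject and style key."""
    return f"Turn this photo of {subject} into {styles[style]}."

prompts = [style_prompt("a portrait in a garden", s) for s in styles]
print(prompts[0])
```

Generating the prompts in one place like this also helps with the consistency point below: every image in a series gets the exact same style wording.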
Style transfer isn't just a creative playground. It also has solid practical applications. Imagine you create content for social media and want a consistent visual identity. All your images should have a certain "look," regardless of whether they're photos, illustrations, or graphics. With style transfer, you can bring diverse source images into a consistent style. This not only saves time but also ensures a professional and recognizable appearance.
A tip: when experimenting with style transfer, try extreme style changes first. Turn a photo into a comic, transform a landscape into an abstract artwork. With extreme differences, you can most clearly see what the AI can do and where its limits lie. Finer adjustments, like slightly shifting a photo toward a "cinematic film look," require more practice and more precise prompts.
Removing and changing backgrounds
The background of an image is often what separates "okay" from "wow." A product photo against a clean, professional background instantly looks more premium than the same product in front of a messy kitchen. A portrait with an atmospheric bokeh background looks professional, while the same face against a white wall looks boring.
AI has revolutionized background editing. In the past, you needed Photoshop and a lot of patience to cleanly separate an object or person from the background. Today, AI does it in seconds, and the results are often surprisingly good. Even with challenging subjects like hair, fine structures, or transparent objects, modern AI tools deliver impressive results.
There are two basic operations:
Background removal: The AI automatically recognizes the main subject in the foreground and removes everything behind it. The result is an image with a transparent background that you can then place on any surface. Tools like remove.bg, Canva, or Adobe Express do this with a single click. DALL-E and many image editing apps also offer this feature.
Background replacement: Here the AI goes one step further. Instead of simply deleting the background, it replaces it with something new. You describe in the prompt what the new background should be. "Place the person in front of an autumn forest landscape" or "Replace the background with a modern office with large windows." The AI adjusts lighting and color temperature so that the subject looks natural in the new background.
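Technically, a "transparent background" means the image gains a fourth channel, alpha, where a value of 0 makes a pixel fully invisible. What a removal tool hands back can be sketched with Pillow; note that the mask here is drawn by hand purely for illustration, whereas a real tool computes it with a segmentation model:

```python
from PIL import Image, ImageDraw

# Stand-in photo: the "subject" is a circle in the middle of the frame.
photo = Image.new("RGB", (400, 400), "lightblue")
ImageDraw.Draw(photo).ellipse((100, 100, 300, 300), fill="brown")

# A real tool predicts this mask with a segmentation model; here it is
# hand-drawn: white = subject (keep), black = background (make transparent).
alpha = Image.new("L", photo.size, 0)
ImageDraw.Draw(alpha).ellipse((100, 100, 300, 300), fill=255)

cutout = photo.convert("RGBA")
cutout.putalpha(alpha)  # background pixels become fully transparent

print(cutout.mode)                # RGBA
print(cutout.getpixel((10, 10)))  # background pixel: last value (alpha) is 0
```

The resulting cutout can then be composited onto any new background, which is exactly what "background replacement" automates, plus the lighting and color adjustments described above.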
The applications are diverse. For social media, you can place the same portrait against different backgrounds to create a series. For online shops, you can put product photos against uniform, professional backgrounds. For presentations, you can adjust images to match your slide colors. And for personal use, you can finally banish that construction fence from your vacation photo.
A practical tip: when replacing backgrounds, pay attention to the lighting direction. If your subject is lit from the left, the new background should also suggest a light source from the left. If these don't match, the result looks unnatural, even when the technical quality is good. Most AI tools try to compensate automatically, but in tricky cases, a corresponding hint in the prompt helps.
Upscaling and quality improvement: More details from fewer pixels
You have an image that's perfect in content, but the resolution isn't high enough? Perhaps an older photo, a screenshot, or an AI-generated image you want to print in high quality? This is exactly where AI upscaling comes in.
Conventional image enlargement works like this: the software takes the existing pixels and simply makes them bigger. The result: a mushy, blurry image. Anyone who has ever stretched a small image in a word processor knows what this looks like.
AI upscaling works fundamentally differently. The AI analyzes the existing image and "invents" new details that plausibly match what it sees. A blurry edge becomes a sharp contour. A fuzzy face gains recognizable features. A mushy background transforms into a detailed landscape. The AI has learned from millions of images how details should look in various contexts and applies this knowledge to your image.
The results can be astounding. An image at 512 by 512 pixels can be upscaled to 2048 by 2048, and the additional details look as if they were always there. Of course, there are limits: the AI can't conjure information that truly isn't present. But for most use cases, the results are impressively good.
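You can see the conventional half of this comparison directly with Pillow: nearest-neighbor enlargement just duplicates each pixel into a flat block, with no new detail. (The AI half can't be shown this briefly, since it needs a trained model; this sketch only demonstrates what AI upscaling improves upon.)

```python
from PIL import Image

# A tiny 2x2 "image": four distinct pixels.
small = Image.new("RGB", (2, 2))
small.putdata([(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0)])

# Conventional nearest-neighbor enlargement: each source pixel is simply
# duplicated into a 100x100 block -- bigger, but no new detail appears.
big = small.resize((200, 200), Image.NEAREST)

print(big.size)                                        # (200, 200)
print(big.getpixel((0, 0)) == big.getpixel((99, 99)))  # True: one flat block
```

An AI upscaler such as Real-ESRGAN would instead synthesize plausible texture inside those blocks, which is why its results look sharp rather than mushy.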
There are various tools for AI upscaling:
Built-in upscalers: Midjourney offers an integrated upscaler that brings your generated images to higher resolutions. Standalone desktop apps like Topaz Gigapixel AI or Upscayl (free and open source) also deliver excellent results.
Specialized web tools: Services like Let's Enhance, Icons8 Upscaler, or Bigjpg specialize in exactly this task. You upload an image, select the enlargement factor, and get a high-resolution result in seconds.
Local solutions: If you work with Stable Diffusion, there are various upscaling models you can run directly on your computer. ESRGAN and Real-ESRGAN are particularly popular and deliver outstanding quality.
Beyond pure upscaling, there are also AI tools for general quality improvement. They can remove noise, correct blur, optimize colors, and adjust contrast. Especially with older photos or scans, this can make an enormous difference. A faded family photo from the 1980s can gain new vibrancy, as if it were taken yesterday.
An important note: AI upscaling adds details that the AI considers plausible. This means the added details don't necessarily correspond to reality. For creative projects, that's no problem. But if you're enlarging a documentary photo, you should be aware that the "new" details are the AI's interpretation, not actual reality.
Variations of a subject: From one idea to many possibilities
One of the most powerful techniques in AI image editing is creating variations. You take an image you fundamentally like and have the AI create modifications of it. The basic subject remains, but the AI changes certain aspects: colors, mood, style, details, or composition.
Why is this so useful? Because in practice, you rarely need just a single image. You need variations. For A/B tests on your website. For different social media platforms with different moods. For presentations where you want to show different options. Or simply because you're not yet sure which version you like best.
There are various approaches to creating variations:
Automatic variations: Many image AIs offer a "variations" feature. You click on a generated image and say: "Create variations of this." The AI keeps the basic structure and changes details. This is the simplest approach and perfect when you only need slight modifications.
Prompt-based variations: Here you deliberately change individual elements in the prompt. Your original image shows a mountain landscape in summer? Change "summer" to "winter" and you get the same subject in a completely different season. Change "photorealistic" to "watercolor" and you get the same mountain in an entirely new style. This method gives you the most control over what changes and what stays the same.
Parameter variations: In advanced tools like Stable Diffusion, you can adjust technical parameters to generate variations. You change the seed value (the random starting point) and get different results with the same prompt. Or you adjust the strength of the change: a low value produces subtle modifications, a high value produces dramatic transformations.
Image-to-image with different strengths: In image-to-image generation, you provide a source image and describe what should change. Depending on the "denoising strength" setting (how much freedom you give the AI), results range from barely noticeable changes to completely new interpretations of the same subject.
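The seed value mentioned above is simply the starting point of a pseudo-random number generator: same seed, same starting noise, same image. The principle can be shown with Python's standard random module; the "model" below is a deliberately fake stand-in, since real pipelines seed their noise tensors the same way:

```python
import random

def fake_generate(prompt: str, seed: int) -> list[float]:
    """Stand-in for an image model: the 'image' is just random numbers.
    Real pipelines seed their initial noise with the same mechanism."""
    rng = random.Random(seed)
    return [round(rng.random(), 3) for _ in range(4)]

a = fake_generate("mountain landscape", seed=42)
b = fake_generate("mountain landscape", seed=42)
c = fake_generate("mountain landscape", seed=7)

print(a == b)  # True: the same seed reproduces the exact same result
print(a == c)  # False: a new seed gives a variation of the same prompt
```

This is why noting down the seed of an image you like is worth the habit: it lets you return to that exact result later and vary it deliberately.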
A practical example: you're planning an Instagram feed and want all images to match while still looking different. You generate a base subject, say a minimalist landscape. Then you create variations in different color palettes: a warm version in orange tones, a cool one in blue and gray, a fresh one in green tones. All images share the same fundamental character but look individual. Your feed looks as if a professional art director worked on it.
Variations are also an excellent learning tool. If you want to understand how different prompt elements affect the result, create systematic variations: change only one element at a time and observe what happens. After a few rounds, you'll develop an intuitive understanding of how image AIs "think" and which descriptions produce which effects.
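The "change only one element at a time" approach can even be scripted as a small prompt grid. A sketch in plain Python (the slot names and wordings are illustrative examples, not a fixed scheme):

```python
from itertools import product

# A prompt template with interchangeable slots. To isolate one element's
# effect, compare two prompts that differ in exactly one slot.
template = "A minimalist {subject}, {season}, {style}"
slots = {
    "subject": ["mountain landscape", "forest lake"],
    "season": ["in summer", "in winter"],
    "style": ["photorealistic", "as a watercolor"],
}

prompts = [
    template.format(subject=su, season=se, style=st)
    for su, se, st in product(slots["subject"], slots["season"], slots["style"])
]

print(len(prompts))  # 8 combinations from 2 x 2 x 2 options
print(prompts[0])    # A minimalist mountain landscape, in summer, photorealistic
```

Running such a grid once and comparing neighboring results side by side makes each element's contribution visible much faster than changing prompts by hand.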
The workflow: From raw image to finished result
So far, we've looked at individual techniques. In practice, you often combine them into a workflow. Let me show you what a typical editing process looks like, step by step:
Step 1: Generation. You generate a source image with the image AI of your choice. You use the prompt generator at optiprompt.io in the Images category to create a good prompt. The result is your raw material.
Step 2: Variations. You generate several variations of the image and select the best one. Maybe you like the composition of one variation but the color mood of another. Some tools allow you to combine elements from different variations.
Step 3: Inpainting. You correct individual areas of the image that aren't quite right. A distorted detail? Mark it and regenerate. A missing object? Add it via prompt. In this step, you refine the image with precision.
Step 4: Outpainting (if needed). You extend the image to the desired format. From square to landscape? From portrait to panorama? Outpainting makes it possible without having to regenerate the subject.
Step 5: Style transfer (optional). If you need a specific visual style, you apply it now. From photorealistic to illustrated, from modern to vintage. This step is especially relevant when you need a consistent visual identity across multiple images.
Step 6: Upscaling. You scale the finished image up to the required resolution. Print requires higher resolutions than social media. Upscaling always comes last because you're working on the final image here.
Step 7: Final touches. One last look at contrast, brightness, and colors. Traditional image editing tools can help here too, not everything has to be done via AI. Sometimes a small saturation adjustment or a slight crop is all it takes to make the image perfect.
You don't need to go through all seven steps for every image. Often two or three are enough. But it helps to know the complete workflow so you're aware of your options. The more often you go through this process, the faster and more intuitive it becomes.
Your exercise: Modifying one image into three variations
Now it's time to get practical. In this exercise, you'll generate an image and create three distinctly different variations of it. Use the prompt generator at optiprompt.io with the Images category, and try all three of its modes: structured, compact, and creative.
Here's how to proceed:
Step 1: Generate a source image. Open the prompt generator at optiprompt.io and select the Images category. Describe a subject you like, for example: "A cozy reading nook with an armchair next to a large window, raindrops on the glass, warm lamp light." Generate the prompt and use it in your preferred image AI.
Step 2: Create Variation 1, the style change. Take your source image and change the style completely. If the original is photorealistic, turn it into a watercolor or comic illustration. Use the prompt generator again and add a clear style specification to your original prompt. Compare the result with the original: what changed, what stayed the same?
Step 3: Create Variation 2, the mood change. Keep the style but change the mood completely. The cozy rainy day becomes a bright summer scene. Or the warm lamp light becomes a cool moonlit night. Change the elements in your prompt that determine mood: light, colors, weather, time of day.
Step 4: Create Variation 3, the perspective change. Keep style and mood but change the viewpoint. Show the scene from outside through the window instead of from inside. Or zoom in close on a detail: the raindrops on the glass, the book spine on the armchair, the lamp flame. Perspective changes can turn a simple subject into something entirely new.
Step 5: Compare and reflect. Place all four images side by side (the original plus three variations). Which variation surprised you most? Which would you actually use, for example for social media, as a background image, or in a presentation? What did you learn about the impact of style, mood, and perspective?
This exercise shows you how much creative potential lies within a single subject. You don't have to start from scratch for every new image. Instead, you can take a good base subject and develop it in countless directions. That's the essence of AI image editing: not perfect images at the push of a button, but a creative process where you set the direction and the AI brings your ideas to life.
Conclusion: Module 7 complete, and you can now work visually with AI
You made it. With this article, you're completing Module 7, and with it, you've discovered an entirely new dimension of AI: the visual one.
Let's briefly look back at what you learned in this module. You understood how image AIs work and which tools are available. You learned how to write image prompts that deliver better results. You know how to create images for social media, professional purposes, and personal occasions. And in this final article, you learned advanced techniques: inpainting and outpainting, style transfer, background editing, upscaling, and creating variations.
The most important takeaway from this module: visual AI is not a toy for tech enthusiasts. It's a real tool that concretely helps you in your daily life, your career, and your creative projects. You don't need a graphic design degree or Photoshop skills. You need the right words, an understanding of the fundamental techniques, and the willingness to experiment.
You now know that the best results don't come from a single prompt but from an iterative process. Generate, vary, refine, adjust. Just as a photographer doesn't capture the perfect shot on the first click, AI image editing is also a creative process. But it's a process that saves you an incredible amount of time and opens up possibilities that were unthinkable just a few years ago.
And now something truly exciting is coming. In the next article, "Understanding Video AI: The Current State," we're moving from images to moving pictures. Module 8 is all about AI for video and audio. You'll learn how AI can generate and edit videos, how text-to-speech works, and how you can create audio content with AI. If you thought AI-generated images were impressive, wait until you see what's possible with video and audio.
Until then: try the exercise. Create variations, experiment with styles and moods, test different tools. The more you practice, the better your results will be. And don't forget: you can't break anything. Every experiment moves you forward.
Module 7 is complete. Module 8 awaits. The journey continues.