CONVERSATIONAL EDITING RISES: IS THIS THE BEGINNING OF THE END FOR TRADITIONAL PHOTO EDITORS?
For the last few weeks the AI world has been buzzing: GPT-5, Claude 4.1, and rumoured Google Gemini 3.0 have all dropped or are just around the corner. In the middle of this wave, a strange new image model appeared: nano-banana. From what I’ve seen so far, it might just be the missing piece that makes conversational editing finally practical.
WASN’T CONVERSATIONAL EDITING ALREADY A THING?
Yes — kind of. We already had conversational-style editing inside ChatGPT, Google AI Studio, and a bunch of other apps where you could tell the model what to change in your picture. The problem was speed.
When Google AI Studio’s image features or ChatGPT’s Ghibli mode went viral, they were fun to test, but the actual generations were too slow to feel like a real editing workflow. That killed the momentum.
With nano-banana, things are clearly different. We don’t know the size, the architecture, or any official technical details yet. But judging from my own testing, the efficiency focus is obvious. The model is fast. Fast enough that conversational editing suddenly feels like it can work in everyday use.
WHAT MAKES NANO-BANANA SPECIAL
Right now, nano-banana is only available inside LMArena’s Battle mode, and you can’t choose it directly — it appears randomly. Luckily, I managed to catch it quite a few times, and here’s what stood out:
- Localized editing actually works. If I tell it to add sunglasses to someone, it usually edits just the glasses area without reimagining the entire person. That might sound basic, but if you’ve ever tried inpainting with other AI models, you know how often they end up redrawing the whole face or body, breaking the identity of the subject. With nano-banana, this problem happens way less often in my experience.
- It’s fast. The generations come back quick enough that I don’t feel like I’m waiting on the model, which is exactly what conversational workflows need.
- It’s not perfect. Sometimes I do see small mismatches or minor distortions in the edited objects. But compared to other publicly available models like DALL·E, Midjourney, or Ideogram, this feels like a real step up in both speed and edit accuracy.
From my limited testing, I’d say nano-banana feels like a major upgrade over everything else we’ve had in image editing AI so far.
GOOGLE’S MOVE WITH PIXEL 10
At the same time nano-banana showed up, Google announced the Pixel 10 with AI-powered conversational editing built into Google Photos. Now you can simply say something like “brighten the face,” “remove the obstacles,” or “reimagine the damaged part,” and the edit happens instantly.
The timing is too perfect to ignore. Even though Google hasn’t officially confirmed nano-banana is theirs, Logan Kilpatrick (product lead for Google AI Studio and the Gemini API) dropped a 🍌 emoji on Twitter right after the model appeared, and people instantly connected the dots. It really looks like nano-banana is the engine powering conversational editing on the Pixel 10.
And honestly, after seeing how fast it works, that makes a lot of sense.
IS THIS THE END OF PHOTOSHOP?
This is the big question. A lot of people online are already saying conversational editing + nano-banana means Photoshop is dead.
As someone who has used Photoshop for years, here’s my take: No, Photoshop isn’t going anywhere.
Why? Because Adobe isn’t sleeping. Photoshop already has Firefly integrated natively with features like Generative Fill, Generative Expand, reference control, and precise manipulation tools. Yes, there’s a learning curve, but once you get used to it, Photoshop gives you the ability to change specific things with pixel-level control. That’s something conversational editing just can’t fully replace.
Sure, conversational editing is fantastic for quick fixes, creative exploration, and mobile workflows. But when it comes to professional design, compositing, or serious artistic work — Photoshop is still in a league of its own.
WHERE I THINK THIS IS HEADED
Here’s how I see it:
Conversational editing will dominate casual and mobile use. Social posts, quick family album fixes, and basic content will naturally move here. These tools can generate realistic, professional-quality images from scratch in seconds. But even when the results look polished, sometimes even better than what humans might imagine, that does not automatically mean real creativity or deep design thinking has taken place.
Pro editors will remain the backbone of serious work. Campaigns, print, ads, and detailed design still need the precision and flexibility of full editing software. And as of now, AI still needs prompts—text or conversation—to work. It cannot do everything by itself.

Now, you might say: “But Shree, people even use AI to generate creative prompts for images. Doesn’t that mean creativity is dead, with humans completely handing over the steering wheel to AI?” The answer remains: no. This is not the death of creativity but the start of a new chapter. AI is built from human knowledge and training, yet having absorbed so much, it can sometimes generate ideas that feel new or unexpected. For the first time, we are sharing creative space with something that can rival us, and sometimes even surpass us. But creativity is still a deeply subjective experience. Ask yourself: would you really get excited to watch a video knowing it was entirely created and performed by AI? Or would you rather watch a Vsauce or Veritasium video, where a human voice, curiosity, and perspective drive the content? Most of us would still crave the human touch.
Hybrid is winning. Sometimes a person’s imagination and intent will carry a project, and other times AI will provide the spark. Right now the advantage goes to humans who master these tools. A designer fluent with prompts and AI workflows can leap ahead—so if you worry about job impact, remember that skill shifts faster than jobs disappear.

Still, the shift is visible: many studios and companies are cutting costs. Where once dozens of graphic designers were needed, now a few people can manage the workload of an entire team, often faster. In some cases, entire workflows may eventually be replaced by agentic AI with no human involvement, but there will remain demand for humans where originality, nuance, and high-level creative direction are required.
The editor is still the creative playground. Personally, when I just sit in front of a model and only prompt, my brain often goes blank. My best ideas come while I am inside the editor playing with layers, masks, and brushes. That manual, iterative process cannot be fully replaced by conversational prompts.
FINAL THOUGHTS
It is not black or white; it is a gray zone. We are living through a watershed moment in human history—the point where human and artificial intelligence meet, overlap, and reshape creativity itself. I’m super bullish on AI and this new conversational editing. Nano-banana shows us how good it can get — fast, accurate, and practical. Google Photos on Pixel 10 makes it mainstream. But does this kill Photoshop? No.
Conversational editing will save us time and make casual users feel like pros. But real creativity and professional workflows still need the depth of a traditional editor. Think of it like adding a turbo engine to a bicycle — it makes the ride faster and more fun, but it doesn’t mean the bike itself is going away.
For me, the future is clear: I want both. Conversational editing for speed, and Photoshop for control. And honestly, that’s the most exciting part.