


Written by Greg Martin
10 min read
MAY 5 2025
Daisy-Chaining AI Hallucinations
Developing a workflow for crafting high-quality images using multiple flavors of generative AI.

As a digital illustrator, photographer, and designer, my first reaction to generative AI was frustration and uncertainty. But I quickly recognized this “get off my lawn” instinct for what it was: fear of change. Like every tool that came before it, AI is simply another evolution in the creative process. It’s here to stay. The real opportunity lies in understanding it, and using its strengths to elevate my creative projects.
The (mid)journey begins.
Midjourney was my chosen starting point, mostly because it didn’t require a complex PC setup and I already had an account. I dove straight in, learning how to craft effective prompts, work with reference material, and control image styles. To give myself something to focus on, I started creating a series of 4K wallpaper collections, each one experimenting with a different theme or technique. This approach not only helped me learn, it gave me a tangible output I could share with others.
I had a ridiculous amount of fun. But I quickly discovered that the challenge of generative AI isn’t creation — it's curation and quality. One particular wall I hit was image size. Like most services, Midjourney limits its native output resolution. For an image with a 21:9 aspect ratio, the service will provide a 1680×720px image. Since my target resolution was 3440×1440px, this meant upscaling was pretty much required.
Upscaling has some issues. It exposes every flaw in an image — and often introduces new ones. So I began exploring ways to boost image fidelity, developing a production process that combined multiple AI tools to create, upscale, and refine images to meet a higher visual standard.
Let’s create a high-quality desktop wallpaper.
So here I am, ready to create a beautiful and highly detailed desktop wallpaper — let’s say an alpine meadow high in the Cascade Mountains. I hop into Midjourney, craft a strong prompt, and activate my global style profile. Since we're talking about landscapes, I also pull in some of my own photography and older AI pieces to use as reference imagery. These help me define the color, lighting, and level of detail I'm looking for.
Here’s the prompt I’ll be working with, using Midjourney's v7 Alpha model:
Close up photography of the rocky slopes of the Cascade mountains in autumn, with strikingly large masses of rock and ice that rise high above a landscape of amber meadows and trees. Misty sunrise lighting with towering backlit cumulus clouds. Hasselblad photography shot on Kodak Portra 100T 35mm film with vibrant colors, crisp focus, and low contrast natural light. Use grey, black, cream, gold, and indigo colors.
Explorations and variations.
Prompt in hand, I begin generating a flood of iterations, chasing down compositional rabbit holes and exploring promising variations. The image prompt and selected reference material stay largely the same, tweaked only slightly when I start seeing things I like.
At the end of all this I’ve produced a large set of potential candidates. For the purposes of this example I limited my working set to 120 variations, but it’s not uncommon to explore many hundreds of options before arriving at a single final image.
First Upscale (2x)
Quite a few of these look compelling enough to take further, but at just 1680×720px, it’s hard to judge their true potential. So a select few get bumped to 3360×1440px using Midjourney’s built-in upscaling tools. They don't all survive this transition — some develop artifacts, repeating patterns, excessive noise or sharpening, or simply lose the magic they had at their smaller size. And while many flaws are correctable, if they’re too pervasive across the image they’ll cause problems during editing.
Below are a few examples of issues that would cause me to think twice about proceeding with an image.

Any elements showing excessive sharpening will have to be replaced.

Prominent noise or grain across the image will confuse later editing stages, leading to new artifacts and a lot of extra work.

A pronounced pseudo-HDR edge glow is also something that is hard to correct without replacing the entire element.
Second Upscale (4x)
The surviving candidates undergo a second round of upscaling using Topaz Gigapixel and Magnific, doubling their dimensions to 6720×2880px. Gigapixel and Magnific each have their own distinct style and strengths, with successful outcomes often dependent on the subject being scaled.
Gigapixel tends to stay close to its source material, making it great for smoothing out noise and refining edges. I've found that it works well with clean lines and geometric forms, but can sometimes push too far, over-enhancing edges and making the image feel even more “AI” than the original.
Magnific excels with natural and organic content, using a fresh layer of AI hallucination to invent new details based on a prompt as it upscales. The results are striking, but may introduce fuzziness, artificial textures, and occasional pseudo-HDR artifacts to the image that can be tricky to correct.

Original upscale from Midjourney, boosted to twice its size to match the resolution of the Gigapixel and Magnific outputs so it's easier to compare.

Output from Topaz Gigapixel. Smooth and crisp details that read as decidedly artificial (in this use case, at least).

Output from Magnific. Much more natural details, but fuzzy and still far from perfect.
Leveraging both upscaling tools gives me the flexibility to pick the stronger result or blend the best parts of both. Any images that don’t survive this second round of upscaling get dropped, leaving me with a small set of finalists:
Final Upscale (8x)
I decided to move forward with option D. One last round of upscaling doubles the dimensions yet again, bringing the image to a final working resolution of 13440×5760px. I now have a strong composition and plenty of pixels to work with, but the fine details are a bit of a mess. The 1:1 crops below are good examples of what I have to work with... and there’s definitely work to be done.
Now it’s on to my favorite part of the process: editing in Photoshop. I'm starting with a blend of the Gigapixel and Magnific upscales, as the combination provides just enough structure to help Photoshop interpret lighting, depth, and perspective while still leaving room to reimagine visual details.
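To keep the numbers straight, the resolution chain behind this pipeline can be sketched in a few lines of Python. The dimensions come from this article; the helper function itself is purely illustrative:

```python
# Sketch of the resolution chain: each upscaling pass doubles both
# dimensions, so three passes give 8x the original linear size.
BASE = (1680, 720)     # Midjourney's native 21:9 output
TARGET = (3440, 1440)  # the final wallpaper resolution

def upscale(size, factor=2):
    """Scale both dimensions of a (width, height) pair."""
    w, h = size
    return (w * factor, h * factor)

first = upscale(BASE)    # (3360, 1440) -- Midjourney's built-in upscaler
second = upscale(first)  # (6720, 2880) -- Gigapixel / Magnific
final = upscale(second)  # (13440, 5760) -- working resolution for editing

print(first, second, final)
# Working at 8x means the eventual downscale to the target hides small
# flaws: 13440 / 3440 is roughly a 3.9x reduction.
print(final[0] / TARGET[0])
```

Note that each pass doubles the linear dimensions, which quadruples the pixel count — handy to remember when estimating file sizes and editing performance at each stage.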

These are supposed to be snow-covered mountain ridges, but you'd have to squint your eyes to see it.

This snow-covered hillside is covered in artifacts, giving it a fingerprint-like texture that I'll have to fix or replace.

There are distorted, artifact-ridden trees throughout the image that will need some serious love before they're recognizable.
We can rebuild this — we have the technology.
Enter our last flavor of generative AI: Photoshop’s Generative Fill. This tool enables you to select an area and replace it with AI-generated content based either on a custom prompt or on context sampled from the surrounding area. And our massively upscaled Midjourney image has plenty of context to draw from. My next task is to methodically work across the composition, rebuilding elements and layering in sharper, more detailed visuals one small selection area at a time.
When prompts are needed I keep them simple — things like “small forest of evergreen trees” or “mountain rock and ice” are often all that’s needed to produce workable results. See the example below for what this looks like.
Example of Generative Fill in action.
At the time I’m writing this Generative Fill has a 1024×1024px output limit. This means you can generate within as large a selected area as you like, but you'll be stretching that single megapixel of detail to fit. To get around this, I apply the tool in hundreds of small, overlapping passes to gradually rebuild each part of the image. It’s meticulous, time-consuming work, often requiring multiple attempts to work my way past AI artifacts and achieve convincing results.
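The overlapping-passes idea boils down to a simple tiling calculation. Nothing below is an actual Photoshop API — it just illustrates roughly how many ~1024px passes a canvas this size implies, given an assumed overlap between neighboring tiles:

```python
import math

def tile_origins(length, tile=1024, overlap=256):
    """Top-left origins of overlapping tiles covering `length` pixels."""
    step = tile - overlap
    n = max(1, math.ceil((length - overlap) / step))
    # Clamp the last tile so it ends exactly at the canvas edge.
    return [min(i * step, length - tile) for i in range(n)]

W, H = 13440, 5760  # the final working resolution from this article
cols = tile_origins(W)
rows = tile_origins(H)
# Even with generous overlap, one full coverage pass is well over a
# hundred individual generations -- before any redos.
print(len(cols), len(rows), len(cols) * len(rows))
```

With a 256px overlap this works out to an 18×8 grid, or 144 generations for a single clean pass over the canvas — and in practice many tiles take several attempts, which is why the real count runs into the hundreds.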
But the transformation this effort produces can be absolutely extraordinary, delivering an entire canvas of fresh, crisp details that are (mostly) artifact-free. Here’s what that process looks like:
The impact of excessive artifacts.
If you look closely in the video above you can see a few places where I replaced large swaths of the original material all at once — the foreground mountain ridge is one example of this (I was dealing with that oddly textured snow). As I mentioned previously, pervasive artifacts make editing extremely tricky. If they cover too large an area, they become irresistible to Generative Fill, compelling it to reference and replicate the artifacts instead of replacing them. This makes it impossible to rebuild in small sections like I would elsewhere in the image. Instead, I must make my peace with the resolution limitations of Generative Fill, using it to create a compositionally improved — but horribly low-res — replacement for the entire afflicted area. Then I have to go back over that area one small piece at a time to bring the level of resolution and detail back up to par. It's hands-down the most time-intensive part of this editing process, with lots of failed attempts and re-do moments I don’t show in the video above... but the results are worth it.
Making final tweaks.
Once the rebuilding phase is complete, I move on to more traditional color grading, lighting adjustments, and downscaling. The completed image isn’t perfect in every detail, but that’s why we’re working at such a ridiculous resolution — scaling down forgives all manner of imperfections. Before resizing to my target resolution of 3440×1440px, I’ll often apply a subtle field blur and a touch of noise to soften edges and create a more cohesive, photographic feel.
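For the curious, this finishing pass can be approximated outside Photoshop with Pillow and NumPy. This is a rough stand-in, not my actual workflow: Photoshop's Field Blur has no Pillow equivalent, so a very mild Gaussian blur substitutes for it here, and the radius and noise values are illustrative guesses.

```python
import numpy as np
from PIL import Image, ImageFilter

def finish(img: Image.Image, target=(3440, 1440),
           blur_radius=0.6, noise_sigma=2.0) -> Image.Image:
    """Downscale, soften edges slightly, and add a touch of grain."""
    out = img.resize(target, Image.LANCZOS)               # high-quality downscale
    out = out.filter(ImageFilter.GaussianBlur(blur_radius))  # stand-in for field blur
    arr = np.asarray(out).astype(np.float32)
    arr += np.random.normal(0.0, noise_sigma, arr.shape)  # subtle photographic noise
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

# Usage sketch (hypothetical filenames):
# finish(Image.open("working_13440x5760.png")).save("wallpaper_3440x1440.png")
```

The ordering matters: blur and noise are applied after the downscale, so the grain stays at its intended scale in the final image rather than being averaged away.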
And that's it… our high-quality desktop wallpaper is finally complete.
Here's a closer crop so you can enjoy some snow-covered mountain details.
I’ve used some variation of this process across most of my recent collections, and genuinely enjoyed both the work and the results. For some images I'm just cleaning up the finer details; for others I'm pulling in new elements from Midjourney and building something entirely new. You'll see some instances of the latter in the examples below… some of them can end up quite a distance from where they started.
There’s something deeply satisfying about shaping and refining each piece by hand. I know this workflow could become obsolete at any moment — AI is moving so fast — but I hope there will always be space for a hands-on approach to craftsmanship.
Below are a few of my favorite wallpapers created using the methods described in this article. You can find the rest of my work available to browse and download in my /imagine project gallery.
A few closing insights.
Generative AI is an incredible tool for conceptual exploration.
Its ability to iterate rapidly, combined with the built-in element of serendipity, makes it uniquely powerful for producing unexpected and inspiring results.
Watching AI tools evolve has been almost as fascinating as using them.
I was fortunate to generate enough images with Midjourney to be invited into their web alpha early on, and it’s been incredible to watch the platform grow in real time. Bugs disappeared faster than I could report them, features steadily improved, and the experience kept getting better with each update. Hats off to the Midjourney crew… you’re doing great things.
Upscaling is almost always a little disappointing.
No matter what’s been promised, the results are never quite as clean, crisp, or detailed as you hope. But that’s today. Tomorrow is another story. The pace of progress in this space is wild, and the next breakthrough could be just around the corner.
Photoshop’s Generative Fill is incredible at adding the detail you’re looking for.
Seriously, is this thing reading my mind?
Every advancement in AI seems to take half a step back before it leaps forward.
In my experience, it takes a little time for a new generative model to find its rhythm. If you’re aiming for consistency in style or output, it’s often worth giving the latest release a bit of breathing room before going all in.
Reaching a level of quality you can truly be proud of still takes a fair amount of hands-on work — especially at scale.
There’s a noticeable difference between an AI image that’s been carefully refined and one that’s simply rendered and shipped as-is. Generative AI is an incredible tool for creating compelling imagery, but it still rewards those willing to put in the extra effort.
The perceived value of generative AI is inversely proportional to your expectations.
If you approach it with little or no expectations, the stunning results you get from relatively little effort can feel almost magical. But if you’re chasing a higher standard for quality, AI becomes just another step in a much longer and more involved effort. It’s powerful, yes, but unlocking its full potential requires building new workflows, mastering new tools, and embracing a new kind of creative discipline.