
Adventures in Imagineering — Stable Diffusion, the Unstable Bits …

Fahim Farook
10 min read · Sep 7, 2022


I figured out a more or less functional setup for running Stable Diffusion on my Apple M1 MacBook Pro a couple of days ago. Since then, I’ve mostly been running Stable Diffusion tasks on my local machine instead of on Google Colab (or Amazon SageMaker Studio Lab).

Working on the local machine, at least for me, gives me more time to think about what I’m doing and to try to improve the code. In the process, I’ve learnt a few things that affect people working on Apple Silicon devices. Plus, I made some improvements to my original code from the first article I linked to above and wanted to talk a bit about that too 🙂

The Stable Diffusion bits from the very first code I wrote to run on my MacBook looked like this (I’ve added the two imports here so the snippet stands on its own):

import torch
from diffusers import StableDiffusionPipeline

device = torch.device("cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu")
pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-4")
pipe = pipe.to(device)
prompt = "an elf sitting on a toadstool"
image = pipe(prompt)["sample"][0]

So, imports aside, you can generate a new image using Stable Diffusion with just five lines (four if you drop the device line and hardcode the device in the pipe.to call, or even three if you chain from_pretrained and to). That gets the job done. But keep this code in mind as we progress through the various iterations of the code 🙂
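The chained conditional in the device line simply encodes a fallback order: CUDA first, then Apple’s Metal Performance Shaders (MPS), then the CPU. Pulled out as a plain function — the name pick_device is mine, purely for illustration — the logic looks like this:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the preferred torch device name, in fallback order."""
    if cuda_available:
        return "cuda"  # NVIDIA GPU
    if mps_available:
        return "mps"   # Apple Silicon GPU, via Metal Performance Shaders
    return "cpu"       # last resort

# On an M1 MacBook Pro there's no CUDA, but MPS is available:
print(pick_device(cuda_available=False, mps_available=True))  # → mps
```

In the real script the two booleans come from torch.cuda.is_available() and torch.backends.mps.is_available(), so the same code runs unchanged on a CUDA box, an Apple Silicon Mac, or a plain CPU machine.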

NSFW Images


Written by Fahim Farook

CEO and head iPhone tinkerer at RookSoft. Mad coder and tech editor. Author of a couple of books on iOS development.
