
sunny5

macrumors 68000
Original poster
Jun 11, 2021
1,835
1,706
Well, since late 2022, AI-generated art has become sensational and revolutionary, as you can create high-quality images and paintings with a few prompts. I've been using Automatic1111's Stable Diffusion WebUI with a lot of models. Yeah, Midjourney is another good service, but so far, the WebUI with Stable Diffusion is the best.

AI image generation is extremely GPU- and RAM-intensive: even my M1 Max reached 100 degrees Celsius, ran the fans loudly, and consumed a lot of power. Both GPU and RAM hit 100% while I was generating images at 512x768. An Nvidia GPU with a lot of VRAM is highly recommended for creating images. AMD GPUs suck for this. Just nope. How about an Apple Silicon Mac? Well, it's fine with the M1 Max, but you really need more than 64GB of unified memory. Even then, it maxed out making a single 512x768 image. It would be nice to have a Mac Pro with a lot of memory, but an Nvidia RTX 4090 is better for the price.

My M1 Max MBP with the 32-core GPU and 64GB of RAM took 10 min 40 sec to create 4 images at once, while an RTX 4090 took only 10 sec. An RTX 3090 took 16 sec with the same settings, which isn't bad.

So far, the M1 Max is just fine, but I have to say it's really slow to create even one image, especially if you increase the resolution or steps, add an upscaler, and so on. 64GB of RAM? Well, not that useful, since the memory bandwidth isn't really fast enough. Still, 64GB of RAM is the minimum requirement for Stable Diffusion; otherwise it's very difficult to generate images, or you have to reduce memory use with extra launch options. More GPU cores mean faster generation.
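
For reference, here's roughly how I'd launch the WebUI with lower memory use on Apple Silicon. This is a sketch, not my exact setup: the flag names are the project's documented command-line options and can change between releases, so check the current README.

# Hedged sketch: launch AUTOMATIC1111's WebUI on Apple Silicon with memory-saving options.
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui
# --medvram keeps only part of the model on the GPU at a time;
# --opt-sub-quad-attention uses a lower-memory attention implementation;
# --no-half-vae keeps the VAE in full precision to avoid black/NaN images.
export COMMANDLINE_ARGS="--medvram --opt-sub-quad-attention --no-half-vae"
./webui.sh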

You can also rent cloud compute for this, which might be way better than buying a new computer. I've got to check which services fit and work with the Mac and the WebUI.

There is an article about Core ML for Stable Diffusion on Apple's machine learning research site. I'm not sure whether Automatic1111 is optimizing for Apple Silicon, but it would be nice if Apple Silicon performed better. For now, if you're running locally, Nvidia is the best bet unless the M1 Ultra proves to be as powerful as an RTX 3090.
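
The code behind that article is Apple's ml-stable-diffusion repo. From what I remember of its README, the Core ML route looks roughly like this; the module names and flags may have changed, so double-check the README before copying anything:

# Hedged sketch of Apple's Core ML route (apple/ml-stable-diffusion).
git clone https://github.com/apple/ml-stable-diffusion
cd ml-stable-diffusion
pip install -e .
# Convert the Hugging Face weights to Core ML packages (slow, run once).
python -m python_coreml_stable_diffusion.torch2coreml \
    --convert-unet --convert-text-encoder --convert-vae-decoder \
    --convert-safety-checker -o ./coreml-models
# Generate, letting Core ML schedule work across CPU, GPU and Neural Engine.
python -m python_coreml_stable_diffusion.pipeline \
    --prompt "a photo of an astronaut riding a horse on mars" \
    -i ./coreml-models -o ./output --compute-unit ALL --seed 93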
 

l0stl0rd

macrumors 6502
Jul 25, 2009
479
412
Well, since late 2022, AI-generated art has become sensational and revolutionary, as you can create high-quality images and paintings with a few prompts. I've been using Automatic1111's Stable Diffusion WebUI with a lot of models. Yeah, Midjourney is another good service, but so far, the WebUI with Stable Diffusion is the best.

AI image generation is extremely GPU- and RAM-intensive: even my M1 Max reached 100 degrees Celsius, ran the fans loudly, and consumed a lot of power. Both GPU and RAM hit 100% while I was generating images at 512x768. An Nvidia GPU with a lot of VRAM is highly recommended for creating images. AMD GPUs suck for this. Just nope. How about an Apple Silicon Mac? Well, it's fine with the M1 Max, but you really need more than 64GB of unified memory. Even then, it maxed out making a single 512x768 image. It would be nice to have a Mac Pro with a lot of memory, but an Nvidia RTX 4090 is better for the price.

My M1 Max MBP with the 32-core GPU and 64GB of RAM took 10 min 40 sec to create 4 images at once, while an RTX 4090 took only 10 sec. An RTX 3090 took 16 sec with the same settings, which isn't bad.

So far, the M1 Max is just fine, but I have to say it's really slow to create even one image, especially if you increase the resolution or steps, add an upscaler, and so on. 64GB of RAM? Well, not that useful, since the memory bandwidth isn't really fast enough. Still, 64GB of RAM is the minimum requirement for Stable Diffusion; otherwise it's very difficult to generate images, or you have to reduce memory use with extra launch options. More GPU cores mean faster generation.

You can also rent cloud compute for this, which might be way better than buying a new computer. I've got to check which services fit and work with the Mac and the WebUI.

There is an article about Core ML for Stable Diffusion on Apple's machine learning research site. I'm not sure whether Automatic1111 is optimizing for Apple Silicon, but it would be nice if Apple Silicon performed better. For now, if you're running locally, Nvidia is the best bet unless the M1 Ultra proves to be as powerful as an RTX 3090.
From my testing, I think the Mac version is just badly optimized and perhaps not fully native.

I'm not saying that only because it's slow, but because I could create bigger pictures on an Nvidia card with 12 GB than on an M1 with 32 GB, using Automatic1111 a few months back.

I hope someone will implement Core ML, but it might be a while, and Diffusion Bee has been really quiet these last few months.

 
Last edited:

Broric

macrumors regular
Oct 1, 2009
210
24
I'm confused here. Are you training your own model or just creating images?

I had it running reasonably OK on my 2019 (Intel) MBP with a 4GB GPU to create my own images. I'd assumed it'd be blazingly fast with the large unified memory on AS.
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,518
19,668
Apple's Core ML implementation of Stable Diffusion puts the M1 Max at roughly 3x slower than the desktop RTX 3080, which is a decent result. If it takes you 10 minutes to generate a few images, then you are using a bad implementation that likely doesn't use any acceleration at all.
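
One quick sanity check, assuming the default venv that webui.sh creates: ask the bundled PyTorch whether it can see the Apple GPU at all. If this prints False, generation is falling back to the CPU and times like that are expected.

# Run from the stable-diffusion-webui folder; "False" means CPU-only generation.
./venv/bin/python -c "import torch; print(torch.backends.mps.is_available())"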
 
  • Like
Reactions: l0stl0rd and Broric

Broric

macrumors regular
Oct 1, 2009
210
24
How many seconds per iteration? On my Intel Mac with 4GB of VRAM, it's about 8-12 s per iteration. Are you saying that Apple Silicon is far worse than that?
 

sunny5

macrumors 68000
Original poster
Jun 11, 2021
1,835
1,706
I'm confused here. Are you training your own model or just creating images?

I had it running reasonably OK on my 2019 (Intel) MBP with a 4GB GPU to create my own images. I'd assumed it'd be blazingly fast with the large unified memory on AS.
Just creating for now, but I'm planning to train my own. Try the settings from YouTube below and tell me the result, because every single setting affects the performance.

Prompt settings: 1girl, apron, architecture, black_dress, black_hair, blurry, blurry_background, blurry_foreground, blush, bookshelf, building, cafe, city, cityscape, convenience_store, depth_of_field, dress, east_asian_architecture, house, library, long_hair, looking_at_viewer, maid, maid_apron, maid_headdress, motion_blur, outdoors, photo_background, puffy_short_sleeves, puffy_sleeves, real_world_location, shop, short_sleeves, shrine, skyscraper, smile, solo, stadium, storefront, street, town, white_apron
Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name,

Steps: 28, Sampler: Euler, CFG scale: 7, Seed: 1181893402, Size: 768x960, Model hash: 7145e188, Batch size: 4, Batch pos: 2, Denoising strength: 0.75, Clip skip: 2, Mask blur: 4
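
If it's easier to compare outside the browser, roughly the same settings can be sent to a locally running WebUI through its built-in API (launch it with --api first). The endpoint and field names here follow the project's API docs and may differ between versions; the prompt strings are truncated, so paste in the full lists above.

# Hedged sketch: send the settings above to a local WebUI started with --api.
curl -s -X POST http://127.0.0.1:7860/sdapi/v1/txt2img \
  -H "Content-Type: application/json" \
  -d '{
        "prompt": "1girl, apron, architecture, ...",
        "negative_prompt": "lowres, bad anatomy, bad hands, ...",
        "steps": 28,
        "sampler_name": "Euler",
        "cfg_scale": 7,
        "seed": 1181893402,
        "width": 768,
        "height": 960,
        "batch_size": 4
      }'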
 

sunny5

macrumors 68000
Original poster
Jun 11, 2021
1,835
1,706
From my testing, I think the Mac version is just badly optimized and perhaps not fully native.

I'm not saying that only because it's slow, but because I could create bigger pictures on an Nvidia card with 12 GB than on an M1 with 32 GB, using Automatic1111 a few months back.

I hope someone will implement Core ML, but it might be a while, and Diffusion Bee has been really quiet these last few months.

I tried DiffusionBee, but it sucks and lacks so many features that I wouldn't use it.
 

l0stl0rd

macrumors 6502
Jul 25, 2009
479
412
I tried DiffusionBee, but it sucks and lacks so many features that I wouldn't use it.
Yes, I know; I prefer Automatic1111 too.

I wonder if there is something up with your M1 Max.

I haven't tried img2img yet, but your settings/prompt with txt2img take 8 min 52 sec on an M1 Pro with a 14-core GPU.

The time is the same whether I use a 1.5- or a 2.0-based model.

I tried the same thing in InvokeAI, and that is definitely slower, taking 10 min 41 sec. https://github.com/invoke-ai/InvokeAI

I did some more testing on my PC; not sure if it's the same on the Mac, but img2img is faster than txt2img.
It took 29 sec on my 3080 Ti for the batch of 4 with img2img.

So instead of 8 min 52 sec, it takes 7 min 08 sec with img2img on the 14-core M1 Pro.

I would have guessed your M1 Max should be able to do it in about 3-4 min, not over 10 min.
 
Last edited:
  • Like
Reactions: Broric

sunny5

macrumors 68000
Original poster
Jun 11, 2021
1,835
1,706
Yes, I know; I prefer Automatic1111 too.

I wonder if there is something up with your M1 Max.

I haven't tried img2img yet, but your settings/prompt with txt2img take 8 min 52 sec on an M1 Pro with a 14-core GPU.

The time is the same whether I use a 1.5- or a 2.0-based model.

I tried the same thing in InvokeAI, and that is definitely slower, taking 10 min 41 sec. https://github.com/invoke-ai/InvokeAI

I did some more testing on my PC; not sure if it's the same on the Mac, but img2img is faster than txt2img.
It took 29 sec on my 3080 Ti for the batch of 4 with img2img.

So instead of 8 min 52 sec, it takes 7 min 08 sec with img2img on the 14-core M1 Pro.

I would have guessed your M1 Max should be able to do it in about 3-4 min, not over 10 min.
I updated to the latest version and ran the same settings, and got 5 min 59 sec for txt2img. Doubling the GPU cores doesn't really provide better performance, or something is not right. I ran it again and it took 10 min. Do you have a screenshot of your WebUI?
 
Last edited:

l0stl0rd

macrumors 6502
Jul 25, 2009
479
412
I updated to the latest version and ran the same settings, and got 5 min 59 sec for txt2img. Doubling the GPU cores doesn't really provide better performance, or something is not right. I ran it again and it took 10 min. Do you have a screenshot of your WebUI?
Sure, but with today's update it seems bugged, and I had to reinstall it since I switched machines.
InvokeAI at the moment just seems to crash Python for some reason.

Also, the new time is between 3 min 20 sec and 3 min 30 sec most of the time on a 30-core M2 Max.

Screenshot 2023-02-04 at 18.06.10.png
 
Last edited:

Broric

macrumors regular
Oct 1, 2009
210
24
On an M2 Max, Automatic1111's repo takes about 10-20 seconds for me to generate stuff, maybe 1 minute with lots of steps or a higher resolution.
 

l0stl0rd

macrumors 6502
Jul 25, 2009
479
412
On an M2 Max, Automatic1111's repo takes about 10-20 seconds for me to generate stuff, maybe 1 minute with lots of steps or a higher resolution.
Yes, of course: if I do only one picture and lower the resolution to 512x768 px, it takes 18 sec.

"Default" 512 x 512 at 20 steps is 8 sec.

It works quite well, but there seems to be a bug when I push the resolution to 768x960 px that wasn't there before.
 

Broric

macrumors regular
Oct 1, 2009
210
24
This is not art. It's a lot of keywords translated into a picture without any kind of feeling or expression. Copy-paste art without development of the field.

I find this kind of attitude laughable, especially when it comes from "artists".

It's analogous to a classical pianist raging against electro-pop keyboard players. Mediums move on. I'd even go so far as to say this might end up giving us all a better understanding of what art actually is.
 

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
I find this kind of attitude laughable, especially when it comes from "artists".

It's analogous to a classical pianist raging against electro-pop keyboard players. Mediums move on. I'd even go so far as to say this might end up giving us all a better understanding of what art actually is.
Predictable answer. Art is not only a nice-looking picture or the skill to draw one. Art is what the artist wants to express and what the artist intends to trigger in the audience.

As someone who listens to classical music, Jean-Michel Jarre, and darker electro-pop such as Kraftwerk, I recognize an artist when I hear one. In the '80s I always said that "if Mozart had a DX7, he would use one".

Likewise, I think photographs are a great example of art (well, some pictures are) made with a modern tool.
 

Burnincoco

macrumors regular
May 6, 2007
132
133
Predictable answer. Art is not only a nice-looking picture or the skill to draw one. Art is what the artist wants to express and what the artist intends to trigger in the audience.

As someone who listens to classical music, Jean-Michel Jarre, and darker electro-pop such as Kraftwerk, I recognize an artist when I hear one. In the '80s I always said that "if Mozart had a DX7, he would use one".

Likewise, I think photographs are a great example of art (well, some pictures are) made with a modern tool.
What about Martial Arts?
After a robot with AI watches a Bruce Lee movie, you better start running
 
  • Like
Reactions: DearthnVader

jimbobb24

macrumors 68040
Jun 6, 2005
3,475
5,607
Has anyone currently using Automatic1111 gotten textual inversion to work on a Mac? I also tried training a LoRA on Colab, but it didn't work for me.

I was able to train my own model on Colab, but the techniques have moved on and I'm trying to use the more efficient ones. I swear I have never thought about getting a PC, but I'm seriously considering one for Stable Diffusion tools.
 
  • Like
Reactions: spydr

jimbobb24

macrumors 68040
Jun 6, 2005
3,475
5,607
Predictable answer. Art is not only a nice-looking picture or the skill to draw one. Art is what the artist wants to express and what the artist intends to trigger in the audience.

As someone who listens to classical music, Jean-Michel Jarre, and darker electro-pop such as Kraftwerk, I recognize an artist when I hear one. In the '80s I always said that "if Mozart had a DX7, he would use one".

Likewise, I think photographs are a great example of art (well, some pictures are) made with a modern tool.
There is definitely something transformative happening. I've been reviewing images from Midjourney 5 that are absolutely incredible. What is art? The person creating it had a vision of the thing they wanted, and the AI implemented it. But most of the time the user refines and refines and refines their prompt, trying to produce the image they imagine. This iterative process with an AI co-pilot is definitely new. Does it produce art? The images definitely evoke in me something similar to art. It is a fascinating change in workflow. We are only at the start of the tool's utility and power.
 

name99

macrumors 68020
Jun 21, 2004
2,407
2,309
This is not art. It's a lot of keywords translated into a picture without any kind of feeling or expression. Copy-paste art without development of the field.
:eyeroll:

The next step in the evolution of "Art": from pictures to a language-based theory of sociology built on aggressive gate-keeping.
We started down this path in the late 1800s. How well has it worked out for art since then?
 

spydr

macrumors 6502
Jul 25, 2005
445
2
MD
Has anyone currently using Automatic1111 gotten textual inversion to work on a Mac? I also tried training a LoRA on Colab, but it didn't work for me.

I was able to train my own model on Colab, but the techniques have moved on and I'm trying to use the more efficient ones. I swear I have never thought about getting a PC, but I'm seriously considering one for Stable Diffusion tools.
I know this space is moving fast, so I'm curious whether, in the month since this question, there have been any updates on being able to train textual inversion (using A1111) on Apple Silicon?
 

retlif

macrumors member
Feb 2, 2020
79
34
I managed to train textual inversion (Automatic1111) on a base-model Mac Studio (M1 Max) with 32GB of memory, but with problems like "ValueError: cannot convert float NaN to integer". When that happens, I just restart from the penultimate .pt and continue training, until it works.

It's all a bit of a matter of luck: sometimes the problem occurs after a few iterations, sometimes every few. I don't really see any regularity here.

The results are on the one hand great, on the other hand unsatisfactory. Great because you can see that some features are perfectly replicated, but when it comes to likeness it's highly questionable in my opinion. After 1,000 to 5,000 cycles you get something that once in a while (one time in ten, maybe more often, maybe less often, it's hard to say) produces a result stunningly similar to the original, but for the most part it's an image that gets the hairstyle right, plus some glasses and a roughly oval face/beard, etc.

Naturally, the more abstract the style you choose (illustration, painting), the less this bothers you.
However, if we are talking about photography, it is not an image of the person the model was trained on, but rather a variant that is similar only to a certain, and in my opinion rather small, degree.

A1111 configuration is of course a separate topic, but it worked best for me:
export COMMANDLINE_ARGS="--disable-safe-unpickle --skip-torch-cuda-test --upcast-sampling --opt-sub-quad-attention --no-half --no-half-vae --disable-nan-check --precision autocast --use-cpu interrogate"
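
Roughly what those flags do, as far as I understand them:

# --disable-safe-unpickle    skip the pickle safety check when loading checkpoints
# --skip-torch-cuda-test     allow startup without a working CUDA device
# --upcast-sampling          upcast parts of sampling to float32 (no effect when --no-half is set)
# --opt-sub-quad-attention   lower-memory sub-quadratic attention implementation
# --no-half / --no-half-vae  keep the model and VAE in full precision (helps avoid NaNs on MPS)
# --disable-nan-check        don't abort when a step produces NaNs
# --precision autocast       let PyTorch pick the precision per operation
# --use-cpu interrogate      run the CLIP interrogator on the CPU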

Speed? Well, I guess I can't complain, since one 512x512 image generates in about 10 seconds (average 1.55 s/it), but this is obviously (unfortunately) not comparable to Nvidia.

I wonder how it looks for you guys?

Oh, and one more thing: everything works on the latest version of Monterey, 12.6.6 (21G646).

Is it true that Ventura improves performance in some way? (I know about Core ML, but A1111 doesn't use it anyway, so I guess it's irrelevant?) I don't know if it's worth updating the system.
 

Attachments

  • SCR-20230604-mahx.png (18.5 KB)
Last edited:

plugsnpixels

macrumors regular
Jul 10, 2008
115
76
LA area
Today I discovered that DiffusionBee and Draw Things crash on the macOS Sonoma beta on my 2020 M1 MBP.

Exploring other App Store options; some of them do work.
 
Last edited:

Tdude96

macrumors 6502
Oct 16, 2021
462
717
Today I discovered that DiffusionBee and Draw Things crash on the macOS Sonoma beta on my 2020 M1 MBP.

Exploring other App Store options; some of them do work.
I hear Draw Things also crashes on the iOS 17 beta. I've not tried the Sonoma beta, but I suspect both will get support (likely Draw Things before Diffusion Bee).

Both developers have had the problem reported to them, so support should come at some point, but may not be prioritized until closer to a live release.
 