The first post of this thread is a WikiPost and can be edited by anyone with the appropriate permissions. Your edits will be public.
[Attached image: Dino]

Dino is hoping you all get to play soon!

A three-handed, split-tailed Dino? Novel!
 
I think they’re going to have a very large controversy about Image Playground + people you know when it goes live. I’ve just tried it with like 10 friend/family faces and I don’t think a single one of them would find the AI photos to be a flattering depiction of them, even after scrolling through several source images for each.
 
Well, it is a first beta. Hopefully, improvements will be forthcoming.
 
Yes: junk emails that can't be deleted and/or keep reappearing. I have to use the browser to delete them on the web. I've gone back to Edison Mail for a while.
Thought I was alone! Been doing the same steps you posted here to delete junk email.

Hoping we get iOS 18.2 beta 2 Monday.
 
We can all cry together.

I’ll bring the tissues… I was so hopeful when the new round of approvals started flowing in early today, and my hopes were dashed as I got ready to leave work and realized that I would get home and not be able to play around in the AI playground. Might as well pop an Ambien and go to bed. Is consciousness without all the beta features I was promised worth it?! 😂
 
I'm not trying to be snarky, but is this your first beta? Why even release a beta if the software could have been perfected in-house?
These models are not developed the same way an app is. Moving some pieces of the Photos app around takes nowhere near as much effort as "adjusting" this kind of model would. What we got looks basically identical to WWDC, and the WWDC preview images for this weren't received particularly well either.

These models take weeks to months to complete training, and while on-device models are smaller and thus might train faster (it's difficult to predict how much compute Apple has lying around for training vs. an org like OpenAI), I am very doubtful this will be something that can be meaningfully adjusted week-to-week. You can't really prompt-optimize these kinds of problems away easily (possibly at all), which would be the fastest solution for something that's text based.

IMHO, they are doing the beta release strategy for this primarily to make sure it can't generate objectionable content. Our feedback about its successes and failures can help training in the future, but I'm doubtful it will have an effect short term.
 
I'm watching the World Series and eating tacos with my girlfriend. I'll survive.
 
Are you using a custom audio for it or one of the defaults? I just upgraded, I hope I don't have that bug. This will ruin me.
I was using one of the default tones. If I set a normal alarm (not sleep schedule), everything is fine. Just no sound with the sleep schedule.
 

What I think you may be missing is that we don’t know how Apple is structuring the prompts that are actually presented to the LLM (as opposed to what we type in the instruction interface). It may be that those prompts are throttled somewhat during this initial rollout phase as Apple determines the model’s tolerances in more widespread testing.

So while the model itself may not improve appreciably during the beta as you suggest, the output presented to users may improve as Apple lets the reins out a bit as we move from beta to beta.
 