
komuh

macrumors regular
Original poster
May 13, 2023
135
131
Screenshot 2024-06-12 at 21.49.01.png
So I tested the newest Predictive Code Completion Model provided by Apple.
It seems to be a small language model (2.2 GB in memory) and it runs on the GPU (at least on my M1 Ultra). Predictions are pretty bad compared to Llama 2/3 or GPT-3.5.
Screenshot 2024-06-12 at 21.41.10.png


Here's a short video of a minimal Swift project with a single struct that I asked it to free the memory of. (It sadly failed. I tested it on my bigger projects with a lot of Metal and HPC code and it never made a usable code completion from my comments. On the other hand, it made some usable (?) documentation for methods inside some structs.)
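Roughly, the test looked like this (a reconstruction from memory - the names are made up, not the actual project; and since Swift structs can't have a deinit, this sketch uses a class owning a raw buffer):

```swift
// Minimal sketch of the kind of prompt used in the test (hypothetical
// names). The comment inside is the cue the completion model was
// expected to act on by filling in the deinit body.
final class PixelBuffer {
    static var liveInstances = 0  // only here to demonstrate that deinit runs
    let pointer: UnsafeMutablePointer<Float>
    let count: Int

    init(count: Int) {
        self.count = count
        pointer = UnsafeMutablePointer<Float>.allocate(capacity: count)
        pointer.initialize(repeating: 0, count: count)
        PixelBuffer.liveInstances += 1
    }

    // "Free the allocated memory" -- the comment the model was asked to complete
    deinit {
        pointer.deinitialize(count: count)
        pointer.deallocate()
        PixelBuffer.liveInstances -= 1
    }
}

var buffer: PixelBuffer? = PixelBuffer(count: 1_024)
buffer = nil  // deinit runs here and releases the allocation
```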




Has anyone gotten good results with it (any tricks, maybe)?
 
  • Like
Reactions: frou
The largest models are encouraging novice programmers to produce buggy, low-quality apps and flood app stores with them.

Apple is playing it safe so people can focus on small snippets of generated code that can be quickly understood and cleaned up. This is about focusing on quality engineering instead of brute-forcing big chunks of buggy code.

A small model also means you save battery life. Very large models running locally would drain a MacBook Pro's battery in a very short period of time.

You can install Ollama and see the effect on the battery of the largest models your machine can run.
 
  • Haha
Reactions: ShlomoCode
I mean, it's still using 20-30 W while doing prediction, and I'm on a desktop, so I don't need to worry about battery.
 
I mean, it's still using 20-30 W while doing prediction, and I'm on a desktop, so I don't need to worry about battery.

It's using 30 W because it is small. Most Macs sold are laptops. You have to think about all users, not just desktop users. The world doesn't revolve around one person or a minority of people.
 
Well, you don't have to worry about all the Mac laptops sold with 8 GB of memory, because they can't use this feature, period.
 
  • Like
Reactions: iHorseHead
Well, you don't have to worry about all the Mac laptops sold with 8 GB of memory, because they can't use this feature, period.

An 8 GB computer is for office work, not development.

You should be using 32 GB minimum for any level of app development, even a basic game, as Xcode and VS Code projects will eat RAM.

The Predictive Code Completion Model on its own consumes up to 2.5 GB of memory. If you are using bigger Ollama models plugged into VS Code, you should have 48 GB minimum.
 
An 8 GB computer is for office work, not development.

You should be using 32 GB minimum for any level of app development, even a basic game, as Xcode and VS Code projects will eat RAM.
Yeah and everyone riding a bike should really buy a Shimano 105 groupset minimum. We can spout off random elitist assertions, but a lot of people in the real world need to make do with less even if they are developing, especially given Apple's price-gouging on upgrades.
 
Last edited:
  • Haha
Reactions: LockOn2B
Yeah and everyone riding a bike should really buy a Shimano 105 groupset minimum. We can spout off random elitist assertions, but a lot of people in the real world need to make do with less even if they are developing, especially given Apple's price-gouging on upgrades.



Somehow I'm elitist because I posted the actual memory pressure of language models and IDEs.
 
The largest models are encouraging novice programmers to produce buggy low quality apps and flooding app stores with them.
Are you talking about ChatGPT? It really depends. I'm using it often for small code snippets - sometimes it surprises me with very valid solutions I hadn't thought about.

But if you're not a programmer, I don't think ChatGPT is a good solution, and that's what most people don't understand. It helps a programmer, it does not replace a programmer. You really have to read the code it gives you and validate it yourself with your own brain. It's like having a trainee with you, and giving him small tasks every now and then.
 
Somehow I'm elitist because I posted the actual memory pressure of language models and IDEs.
I assure you there are many thousands, possibly millions, of people doing development work in VS Code on 8GB or 16GB machines and getting their daily job done fine.
 
  • Like
Reactions: iHorseHead
I assure you there are many thousands, possibly millions, of people doing development work in VS Code on 8GB or 16GB machines and getting their daily job done fine.

The subject of this thread is about using language models. I highlighted how much memory they use. There is no way to reduce that. A 2GB model will consume 2-3GB memory on top of your OS and application's memory use.

Then there are coding models ranging from 8 GB to over 30 GB. The size of the model on disk is around how much RAM it consumes.

You cannot run these models efficiently on a machine with under 16 GB of RAM. A smaller model will run, but the output will be very slow.

That's not an opinion. That's not eLitiSM. That's the way it is.
 
Are you talking about ChatGPT? It really depends. I'm using it often for small code snippets - sometimes it surprises me with very valid solutions I hadn't thought about.

As I said in my first reply, for small snippets these are very useful.

But if you're not a programmer, I don't think ChatGPT is a good solution

If someone isn't a programmer and just wants to learn a little coding, these things can be a good assist, as you know. Users can query Ollama; they don't need to use ChatGPT when they can do it quite well locally. The local models have improved a lot, though they are battery and memory hogs. Apple's is small enough that it doesn't impact battery life much.
 
  • Like
Reactions: PsykX
The subject of this thread is about using language models.
Yeah, and then you veered off and started talking about development in general, in post #6.

Not gonna respond to any more of your defensive nonsense, so bye bye.
 
  • Haha
Reactions: NEPOBABY
As I said in my first reply, for small snippets these are very useful.



If someone isn't a programmer and just wants to learn a little coding, these things can be a good assist, as you know. Users can query Ollama; they don't need to use ChatGPT when they can do it quite well locally. The local models have improved a lot, though they are battery and memory hogs. Apple's is small enough that it doesn't impact battery life much.
Yes to learn, AI models are great.
To understand how existing code works, AI models are also great.

I sometimes use it to add comments in my code too; it's very good at that - if you name your objects appropriately, that is.
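For example, with descriptive names like these (a made-up snippet, not from my project), the doc comments it generates come out accurate, because the signature already says what everything means:

```swift
import Foundation  // for pow

/// Returns the monthly payment for a fixed-rate loan.
/// - Parameters:
///   - principal: Total amount borrowed.
///   - annualRate: Yearly interest rate (e.g. 0.05 for 5%).
///   - months: Number of monthly payments.
func monthlyPayment(principal: Double, annualRate: Double, months: Int) -> Double {
    let r = annualRate / 12  // monthly interest rate
    guard r > 0 else { return principal / Double(months) }  // zero-interest case
    let factor = pow(1 + r, Double(months))
    return principal * r * factor / (factor - 1)
}
```

With a name like `f(a:b:n:)` instead, the same tools tend to produce vague or wrong comments.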

--

All I really want to see at this point is Swift Assist in action, outside of a prepared keynote. I'm tired of giving parts of my code to OpenAI, and I believe Swift Assist will be able to have a global comprehension of the project, which ChatGPT doesn't, because I'm not giving it my entire project. It's quite good at assuming things, though - I've got to give it that.
 
  • Like
Reactions: NEPOBABY
I believe Swift Assist will be able to have a global comprehension of the project

That could consume quite a lot of RAM, so be prepared for that. It depends on the size of the project.

edit: sorry I thought we were still talking local models!
 
I mean, they clearly said it's a code completion tool, but it still can't complete even simple one-liners in bigger projects, so 🤷🏻

Maybe it works better with an ObjC/C++ codebase, because Swift is unusable atm. It can't even follow my naming conventions, and it hallucinates non-existent functions (for example, I have a function that increases the read count on a Metal heap, and it tries to use Metal fences (wtf Apple, it's your own API, it should at least work?) to synchronize a non-existent queue ☠️).

Did anyone get it to work at least half decently? It seems like this model isn't properly parsing tokens in big projects, or just isn't using any tokens outside of the single file we're in. At least it can't get any worse 🤣.
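For reference, the kind of code in question is roughly this (a simplified, Metal-free sketch with made-up names - the real function operates on an MTLHeap): a plain read-count tracker, nothing that needs fences or queues.

```swift
// Hypothetical stand-in for the heap bookkeeping described above: a
// plain count of outstanding reads on a resource. No fences, no
// queues -- just an integer that guards recycling.
final class TrackedResource {
    private(set) var readCount = 0

    /// Increases the read count (the kind of one-liner the model
    /// was asked to complete).
    func beginRead() {
        readCount += 1
    }

    /// Decreases the read count; the resource may be recycled once
    /// the count reaches zero.
    func endRead() {
        precondition(readCount > 0, "unbalanced endRead")
        readCount -= 1
    }

    var isIdle: Bool { readCount == 0 }
}

let resource = TrackedResource()
resource.beginRead()
resource.beginRead()
resource.endRead()
// readCount is now 1; the resource is still in use
```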
 
Are you talking about ChatGPT? It really depends. I'm using it often for small code snippets - sometimes it surprises me with very valid solutions I hadn't thought about.

But if you're not a programmer, I don't think ChatGPT is a good solution, and that's what most people don't understand. It helps a programmer, it does not replace a programmer. You really have to read the code it gives you and validate it yourself with your own brain. It's like having a trainee with you, and giving him small tasks every now and then.
This is a topic that regularly drives me crazy.

AI will not replace engineers, developers, artists, or writers. At least on the technical side, at most it can help accelerate the creation of the "scaffolding" for a new project, and possibly serve as a muse for a new way of looking at a problem.
 
  • Like
Reactions: PsykX
It isn't available yet. No need to panic. No idea what you're seeing, but it isn't the Apple Intelligence-based completion.
 
Maybe it works better with an ObjC/C++ codebase, because Swift is unusable atm

I think the new Predictive Code Completion Model is for Swift only. At least that was my understanding from reading the description.
 
  • Like
Reactions: Exclave
I mean, they clearly said it's a code completion tool, but it still can't complete even simple one-liners in bigger projects, so 🤷🏻

Maybe it works better with an ObjC/C++ codebase, because Swift is unusable atm. It can't even follow my naming conventions, and it hallucinates non-existent functions (for example, I have a function that increases the read count on a Metal heap, and it tries to use Metal fences (wtf Apple, it's your own API, it should at least work?) to synchronize a non-existent queue ☠️).

Did anyone get it to work at least half decently? It seems like this model isn't properly parsing tokens in big projects, or just isn't using any tokens outside of the single file we're in. At least it can't get any worse 🤣.
Honestly, I think this isn't the right place to complain. First of all, it's not only beta software, it's the first semi-public iteration of it. Of course it's buggy. While I think it's fair to document its shortcomings, if you actually want to complain you really should use the Feedback app to inform Apple what functionality doesn't work as expected or desired, etc. This would even elevate this forum discussion, as you could reference the associated issue identifiers, so other users could elaborate on the problems by including those with their own reports. That way the issues gain momentum, and Apple is more likely to match users' expectations at release.

As a general rule of thumb: if you don't want the hassle of filing bug reports and elaborating on the issues, I don't think you're meant to run beta software. Nor should you generate unqualified noise by ranting about it, as this distracts developers from solving issues and releasing production-ready software in a timely manner.
 
As a general rule of thumb: if you don't want the hassle of filing bug reports and elaborating on the issues, I don't think you're meant to run beta software. Nor should you generate unqualified noise by ranting about it, as this distracts developers from solving issues and releasing production-ready software in a timely manner.
What are you talking about, man? You have some sort of delusions, or it's just fanboyism, dunno 🤷🏻

I just wrote about how the model behaves at the moment - its memory size and GPU usage - while asking people about their experiences and maybe some tricks to make the predictions more useful.

The chances that the model will be better in the next few weeks/months are minimal. Training new models is a long and costly process, especially with a limited dataset, which seems to be the case at least for this version. There are also all the other new AI features/models waiting to be added to the system, and Xcode was never a high priority, especially with such a big number of new system-wide features.

I also don't know why folks like you are always so defensive about a negative/neutral review of a product. This got released - it isn't alpha or some leaked code, it's officially built into the new Xcode version. Plus, not everything needs to be perfect, and even the best teams can make something bad. I've trained and implemented a lot of models myself that didn't go into production, and it's not the first time in ML history that a product isn't useful. This happens all the time; that's why it's R&D.

And getting triggered by such a simple discussion is bad for your health. Just go vent outside or on Twitter, idc. Just stop with the delusions ☠️
 