I'm already using it to assist with writing change management requests, querying documents, and generating code boilerplate. But you don't need to go that far.
LLMs are great at interpreting language. Random example: a plumber or other tradesman could feed one the design document and then just run natural language queries against it to answer questions, with a citation from the document where the information was found (potentially from his smartwatch, while he's working). That's not even complex AI, that's just document summarisation and querying. And it's something that could be done today with some minor, minor pieces of glue to tie, say, Siri on the watch into ChatGPT.
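That glue really is minor. Here's a rough sketch of the whole loop, assuming the OpenAI Python client; the model name, the prompt wording and design.txt are placeholders, not anything specific:

```python
# Rough sketch: answer a tradesman's question against a design document
# and ask for a citation. Assumes the OpenAI Python client is installed;
# "gpt-4o-mini" and design.txt are stand-ins, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

document = open("design.txt").read()

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using only the document below. "
                        "Quote the passage you relied on as a citation.\n\n"
                        + document},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What size is the hot water feed to the upstairs bathroom?"))
```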
If you aren't investigating how these new models can help with your day-to-day work now, you're well behind the curve - you're like the people hanging onto their electric typewriters for writing business letters because they don't trust these newfangled computers.
Work smarter, not harder.
Let's throw those use cases on the table, shall we, and look at them objectively:
1. Change management requests. If this is a change control board sort of thing under ISO 9001 then the process should be fairly well established and simply defined with a form. A Word template is fine for that stuff - just fill the fields in yourself (a minimal sketch follows this list). The complicated bit is the impact assessment of the change, which is domain-specific and outside the model's scope of knowledge. If you use an LLM, the format is generated afresh each time, so the output will not necessarily be consistent between change control documents.
2. Querying documents. As LLMs are just statistical representations of the tokenised documents in the first place, they aren't necessarily capable of picking up things such as nuance and idioms, which can change the context of the writing completely. On top of that, there is no guarantee the interpretation is correct. In fact I saw someone shoot themselves in the foot by being utterly lazy and asking an LLM to do exactly that - it blew their toes right off, and in this case it resulted in them being fired.
3. Generating code boilerplate. If you need lots of boilerplate, the fault is with the programming language. This is a crutch, a result of our terrible programming languages, which haven't evolved much since ALGOL 68 and Smalltalk other than wrapping crap syntax around them. Better to look at some really old papers on abstracting intent through domain specific languages, for example. I've written a couple of DSLs over time which describe things like state machines, workflows, user interfaces and mathematical constraints (think CAS etc.) and compile down to native code (see the toy sketch after this list). That is better than writing boilerplate. Describe your problems better!
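On point 1, the traditional method is about ten lines. A minimal sketch using Python's string.Template - the field names are illustrative, not from any particular standard - and every document comes out with exactly the same shape:

```python
# A deterministic alternative to LLM-generated change requests:
# fill a fixed template yourself so every document is identical in form.
# Field names here are illustrative, not from any particular standard.
from string import Template

CHANGE_REQUEST = Template("""\
Change Request: $cr_id
Requested by:   $author
Description:    $description
Impact:         $impact
Rollback plan:  $rollback
""")

print(CHANGE_REQUEST.substitute(
    cr_id="CR-2024-017",
    author="J. Bloggs",
    description="Replace pump controller firmware v1.2 -> v1.3",
    impact="Line 3 offline for 20 minutes during the change window",
    rollback="Reflash the v1.2 image held on the maintenance laptop",
))
```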
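On point 3, to make the DSL idea concrete: a toy state machine DSL. My real ones compile down to native code; this sketch just interprets the description, which is enough to show that you declare the machine instead of hand-writing if/else boilerplate:

```python
# A toy internal DSL for state machines, in the spirit of
# "describe your problems better". Interpreted here for brevity.
class StateMachine:
    def __init__(self, start):
        self.state = start
        self.transitions = {}  # (state, event) -> next state

    def on(self, state, event, target):
        self.transitions[(state, event)] = target
        return self  # allow chained declarations

    def fire(self, event):
        self.state = self.transitions[(self.state, event)]
        return self.state

# The whole machine is declared, not coded as branching boilerplate.
door = (StateMachine("closed")
        .on("closed", "open",   "open")
        .on("open",   "close",  "closed")
        .on("closed", "lock",   "locked")
        .on("locked", "unlock", "closed"))

door.fire("open")    # -> "open"
door.fire("close")   # -> "closed"
print(door.state)
```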
Really your options when it comes to LLMs are:
1. Trust the ramblings of a non-deterministic statistical crackhead implicitly and entirely forget about the risks. This appears to be everyone's naive approach, and it incurs nothing but risk.
2. Validate the statistical crackhead's ramblings, which is considerably harder and more time-consuming than the initial task you asked it to perform in the first place. This incurs the cost of the LLM plus additional time over doing the work yourself.
3. Do the work yourself in the first place and look at traditional methods of automating it (templates / don't bother / think for 5 minutes).
An LLM is absolutely fine if you need to generate something which has no measurable veracity or trust, does not need validation and has no value.
And that is of no use whatsoever to most people. Really, people lack the objectivity and the ability to critically evaluate whether technology is effective, blindly trusting the "boilerplate" marketing out there.
In my field, mathematics and statistics, we even have some people so enamoured with it that they have stopped all functional work and research to go all in on this, because the marketing is driving research budgets now.
And please don't give me the "rapid improvements" thing, because every 5% gain in accuracy above 50% (which is terrible) doubles the model generation cost, increases the execution cost and requires more training data. Thus it'll likely never get to 90% before we run out of money, energy and information to feed the grifting sharks with.
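To put numbers on that doubling claim: getting from 50% to 90% at 5 points per doubling is eight doublings, i.e. 256x the starting cost:

```python
# Toy arithmetic for the claim above: if cost doubles for every
# 5-point accuracy gain past 50%, then 50% -> 90% needs
# (90 - 50) / 5 = 8 doublings, i.e. 2**8 = 256x the starting cost.
cost = 1.0
for accuracy in range(55, 95, 5):
    cost *= 2
    print(f"{accuracy}%: {cost:.0f}x the cost at 50%")
```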
I await the market crash and next AI winter. I can't wait to short the hell out of that, make a ton of money, then buy myself a nice new Herman Miller Aeron on the cheap from one of the AI startups that went under (as my current one is wearing out from the blockchain crash) and get my nice burger served by a former £120k-salaried prompt engineer.
Rant over.