Um, I've got two of those things and all the knowledge at my fingertips, and I'm hopelessly unreliable, never meet a deadline, and hate working with other people. Also, my ex-girlfriend, a high-ranking academic in the social sciences, was utterly incompetent at everything, including producing papers that made any sense.

Do not hold us in high regard.
And yet you're likely still more reliable than someone who couldn't be bothered to stick it out through a university programme. It's obviously not a hard-and-fast rule; there are going to be exceptions and variance, but it's a general indicator.

This obviously isn't the only indicator either. There are plenty of trades and other paths that require years of training too, and plenty of valid reasons why someone might not complete such a course (financial difficulty, etc.). But if I were hiring someone and could look at nothing beyond their CVs, I'd probably pick the person who had completed a long-term programme over someone who hadn't (assuming they're equally qualified in the other ways that matter).
 
Taking a firm moral stance on an issue is not the function of LLMs. They always give these long-winded answers in the form of "here's a summary of the issue, here are the different sides/nuances, here's a tenuous conclusion". Obviously there are countless issues people believe should have firm "yes" or "no" answers, but an LLM isn't going to answer that way. They aren't designed to do so.
 
Whether you have high hopes for it or whether you're afraid of it, the truth is that AI is still very limited and we're very far off from the sci-fi utopia/dystopia people like to claim is coming.
😕 I want to replace my family doctor with the EMH (Emergency Medical Hologram). My family doctor is too polite. I need a little snark from the Doc to keep me from repeating stupid activities that result in the same injury over and over again.😏 I want a doctor with the withering wit of Winston Churchill to shame me out of being stupid.😳
I don't like it. Back when I was in school we had computers (Apple II, Mac LC III), but I had to learn how to do research, problem-solve and so forth. We had no Siri.
[CURMUDGEON]You tell those ankle biters.😄[/CURMUDGEON]
One thing I have noticed is that people these days do not read like they used to and are very, very ignorant in many subjects outside technology. Also, many of them did not grow up with a good work ethic.
Well, I'm not going to blame the kids. That's the way we raised them, so look in the mirror before blaming the younger generation for their reliance on technology. Growing up, the adults scoffed at my generation. "Kids with their pencil and paper. Back in my day, we had chalk and slate."😁
 
Interestingly, this is one of the things that might actually kill the current ML craze. To make auditable decisions you need to record the entire state of the system, and the system needs to be deterministic. Neither is necessarily the case with current models, and keeping all state would mean storing the entire model plus the inputs at the time of execution (roughly the kind of record sketched below).

We tried various ML solutions for financial risk modelling and none of the models passed our test suite for making a capable, informed decision about risk. One of the finest screw-ups was a terrorist money-laundering scenario that it judged to look like a low-risk business deal. That shouldn't even have been possible after pumping hundreds of thousands of rule-verified training examples into it. Total disaster. It literally hallucinated the output.
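
To make that point about recording state concrete, here is a minimal sketch in Python of what an auditable decision record could look like. Everything in it is invented for illustration (the WEIGHTS dict, score_risk, audited_decision, the 1.0 threshold); a real system would fingerprint the actual trained weight file, pin library versions, and capture every source of randomness (seeds, sampling temperature) so the exact output could be replayed.

[CODE]
# Minimal sketch of an auditable inference record, using a deterministic
# stand-in model. A real deployment would hash the actual weight file and
# pin library versions as well.
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical fixed weights standing in for a trained risk model.
WEIGHTS = {"txn_amount": 0.004, "country_risk": 0.9, "new_counterparty": 0.5}

def weights_fingerprint(weights: dict) -> str:
    """Hash the exact parameters used, so the decision can be tied to them."""
    return hashlib.sha256(json.dumps(weights, sort_keys=True).encode()).hexdigest()

def score_risk(features: dict) -> float:
    """Deterministic linear score: same inputs + same weights -> same output."""
    return sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)

def audited_decision(features: dict) -> dict:
    """Return the decision together with everything needed to replay it."""
    score = score_risk(features)
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_fingerprint": weights_fingerprint(WEIGHTS),
        "inputs": features,
        "score": score,
        "decision": "flag_for_review" if score > 1.0 else "low_risk",
    }

if __name__ == "__main__":
    record = audited_decision(
        {"txn_amount": 250.0, "country_risk": 0.8, "new_counterparty": 1.0}
    )
    print(json.dumps(record, indent=2))  # store the whole record, not just the verdict
[/CODE]

The toy model isn't the point; the point is that the full record gets stored and that rerunning audited_decision with the same inputs and the same fingerprint must give the same answer, which is exactly what large stochastic models make hard.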
Many companies are willing to sacrifice auditable decisions, and to me this is where the real threat of AI lies. If you have an opaque, black-box system making important determinations that affect people’s finances, health, and so on, you get into some dangerous territory, like Kafka’s The Trial made manifest.
 
All technology is a threat to society, because human beings in large groups, especially when tangible consequences for actions are removed from the equation, are for the most part stupid, paranoid, tribal, cruel, self-centered, and yet (almost ironically) easily manipulated. Look at social media: it is basically a child's toy, but it has been used to influence election results, sway public opinion on important matters, and even undermine democracy in more ways than people care to realize.

Generative AI will be horrible. It will turn the internet, once a technology with the potential to put all of mankind's collective knowledge in the palms of our hands, into a useless wasteland when it comes to actual information. It will essentially turn the internet into cable TV: for entertainment purposes only.

When it comes to jobs, AI might be able to replace people doing repetitive computer-based administrative work. Which, unfortunately, is a lot of people, and a lot of those people are well compensated. This will be a huge shock for them eventually. For example, there are a good number of individuals in the organization I work for who mostly work from home. The work they do is mostly writing emails and basic spreadsheets/scheduling, maybe taking minutes for Zoom meetings. They are paid close to $100,000 CAD, with full pension and benefits. They could be replaced by software tomorrow and no one would know the difference.
 
There are a good number of individuals in the organization I work for who mostly work from home. The work they do is mostly writing emails and basic spreadsheets/scheduling, maybe taking minutes for Zoom meetings. They are paid close to $100,000 CAD, with full pension and benefits. They could be replaced by software tomorrow and no one would know the difference.
Damn! I want one of those jobs. I guess I’m already late :rolleyes:

By the way, I agree with you. Generative AI is going to ruin the Internet as an information tool/source.

If AI keeps progressing and developing at this pace, the disruption to society will be felt soon. The other scenario is that AI plateaus at its current state, if the big companies get stuck trying to scale the current models or find better ones.
 
I’m one of those people, I suppose. I can’t be replaced by AI, because the thing I add is intelligence, and our current “AI” systems are not intelligent but a bunch of linear equations and weights that puke out plausible-sounding things that people without knowledge can’t seem to distinguish from intelligence. People with knowledge in the fields in question know exactly where this isn’t going and how dangerous it is. The marketers and grifters, however, play purely on people’s lack of understanding of how absolutely crap this technology is.
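
For anyone wondering what “a bunch of linear equations and weights” looks like in practice, here is a deliberately tiny caricature in Python. It is not how a real model works (real LLMs stack many layers with attention and nonlinearities, trained on vast corpora), and the vocabulary and weights below are random, invented for the sketch; the only point is that output comes from weighted sums feeding a probability distribution over the next token, with nothing checking it against reality.

[CODE]
# Toy caricature of next-token generation: one linear map plus softmax.
# Vocabulary and weights are invented/random for this sketch.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "risk", "is", "low", "high", "deal", "looks", "safe"]
d = 16  # embedding size

# Random parameters stand in for whatever training would have produced.
embeddings = rng.normal(size=(len(vocab), d))
W = rng.normal(size=(d, len(vocab)))

def next_token(context: list) -> str:
    """Average the context embeddings, apply one linear map, sample from softmax."""
    h = embeddings[[vocab.index(t) for t in context]].mean(axis=0)
    logits = h @ W
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(vocab, p=probs)

context = ["the", "deal", "looks"]
for _ in range(4):
    context.append(next_token(context))
print(" ".join(context))  # fluent-looking output, but nothing verified it
[/CODE]

The generated phrase can read fluently, which is exactly the gap between sounding plausible and actually knowing anything.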
 

Is AI a threat to society?


Well, the projects my brother is leading include the first and second exascale supercomputers (Frontier and El Capitan) and the next-gen processors for OpenAI.

Also, my birthday is August 29. Judgment Day, according to the Terminator franchise.

Make of these facts what you will.

EDIT: But seriously? No. Climate change will do us in long before AI is actually AI and not a glorified copy-paste engine. In that regard, we are the biggest threat to ourselves and every other living thing on the planet.
 