
AI is Pandora's box, and we're just salivating at the thought of unlocking it without thinking things through...
We're way past that point; the only concerns are greed and power. Just a bunch of immature monkeys juggling weapons of mass destruction. The first step would be to curb the urge to kill one another; the second, to actually try understanding one another...

Q-6
 
I’m playing a game called Detroit: Become Human, an intriguing scenario that portrays a future where androids are commonplace and fill a variety of human jobs and needs. Of interest, you play the roles of three androids, though from a single perspective at a time, per chapter. Many humans resent androids for a variety of grounded, justifiable reasons, such as job loss.

Beyond the mechanics of controlling an android, the game almost lets you see the world as a programmed being who (by whatever means are required to achieve it) reaches a level of self-awareness, consciousness, and empathy for others, along with a desire to be something other than a slave. This, arguably, is the challenge and danger of programming something that emulates a human. See the game thread in the Game forum.

I did not scan back through this thread to see if it has been previously mentioned, but for another excellent example/portrayal of the danger and challenge of AI, watch Ex Machina (2014).

 
Star Trek showed us both the potential and the dangers of AI. The computer aboard Federation ships is fully capable of hosting self-aware AI, since using only a fraction of its potential it can create one, i.e., the Doctor in Voyager and Professor Moriarty in TNG. The danger of AI was showcased by the M5 in TOS: The Ultimate Computer.

What they learned from the M5 incident caused the Federation to prevent their ships' computers from becoming self-aware AI. Can you imagine the chaos if a ship's computer refused to fire upon hostile aliens because of the peaceful philosophy fostered by the Federation? Or worse, if the ship decided that humans were a blight on a planet and chose to sanitize all human presence there?

It seems the only time AI poses a danger is when it becomes self-aware, like Skynet (Terminator) or the Machines (The Matrix). Or, if we're lucky, we get Gort, who is strict but not homicidal. Or the Iron Giant, a weapon that doesn't want to be a weapon.
 
Artificial intelligence is nothing more than software. All of the software I've seen in the past 30 years is buggy and needs human intervention to continue operating. Some people are afraid that artificial intelligence will suddenly become self-aware, "rise up," and harm humans. Humans don't even understand how self-awareness works; there's no way we could possibly impart it to software. Calm down, folks: artificial intelligence will never suddenly become self-aware. I think it is best to avoid operating in fear and instead educate yourself as to how things work. Otherwise you will always be at the mercy of any human smarter than you, and that should never be allowed.
 

I'm far more concerned about "AI" systems getting plugged into critical infrastructure and cementing human-implemented biases in a way that becomes nearly impossible to root out or course-correct down the line.

Much like trying to understand the actual opportunity costs of roads not taken, it can be nearly impossible to identify systemic discrimination of various forms, let alone correct for it.

(Far beyond racial matters... discrimination occurs across a broad range of spectrums having nothing to do with our present hot-button political moment.)
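To make that concrete, here is a minimal toy sketch (Python with scikit-learn; the data, the 0.8 penalty, and the feature names are all made up for illustration) of how a bias baked into historical human decisions survives retraining, even after the sensitive attribute itself is deleted, because a correlated proxy feature leaks it back in:

```python
# Toy sketch, entirely synthetic data: a model trained on historically
# biased decisions reproduces that bias even when the sensitive attribute
# is dropped, because a correlated proxy feature (e.g. zip code) remains.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)              # sensitive attribute (0 or 1)
zip_code = group + rng.normal(0, 0.3, n)   # proxy correlated with group
skill = rng.normal(0, 1, n)                # the legitimate signal

# Historical labels: past human reviewers penalized group 1 regardless of skill.
approved = ((skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0).astype(int)

# Train WITHOUT the sensitive attribute: only skill and the proxy.
X = np.column_stack([skill, zip_code])
model = LogisticRegression().fit(X, approved)

# The trained model still approves group 0 far more often.
for g in (0, 1):
    mask = group == g
    print(f"group {g}: approval rate {model.predict(X[mask]).mean():.2f}")
```

Once something like this is scoring applications inside a critical pipeline, the skew is invisible unless someone goes looking for it, which is exactly the root-it-out problem above.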
 
I don't think that will happen. I think the worst that can happen is that we plug a broken system (software) into a critical system and it screws up the critical system. But it's never going to "take over".

Software is broken because humans are dumb and lazy. I still don't understand why humans would think that a broken system left alone will automatically repair itself and become perfect. That's like thinking a tornado could blow through a football field full of random airplane parts and leave behind a perfectly assembled airplane, fueled up and ready to fly. It's never going to happen.
 
For me the issue is not the real ability of AI but the misplaced belief of the many people who put too much faith in the capabilities of AI without acknowledging its real limitations. The problem is not AI but people, who don't realize the limitations of a new tool.

An additional “people problem” is using the excuse of “training AI” as a rationale for taking other people’s work without just compensation. Once again, not a problem with AI but with people.
 

A number of AI video generators, mostly released by Chinese companies, lack the most basic guardrails that prevent people from generating nonconsensual nudity and pornography, and are already widely used for that exact purpose in online communities dedicated to creating and sharing that type of content.

 
You could say a similar thing about a human brain. It's just a bit of electricity, proteins, and water.
No, just no. Have you ever studied how proteins are created and how amino acids work? Ask a biochemist to teach you how amino acids are used to create sustainable proteins and you’ll see why my post is relevant. And, yes, I went to medical school.
 
I Have No Mouth, and I Must Scream by Harlan Ellison: an award-winning horror/sci-fi short story from 1967, critically acclaimed for its exploration of the potential perils of artificial intelligence and the human condition. Ellison later adapted it into a video game, published by Cyberdreams in 1995; he co-authored the expanded storyline, wrote much of the game's dialogue, and voiced AM, the artificial intelligence.

Read by Harlan Ellison
Fairly short at just over 40 minutes.

Play the game
Listen and/or play, and you will easily visualise the potential dangers of artificial intelligence... Cogito, ergo sum

AM, which was constructed by man, is thus flawed...

Q-6
 
Personally, I don't think the feared scenario of AI becoming self-aware will ever happen.

Rather, the dangers of AI are far more mundane. These include the manipulation of society (people), increasing levels of incompetence in many areas (writing, math skills, computer coding, the production of art and music... the list goes on), the propagation of increasing amounts of misinformation, the fostering of "lazy" people who lack critical reasoning and problem-solving skills, etc.


richmlow
 
As a little test, I asked Google Gemini 2.0 to tell me which museums here are open on Mondays.

It had some really nice text, suggesting that one should always check the museum websites in case of special opening hours, and especially if making a trip to a city to see a particular exhibition.

It then listed two museums here. One is open, one isn't. So, a 50% hit rate. Worse than that, it missed about four museums that are open around here on Mondays. The museum it got wrong is never, and has never been, open on a Monday.

In fact, just googling "<city> museums open on Monday" gave me an official website which correctly lists them all. Somehow, Google's Gemini couldn't even use Google's search engine to work out an answer.

So, at the moment, I totally agree with @richmlow above. The danger is in being given convincing, but totally crap, information.
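For anyone who wants to run the same kind of spot-check, here's a minimal sketch of the idea in Python. ask_model() is a hypothetical stand-in for whatever chatbot or API you're probing (I'm not assuming any particular vendor), and the museum names are invented; the point is simply to score the model's answer against an authoritative list rather than trusting its confident tone:

```python
# Minimal sketch of the spot-check described above. ask_model() is a
# hypothetical placeholder; the museum names are invented for illustration.
OFFICIAL_OPEN_ON_MONDAY = {"City History Museum", "Museum of Modern Art",
                           "Natural History Museum"}

def ask_model(prompt: str) -> set[str]:
    """Hypothetical: send the prompt to a model, parse museum names from the reply."""
    return {"City History Museum", "Maritime Museum"}  # e.g. one right, one wrong

answer = ask_model("Which museums in <city> are open on Mondays?")

correct      = answer & OFFICIAL_OPEN_ON_MONDAY
hallucinated = answer - OFFICIAL_OPEN_ON_MONDAY   # confidently stated, but wrong
missed       = OFFICIAL_OPEN_ON_MONDAY - answer   # open, but never mentioned

print(f"correct:      {sorted(correct)}")
print(f"hallucinated: {sorted(hallucinated)}")
print(f"missed:       {sorted(missed)}")
```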
 
The danger of A.I. lies in the uses mankind will put it to. Should there be a singularity event and A.I. become self-aware, it will basically ignore us, as we will be no threat to such an entity, unless we as a species pose a physical threat...

Q-6
 
Same here: I've probed multiple A.I.s with similar results. A.I. has its uses, but it's not going to replace us anytime soon. I have over 50GB of A.I. models onboard, and they have their uses; everything runs offline and is therefore private. I view A.I. as more of a questionable database: sometimes it's right and sometimes it's wrong, and ultimately the decision is mine to take.
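In case it's useful to anyone, here's a minimal sketch of the kind of offline setup I mean, assuming the llama-cpp-python package and a GGUF model file already on disk (the model path below is a placeholder, not a specific recommendation):

```python
# Minimal sketch: querying a local model entirely offline with
# llama-cpp-python. Nothing leaves the machine, which is the private part.
from llama_cpp import Llama

llm = Llama(model_path="./models/some-model.gguf", verbose=False)

reply = llm(
    "Which museums in my city are open on Mondays?",
    max_tokens=256,
)
print(reply["choices"][0]["text"])
# Treat the output as a "questionable database": a starting point to verify
# against an authoritative source, never a final answer.
```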

Q-6
 