
ThunderSkunk

macrumors 68040
Dec 31, 2007
3,880
4,187
Milwaukee Area
There is NO intelligence in AI, I mean none. AI is only databases and statistics.
You can say the same thing about organic intelligence. That barely begins to touch the discussion about the nature and utility of intelligence, a discussion AI is already out ahead of.
 

cthompson94

macrumors 6502a
Jan 10, 2022
809
1,163
SoCal
I listened to an interview this morning on NPR, where an expert from an algorithm institute in Canada (sorry, I did not catch the institute or his name) claimed that we have entered the danger zone with AI: AI can be programmed to seek and formulate its own goals, and the danger is handing it agency, the ability to make changes independently.

The expert cited an example where a Russian early-warning system signaled an ICBM launch from the United States, and the officer who was in the position to push the button did not, because he said it did not feel right. The early-warning system was in error; there was no launch from the US, and a machine programmed to respond independently would have sent nuclear missiles at the US.


A different interview:

Leading experts warn of a risk of extinction from AI


In a recent interview with NPR, Hinton, who was instrumental in AI's development, said AI programs are on track to outperform their creators sooner than anyone anticipated.

"I thought for a long time that we were, like, 30 to 50 years away from that. ... Now, I think we may be much closer, maybe only five years away from that," he estimated.

Dan Hendrycks, director of the Center for AI Safety, noted in a Twitter thread that in the immediate future, AI poses urgent risks of "systemic bias, misinformation, malicious use, cyberattacks, and weaponization."
The risk is only there if militaries deploy a point-blank system rather than one with redundant checks and balances. In my time in the Navy, the various weapon and weapon-support systems used AI to identify potential threats based on tons of parameters and information about various countries' equipment, but the system was designed to pop up the flagged threat along with its reasoning, and it took human interaction to confirm the hostile designation. There were further checks and balances before actually firing something like a missile.

I personally feel it would take an incredibly stupid leader to claim that one system's AI is powerful enough to basically know what every person on Earth is thinking. As long as there is a human on one end of the fire button, no matter how powerful the AI is, it will never know what that person is thinking; all it can do is work from parameters, and anomalies happen. Going back to my Navy example, I remember that sometimes a really dense cloud formation could cause enough return noise that the AI would think something was there, but a trained human looking at the raw video from the radar could tell it was a cloud formation. I'm sure that given thousands of clouds and enough data the AI would get better, but it would still have to look at that raw data, with its varying levels of return noise, and make a decision, which currently is made by a human.
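
For illustration, a minimal sketch (in Python, with hypothetical names and threat scores, not the actual Navy system) of that confirm-before-fire pattern: the automated stage only flags and explains, and nothing is designated hostile without a human answering yes.

Code:
from dataclasses import dataclass

@dataclass
class Contact:
    track_id: str
    threat_score: float  # 0.0-1.0, from the automated classifier
    reasoning: str       # why the system flagged this contact

def classifier_flags(contact: Contact, threshold: float = 0.8) -> bool:
    # The automated stage: flag potential threats, never act on them.
    return contact.threat_score >= threshold

def human_confirms(contact: Contact) -> bool:
    # The human stage: show the flag and its reasoning, require explicit confirmation.
    print(f"Track {contact.track_id} flagged: {contact.reasoning}")
    return input("Designate hostile? [y/N] ").strip().lower() == "y"

# A dense cloud can score high with the classifier yet fail human review.
cloud = Contact("T-042", 0.91, "large slow-moving return, no transponder")
if classifier_flags(cloud) and human_confirms(cloud):
    # A real system would add further checks (release authority,
    # rules of engagement) before anything is actually fired.
    print(f"Track {cloud.track_id} designated hostile; passing to fire-control checks.")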
 

jamisonbaines

macrumors 6502
Dec 14, 2007
310
148
CA
The risk is only there if militaries deploy a point-blank system rather than one with redundant checks and balances.

Glad someone finally mentioned weaponized drones after half a page discussing 10-year-old robot-girlfriend movies.

At what point do some of the billionaires with private security start using this sort of thing? In some parts of the world it seems like it's only a matter of time.
 
  • Like
Reactions: drrich2 and Huntn

cthompson94

macrumors 6502a
Jan 10, 2022
809
1,163
SoCal
Glad someone finally mentioned weaponized drones after half a page discussing 10-year-old robot-girlfriend movies.

At what point do some of the billionaires with private security start using this sort of thing? In some parts of the world it seems like it's only a matter of time.
Hahaha, I was thinking the same thing scrolling through all those basically what-if movies!

I think it would come down to federal/state/local laws when it comes to property, especially if you are talking along the lines of defense. Even states that have castle doctrines have guidelines on what qualifies, like in the event of shooting someone on your property. Personally, I doubt any company, be it a software company (for the AI) or a physical company in the case of private security, would put their insurance to work like that in the event that their “solutions” basically mow down people.

A less hostile version of AI that could be implemented in home or private security is real-time AI that basically brings a smart home into a lockdown mode to prevent break-ins.
 
  • Like
Reactions: Huntn

Huntn

macrumors Core
Original poster
May 5, 2008
23,617
26,741
The Misty Mountains
AI becomes dangerous when it is given agency: the ability to act independently, to take measures into its own hands that could harm people or infrastructure, or in some way control or adversely affect our lives. I’m not saying this is currently happening.
 

LambdaTheImpossible

macrumors regular
Aug 22, 2023
114
512
I love how we come up with these scenarios for the end of the world due to ML (I'm not going to say AI because this is not intelligence).

The realistic outcome is probably far more boring. We will end up feeding the crap it generates back into more hallucinating models, which slowly disintegrate rational information like a repetitively re-encoded JPEG. Eventually no one will remember what came from ML and what came from humans: research, careful analysis, and editing. All information will be compromised and useless because the signal-to-noise ratio is so poor. We'll enter a literary, intellectual, and artistic dark age, which I dub "the age of stupid". We're already there in some ways due to the algorithmic self-decay of social media.
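
That feedback loop is easy to demonstrate in miniature. A toy sketch (assuming a simple Gaussian stands in for "information"): fit a model to data, sample from it, refit on the samples, and repeat; the spread of the data tends to collapse, generation after generation.

Code:
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=20)  # a small "human-made" dataset

for generation in range(1, 51):
    mu, sigma = data.mean(), data.std()    # "fit a model" to the current data
    data = rng.normal(mu, sigma, size=20)  # "train" the next generation on its output
    if generation % 10 == 0:
        print(f"gen {generation:2d}: sigma = {sigma:.3f}")

# sigma tends to shrink from one generation to the next: each pass loses
# tail information, the statistical analogue of a re-encoded JPEG.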

I suspect it'll lead to strict regulation at some point which will shut down most social media and make ML a regulated industry with limited social and technical uses.
 

Mousse

macrumors 68040
Apr 7, 2008
3,520
6,760
Flea Bottom, King's Landing
The risk is only there if militaries deploy a point-blank system rather than one with redundant checks and balances.
That's the whole premise of how Skynet came about. At least the Cylons (original series) broke their programming and killed off their creators. Well, that danger is always possible.😬
At what point do some of the billionaires with private security start using this sort of thing? In some parts of the world it seems like it's only a matter of time.
At that point, I would find a time machine and use it to send someone with a thick Austrian accent back in time to kill off said billionaire's grandfather or mother.
 

cthompson94

macrumors 6502a
Jan 10, 2022
809
1,163
SoCal
That's the whole premise of how Skynet came about. At least the Cylons (original series) broke their programming and killed off their creators. Well, that danger is always possible.😬
Very true, lol. Although personally I think a Skynet situation is less likely; I can see a possibility more along the lines of I, Robot. I can see someone or some company creating these amazing robots, and the founder(s) wanting a special one, or just creating a more advanced one that backfires. Now, as @LambdaTheImpossible said, does every scenario have to be doom and gloom? Of course not; our fear of AI draws mainly from Hollywood, and of course it has to be made interesting. I would like to think of something along the lines of I, Robot, but instead of attacking people and basically becoming like Planet of the Apes, maybe just not listening, or something like that, from one AI (a robot in this case) being more advanced.
 

Thirio2

macrumors regular
Jun 27, 2019
182
110
Maryville, IL
AI is a tool! The problem is that it is becoming too easy to misuse. The genie is out of the bottle, so there really is no going back. The best we can hope for is a reliable way to detect misinformation. We have learned that we can’t rely on government or news media, so we will have to be super vigilant on our own. As an engineer, my “spider sense” goes off whenever I see a new correlation published. Too many people throw data into a Mathcad program and blindly accept what it spits out, whether it makes sense or not.
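
That spider sense is easy to justify with a toy sketch (pure noise, nothing else assumed): test enough unrelated variables against each other and some pair will look impressively correlated.

Code:
import numpy as np

rng = np.random.default_rng(42)
noise = rng.normal(size=(50, 30))  # 50 variables of pure noise, 30 observations each

# Find the most correlated pair among all 1,225 pairings.
best = max(
    (abs(np.corrcoef(noise[i], noise[j])[0, 1]), i, j)
    for i in range(50) for j in range(i + 1, 50)
)
r, i, j = best
print(f"variables {i} and {j}: |r| = {r:.2f}")

# The winning |r| routinely lands around 0.6 or higher, and a plot of
# those two series would look convincing despite meaning nothing.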
 
  • Like
Reactions: cthompson94

cthompson94

macrumors 6502a
Jan 10, 2022
809
1,163
SoCal
AI is a tool! The problem is that it is becoming too easy to misuse. The genie is out of the bottle, so there really is no going back. The best we can hope for is a reliable way to detect misinformation. We have learned that we can’t rely on government or news media, so we will have to be super vigilant on our own. As an engineer, my “spider sense” goes off whenever I see a new correlation published. Too many people throw data into a Mathcad program and blindly accept what it spits out, whether it makes sense or not.
I certainly agree AI is a major tool that will revolutionize progress in many industries. One of them is the medical industry, with running simulations; I believe there are tests now using AI to aid in combating cancer. Even gaming can benefit, with more robust NPCs rather than just prewritten phrases. I do think we need well-written regulations, though, because of things like copyright, with the music, art, and other industries suffering a lot (not talking just big names, either) due to people using AI to create things while feeding the AI other people's work. And at what point do you draw the line between copying and inspiration, especially when it comes to AI?
 

Chuckeee

macrumors 68020
Aug 18, 2023
2,025
5,696
Southern California
No one has mentioned GIGO (garbage in, garbage out), which is one of the major limitations of machine learning in general, including AI. GIGO appears to be one of the sources of the problems with current controversial AI results. To make AI seem knowledgeable, models are fed huge amounts of data without any regard to the quality of the input (a quantity versus quality concern).

But to be fair, that is a problem for some people too 😛
 
Last edited:

cthompson94

macrumors 6502a
Jan 10, 2022
809
1,163
SoCal
No one has mentioned GIGO (garbage in, garbage out), which is one of the major limitations of machine learning in general, including AI. GIGO appears to be one of the sources of the problems with current controversial AI results. To make AI seem knowledgeable, models are fed huge amounts of data without any regard to the quality of the input (a quantity versus quality concern).

But to be fair, that is a problem for some people too 😛
Wouldn't this mainly apply to mainstream, general-public AI use? Scientists using AI for medical or research purposes would certainly be very selective about the data pool used. I think the main culprits for what you describe are the startups trying to get their name out there, advertise to the masses, and get their AI model in the ears of investors.

I certainly think AI has an interesting path ahead because of what you said about feeding AI lots of data. I've noticed a recent ruling from a judge that an AI-generated image couldn't be copyrighted since the data it used was not original. And even if a company used all open-source material, what would happen if another AI company used the same set of images and a different AI system came up with the same image? I know that scenario is extremely unlikely, but it's not impossible, especially given the same data set.

I wonder if someone could copyright an AI-created photo if the data used were all originals.
 

chown33

Moderator
Staff member
Aug 9, 2009
10,790
8,515
A sea of green
It depends on how it's claimed:

TLDR: a machine can't be the "author", but the human who created the machine can be.

  • Like
Reactions: cthompson94

Chuckeee

macrumors 68020
Aug 18, 2023
2,025
5,696
Southern California
Wouldn't this mainly apply to mainstream, general-public AI use? Scientists using AI for medical or research purposes would certainly be very selective about the data pool used. I think the main culprits for what you describe are the startups trying to get their name out there, advertise to the masses, and get their AI model in the ears of investors.

I certainly think AI has an interesting path ahead because of what you said about feeding AI lots of data. I've noticed a recent ruling from a judge that an AI-generated image couldn't be copyrighted since the data it used was not original. And even if a company used all open-source material, what would happen if another AI company used the same set of images and a different AI system came up with the same image? I know that scenario is extremely unlikely, but it's not impossible, especially given the same data set.

I wonder if someone could copyright an AI-created photo if the data used were all originals.
One would hope medical uses of AI would draw on a more selective source for training. While that appears true in some cases (e.g., mammogram cancer screening), I would still have concerns about GIGO being used to justify quack medical claims, e.g., all those bogus COVID-19 treatments: colloidal silver solution, oleandrin, chloroquine, hydroxychloroquine, and ivermectin.
 
Last edited:

cthompson94

macrumors 6502a
Jan 10, 2022
809
1,163
SoCal
It depends on how it's claimed:

TLDR: a machine can't be the "author", but the human who created the machine can be.
Thanks for that link, it was a nice read
One would hope medical uses of AI would draw on a more selective source for training. While that appears true in some cases (e.g., mammogram cancer screening), I would still have concerns about GIGO being used to justify quack medical claims, e.g., all those bogus COVID-19 treatments: colloidal silver solution, oleandrin, chloroquine, hydroxychloroquine, and ivermectin.
Those things will happen regardless, but I understand the notion: hopefully it doesn't spark many more people to believe in false or untested "research". This one is hard to guess, because the base audience in your example were, and are, against AI and already think technology is being snuck into our bodies via the COVID shot, yet they will also ingest or do something just on word of mouth from someone "trusted", even though the scientific research says otherwise. There is already such a big distrust in what science says (funny enough, only the science that gets mixed in with politics) that I honestly don't know whether AI would make it even worse. Just look at your example: one virus, and you could probably name those five "treatments" easily, without even counting the endless others that were tried without AI.
 

Queen6

macrumors G4
A.I. has the potential in the long run to be as good or as bad as we are; equally, we are very far from that in 2024. Currently, the A.I. models I've used are both impressive and interesting. Graphic upscaling tools do what they do and are getting far better at recognizing specifics to produce the best image quality.

The latest chat-bots can be very convincing in general conversation and questions, even complex conversation. But get them onto a track where they are not able to source data from the web, i.e. something unique in nature, either words or images, and they fall down fast, as their only point of reference is your input. You will find they recycle your words, since they have no other point of reference.

I use images of my jade dragon, as that is unique and I know the A.I. is not able to pool data from the web. The bot's only recourse is to rephrase my initial words as I press harder with the question...

As a research tool A.I. is clearly invaluable, as it can tap into so many resources near-instantaneously. No doubt about it, this is the next revolution...

Q-6
 
  • Like
Reactions: drrich2

bousozoku

Moderator emeritus
Jun 25, 2002
15,953
2,182
Lard
Is it Yoshua Bengio?

Do you remember the Year 2000 Bug? The experts scared everyone. It was the end of the modern world. Finally, nothing happened.
Those few things that weren't fixed weren't huge. If databases hadn't been corrected, it could have been much worse. I spent a lot of time writing automated programs to correct source code, to minimize the hands-on work, but there was still plenty to do.
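
For anyone who never saw one of those fixes: the common cheap repair was "windowing", reinterpreting two-digit years around a pivot instead of widening every date field. A minimal sketch in Python (the pivot of 70 is an assumption; real programs chose it per dataset):

Code:
def window_year(yy: int, pivot: int = 70) -> int:
    """Interpret a two-digit year: values at or above the pivot are 19xx, below it 20xx."""
    return 1900 + yy if yy >= pivot else 2000 + yy

assert window_year(99) == 1999  # pre-2000 dates keep working
assert window_year(0) == 2000   # the input that broke naive code
assert window_year(38) == 2038  # the window eventually expires, too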
 

jedimasterkyle

macrumors 6502
Sep 27, 2014
430
631
Idaho

AI is Pandora's box, and we're just salivating at unlocking it without thinking things through...
 

Queen6

macrumors G4
While Gemini is the more advanced model compared to Bard, it's also far easier to confuse, and it lacks Bard's creativity. Maybe a paywall thing (colour me surprised), but as it stands it's likely a deliberate action on Google's behalf to limit free A.I. interaction.

I had a very specific question that Bard resolved satisfactorily. Gemini just went into a repetitious loop of words, unable to clarify or expand, making it a pointless waste of time to interact with...

With Gemini, it's best just to use it as a research tool asking binary questions, as there it excels. For obscure or unique questions, it's best to use your own mind...

Q-6
 