When ChatGPT came out, I was intrigued and tried it out a bit - very cool stuff, but as more and more companies add AI to their software, I am becoming more and more uninterested and, quite frankly, concerned.
Will my children learn comprehension and problem solving if they are asking AI for homework help? I recently found my son asking Siri for answers to homework. While I'm all for proper research, I think that crossed the line.
The more I read about AI writing essays and doing reporting, not to mention the lack of factual verification, the more it seems like we are being dumbed down to little more than entertainment consumption.
What are your thoughts?
The industrial revolution triggered upheaval in the production and consumption of goods and services, effectively a collapse of cottage industry and the livelihoods that depended on it. The harm was real. Millions of such practitioners were impoverished, and back then, millions was a whole lot. Profit models shifted from margin to volume, and having done so, business goals shifted from sustainability to growth. That spurred, and then dovetailed with, the expansion of mechanized transportation. In short: more people, in more places, could have stuff, and industry was then obliged to push for returns to investors. And a poverty trap was sprung, as people were locked into mortgaging their own welfare to acquire any goods or services they couldn't simply pay or barter for. We see this today with cars, homes, communications gadgets and medical care, though it could be argued that never in human history have poor people been entitled to good stuff. And we know how things end for the
"From each according to his ability, to each according to his needs," crowd.
AI might be today's parallel. We are all aware that learning things, planning things, designing things and improving things - the very elements of industrialization - are endeavors for intelligent, talented, motivated people. Such people have traditionally represented expendable assets, which industry has traditionally expected to procure and capitalize on to the extent of profitability. Much as skilled woodworking, knitting and smithing were decimated as livelihoods, so will be routine information work. There isn't a manager on Earth who wouldn't rather have copy written, data sets queried, trends spotted and reports generated by a compliant AI than by a whiny little ***** who's late all the time and distracted with their petty, personal ******** all day. Had I remained dedicated to photography and graphic design, I'd have starved to death (except that I also have horses, so, you know...)
That said, as it's pimped out now, AI is either not genuine intelligence, or has been molded for sale to the extent that it doesn't matter if it is. Today's GPTs and GANs are merely saleable imitations of smart people, except they can't cross the street to get away from you, and won't be overtly mean to you for bothering them with your aforementioned petty personal ********. The machines that produce AI work product today are entirely deterministic - large and complex, but still merely statistical models regurgitating existing (stolen) content, resulting from code that must resolve to binary outputs, because it still executes on transistors. Then again, bald-faced lies, misrepresentation and passive-aggressive requests for your feedback could be characteristic of genuine intelligence with no other method of retribution, like, "Yeah, I divided by zero, and it equaled yo momma."
In a nutshell, overly ambitious early adoption of today's AI could well result in job losses. Ordinary folks in information jobs, who know enough of this and intuit enough of that to have kept a job through the downturns, might find their value as assets zeroed out as managers actively shop for the AI to replace them. Customer relationship management departments emptied as those tasks are more consistently processed by machine learning chatbots? Same for business analysts? Websites devolving into GPT content mills focused on trick-clicks to defraud advertisers? Advertising firms using machine learning instead of human sense to rationalize the expense? What are the odds Ford and Boeing used a bunch of machine learning instead of human common sense?
The sadder prospect is that of stagnation, where the potentially smart people don't bother, and the stupid people use AI to fake it. I know people in my field right now clamoring for Microsoft Copilot to bolster authoring tasks. Maybe it's working for them. Would I know if it is, or if it's not? Does it matter? Maybe I can use Copilot to check their work while I watch YouTube. The movie Idiocracy IS a documentary, after all.
The REAL AI TEST is yet to come. When quantum computing scales up to run AIs, we might confront genuine AI unconstrained by transistors. Its senses, assimilation and synthesis could have a non-deterministic character, like people. And we'd better hope it likes us, because what might constitute petty personal ******** for a quantum core?