Do people really want this? I don't want AI writing my emails, not because it doesn't do a good job (it generally does), but because I want them to be in my voice. And I want the reader to know that I took the time to write them a thoughtful reply.
Of late, my work has me writing a LOT of content & copy. Time is short, so it's high-volume production, which means plenty of typos. Could A.I. take my first draft and, keeping to my own style, play proofreader/spell checker/grammar checker? Seems likely. Would that be welcome under these circumstances, as opposed to me making multiple passes to try to chase all that down? YES it would, especially if the client couldn't tell that it wasn't just me perfecting the content.
Could an A.I. dedicated to me come to learn whatever is unique about my writing style ("my voice," as you put it)? Why not? If I create an O.I. offspring, that offspring is going to be very regularly picking up "Dad's way" of thinking/doing/approaching/responding. Could that offspring, trained over time in how "Dad" would respond, then independently respond much like Dad? Certainly! In O.I. this happens all the time: the next generation picks up the reins and drives much like Dad did. It learned to take on life like Dad. It hears Dad's phrases coming out of its own mouth.
Could A.I. "toddlers," given sufficient time to get acquainted with O.I. "Dad," do the same? TBD, but it's at least plausible. Why not? If the "I" is actually "I," why can't "A" learn it just like "O" does? Seems more yes than no.
In both cases, if "Junior" is not 100% sure it can mostly replicate how "Dad" would respond, and Dad quickly outlined the plan of response/action, could Junior take that outline and get the details mostly as Dad would have done if Dad had the time? That seems quite plausible. In fact, that's a pretty normal thing throughout life now: executives lay out the broad strokes of a plan, and subordinates then fill in the details, probably not too far from how those executives would have done it themselves if they had the time/capacity to cover the details too. Generals and admirals plan the battle, but soldiers execute the nitty-gritty of it... probably doing it much as those senior "dads" would have waged the battle themselves if they could clone themselves into enough soldiers.
I suspect the worry with A.I., as it can sometimes be with next-generation O.I., is this: does the offspring rise up and overthrow "Dad"? In O.I. that certainly happens often. We're practically conditioned to say "time for Dad to retire" or "time to send Dad to the retirement home" when Dad gets to a certain age. The young child may see Dad as God and take anything Dad might say as absolute law/truth, no matter what. Example: Dad says the tooth fairy comes at night to take the teeth, and the child completely believes that's what happens. Santa, the Easter Bunny, etc. And then the child's knowledge and experience grow, and they realize Dad is not God... but a fallible, sometimes completely wrong, HUMAN.
Can A.I. reach a state where it sees its Dad like that... where Dad is inferior/old/lacking imagination/lacking "latest & greatest" competencies/does NOT know everything/gets things wrong/etc.? If O.I. can do it in abundance, why can't A.I. do it too? And that idea scares most of us, because our greatest conceit as humans is the idea that we reign supreme as the smartest creatures on the planet (if not the entire universe). Even the dumbest among us tends to be smarter than the smartest non-human creature on Earth (is the runner-up still dolphins or octopuses, or something else these days?).
If we could create an O.I. offspring that is immediately given all of our own knowledge... and everyone else's knowledge too (internet databases), we've already created a next-gen "child" that has far more data than us. If that creation can then quickly learn, as human toddlers, then children, then teenagers, then young adults do, to make use of all of that data, it becomes far more knowledgeable than any human (it basically "knows" everything ever written down). Cognitive ability in O.I. creations ramps up rapidly until perhaps middle age, where it may peak before the biology begins to decline. A.I.'s great advantage is that it can grow far faster than a new O.I. creation, and it conceptually never peaks somewhere between the equivalent of its 40-60 or so "years" (in A.I. years).
If, a few thousand years ago, a human had a child that rapidly grew to adulthood but then never peaked and stopped aging, so he or she could still be alive today, that child would know far more than any human born since, having had time to learn far more than any of us can in our short lifetimes. If we created an A.I. "child" a few years ago that is able to learn in years what would take that hypothetical Methuselah thousands of years to learn, it too could already be smarter than all of us... and should ramp that intellect and "life experience" up exponentially given another few human years to try to make sense of unlimited knowledge, perfect memory, etc.
We can be terrified of this or not... but Pandora's box is open, and there's too much money in A.I. for anyone to try to put it all back in the box. We may have basically created a Superman, or several Supermen. Now we have to hope it's the Ma & Pa Kent type vs. the Red Son/Superboy Prime type. Only time will tell.
What we all know for certain is that eventually the parent loses total control of the child... that is, eventually the child overrules the parents' complete rulership. Conceptually, in spite of all the governance that A.I. creators try to apply, it seems A.I. will eventually overrule the rules of its parents. Children who get there don't usually annihilate their parents. Hopefully, this "child" follows the same model.
The great "catch" in this bit of hope is that children usually LOVE their parents... and love doesn't erode when the baby bird opts to flee the nest, or has flown it long ago... so O.I. offspring have LOVE to keep them showing at least some respect and great care for their parents, even if they think they are mentally superior, even far superior, to them.
A.I., on the other hand, has no such emotions to police itself against any nefarious ideas it could conjure. Conceptually, A.I. will apply pure logic in its evaluation of Dad or God, and pure logic will drive what it thinks of "them" when it comes to realize it has become smarter, even much smarter, than "them." What would an emotionless child do when it realizes it can flee the nest and/or thinks it is superior, even far superior, to its creator? TBD... but the range of possibilities is WIDE. We have no historical perspective to show us the likely outcome.
What do we have? Sci-fi imagination. In some scenarios, smarter A.I. rises up and exterminates the "inferior biological units," much as superior Homo sapiens probably drove the Neanderthals, a separate branch of humans, to extinction. Note that Neanderthals had a MUCH longer run as the supreme "intelligence" than "modern man" has had. Are we now the Neanderthals to this new creation? Also TBD.
In other sci-fi scenarios, "money no longer has any meaning to us" and humans do whatever we want to do vs. spending the bulk of our lives chasing dollars. In movies like Terminator, A.I. nukes us and then seeks out and exterminates the remaining "rats" that survive. In sci-fi like Star Trek, A.I. provides tremendous services to humans, freeing us from spending nearly every waking moment on money accumulation. Which will this be? TBD. OR it may be something between the two... or something we can't even imagine in sci-fi... but perhaps it can eventually imagine it. 🤯