I really appreciated this, Ben. I feel as though I don't have anywhere near enough of a decent understanding of AI's true capabilities to have a solid opinion, though I find myself veering towards a hesitant and uneasy feeling about it--largely because, I think, it seems so distant from the things that I trust most (eg. death, birds, the natural world) but I appreciate that AI grew out of the natural world in the same way that I did, fundamentally. I think that whatever happens, having positive visions of the potential outcome to hold is vital. So I'm grateful for you creating that, here.
Thanks Chloe. I’m with you in my apprehension. If I hadn’t worked in technology for so long, I would be radically opposed to AI. If I fear anything about the future it’s the utter cruelty of humans. At least a machine can be programmed to adhere to rules. We break the basic rules of human decency over and over again as if there’s no consequence.
Yes, absolutely, our species is far more terrifying. So, do you think, at the end of the day, it comes down to the morality of those who are essentially in charge of the various incarnations of AI?
What a good question. Since (most?) tools are inherently value-free, I would say it does come down to operator/inventor morality. Another limitation is that our current system only rewards inventions that can be “monetized.” (Which is awful.) I find myself fantasizing it could be possible to play by those rules and still get Ben’s AI empathy friend to go viral.
It's THE question, I think, and there's no clear answer. While I believe most humans are inherently good and don't intentionally act maliciously, I also believe we tend to be selfish and on the lazy side, which makes it easy enough to blur the morality line when things get beyond our local scope or seem like a little too much work. If Artificial General Intelligence plays out as it theoretically should, it will act within the parameters it's programmed with and it will serve the individual it works for. You can already see there's a huge margin for error there ;-) but at least if an AI was designed to be "loving" and supportive to its owner above all else and not to do harm to others, it would stick to that script. That's more than can be said for many human parents or partners.
Well said. I keep hearing, “I’m sorry, Dave, I’m afraid I can’t do that,” not as a fear of what machines might do, but as a projection of the unacknowledged darkness lodged in the human soul, including our own. Maybe that’s how it was intended and I’m only figuring it out now? 🤷🏼♀️
Your essay reminds me of my app that sent weekly empathy messages through Slack to development team members. Just a little special something to get them through the week. Thought it would change the world. Maybe in fiction 🤪
Indeed. Fiction is so much easier than changing anything in real life. Thanks for reading, Kim.
I'm a big fan of your mind, Ben. And the heart that keeps it turning.
Aw, thanks Meg. 🙏
You know what, I really love this, Ben - I think like any tool or technology, we can invest it with the best or the worst of us, and we usually do both. Why not imagine a world where AI follows a protocol based in nurturing rather than destruction? If we can dream it, we can do it. I'm reminded of Data and Lore from Star Trek: The Next Generation (we are just finishing up watching all seven seasons) - the two sides of that coin.
Thanks, Troy. I mustered all the optimism my little brain could squeeze out to write this one.
Thanks, Ben, for your illustration.
When I was about five years old, I had a Teddy bear. I had it with me all the time, especially at night when it helped protect me from the monsters under the bed.
I also had two loving parents who really did protect me. Not from mythical monsters, but from real life issues. They guided me and my three siblings. They did a helluva job, if I do say so.
Today, my Teddy bear is a fond memory, but how my parents raised me is the basis for who I am.
While AI has great advantages as a tool, too many people think of it as their Teddy bear, there to protect them from mythical monsters. Whatever 2030 or 2050 is like, it won't be so very much better as a result of software, regardless of how advanced it becomes.
It seems a little specious to speculate about the future. Let's speculate occasionally about the past. How was it then? How did we get from then to now? Are our lives better now than then? In what ways is life worse?
Coincidentally, I've recently written an essay that discusses some of that. I'll release it in about a week.
There is no sure way to predict the future. Did the Wright brothers envision today's air travel? They couldn't have. Did Alexander Graham Bell envision cell phones and all their apps? Not a chance. We can accurately predict very little. Don't stress too much about it. The Wrights and Bell had no clue the extent of the advantages they would bring to us, yet here we are.
Technology advances exponentially, yet we are still the same people. A hundred years from now, we will STILL be the same people. If we can all tell the difference between a Teddy bear and real love and compassion, we will probably be all right. But if we mistake ideologies, software, and fetishes for reality, if we let them set our direction, we are headed off a cliff.
I couldn't agree more, Chip. I look forward to reading your essay. Humanity is unlikely to fundamentally change as a result of any technological innovation, which is why this was merely a thought experiment. Unlike the invention of electricity or flight, we are now meddling in a much deeper, more disorienting philosophical realm where we are ultimately striving to create a machine in our own image. Mobile phones, and more importantly the apps that run on them, are having an impact on our brains, specifically on our attention and mood. We can be manipulated, which is scary. I'm not sure where it's all going, but it's not slowing down. Thanks for reading and sharing your thoughts.
I don't want to over-indulge myself on your page, but just one more thing. After posting my previous comment I recalled that I had written a 'futurist' piece regarding a press release from the presidential administration of 2035. This is not the essay that I mentioned, but it might be food for thought:
Ted Keller
Chief of Staff
White House
Sept 24, 2035
Gerry,
I appreciate your getting back to me so quickly with a draft statement concerning the issues with the border wall. Certainly, considering recent escalations of tensions, we must take quick and decisive action to quell efforts to undermine the law and order that is essential to a fully functional democracy.
On the whole, I think your draft is in keeping with the president’s directives regarding securing the border, and in a manner that does not exacerbate tensions. The agitation by subversive interests must be stopped, but in a manner that does not create sympathy for their cause. In other words, they cannot be seen as victims. They must be seen as vicious perpetrators who must be stopped before they destroy our democracy.
With that in mind, the president feels that the word ‘insurrectionist’ should not be used, as there are those who have come to believe that some of the claims of insurrectionists are valid. Also, while lethal action is approved by the president, it is best not to place it, in writing, in a directive. At the very least, word it a little more softly. While we must wonder how citizens can have sympathy for such people, the fact is, they do. So, perhaps some term other than ‘insurrectionist’ is preferable. Perhaps we could use ‘rebel’ or ‘traitor’. The main point is that, as we seal the border, there can be no sympathy, no sense of martyrdom for people who might be killed attempting to escape. Should citizens develop sympathy, they might question the desirability of sealing the border. We must at all times make sure that citizens retain a negative, even hateful, resentment against the escapees. In that way, we can continue the prosecution unimpeded by voter sentiment.
With these things in mind, I have made revisions to your draft that I think bring it closer to the president’s vision. Please consider these revisions, and resubmit ASAP. Thanks as always for your continued efforts to secure our great nation as a bastion of democracy.
Homeland Security
Confidential Draft: Staff Only
Sept 23, 2035
In light of the recent incident at the southern border, proximate to Laredo, Texas, in which violent efforts at breaching the border wall resulted in twenty-seven deaths and numerous injuries, the Department of Homeland Security, in consultation with the President, has instituted the following measures:
1. No citizen will be allowed within one mile of the wall. While every effort is to be made to peacefully maintain that clearance, where citizens challenge the authority of officers, officers are authorized to shoot to kill. Other methods of restraint should of course be attempted first, but this memorandum authorizes the use of deadly force in any event.
2. Claims to a right of chain migration, or any other claim of a right to exit, are hereby cancelled and/or rescinded. While such claims have been considered in the past, this memorandum supersedes any previous authorization.
3. As efforts by insurrectionists to escape over or through the wall into Mexico have become more organized and violent, it is imperative that this movement be stopped by whatever means are necessary. We continue to be in an emergency as declared by the President; normal constitutional measures are suspended indefinitely. Claims of excessive force from citizens will not be processed by the Dept of Justice, or by any other federal agency. Please forward any such claims directly to the Dept of Homeland Security for proper dispensation.
4. The Department of Homeland Security remains steadfastly committed to the cause of democracy and freedom, and will take any and all steps necessary to quell violent insurrection.
I appreciated this. I bring up briefly how AI can elevate our empathy here
It will also just absorb us so we best get our acts together.
https://www.kevinmd.com/2024/05/broken-but-beautiful-healing-ourselves-and-the-world-podcast.html
Thanks for reading, Nessa. I’ll check this out.
It was my How WE Can Save the World essay contest Ben wrote for, and had I been the decider of who got the prizes I would have given one to him. Here's the link to it, along with the other 20 finalists: https://suespeaks.org/essay-contest. My prize wouldn't have been for using AI, which was a clever ploy since it's the technology du jour, but because his AI application helped people love themselves, and loving people create a loving world!
I love the idea that this empathy chip could have the power to change the world for the betterment of all humankind. Like you said, a girl can dream!
Yes, it's a truly counterintuitive approach and I'm not sure I 100% believe it would work. It was an interesting thought experiment.
Love this. Sounds like, in a way, the chip makes us kids again--each with our own imaginary friend. Reminds me of the power of childhood.
I'm down with this, Ben. It's hopeful. It's a great exploration of an ideal, and a believable one, too.
I'm tempted to say a version of such a chip was invented a good while ago -- a little pill with four capital letters that brought nothing but love for others -- that, sadly, became illegal.
The laws might be changing on that front eventually. Tons of successful trials with MDMA, mushrooms, etc.
Yeah it's good to see all that going on.
This is a wonderful exploration, Ben, and what's interesting is I've thought about this very thing. Mental health treatment is expensive and complicated, and AI could help in significant ways. It won't replace human connection, but it could make it more attainable for those who struggle. Thanks for sharing!
Thanks, Brian. It is a counterintuitive thing to approach and there's a core part of me that abhors the idea of technology further encroaching in our lives. But when I separate my thinking from the Matrix/Terminator/2001 programming we've all grown up with, I see that the tools we're making have the potential to be better in many profound ways than we are.
I’m sad you didn’t win. Yours was def my favorite.