This is fantastic. If I were still teaching courses on philosophy and technology, I'd assign it.
In my circles, the default take on AI is, indeed, that you're a mug if you see any there there. This strikes me as uncharitable and smug. It's backed up by good politics (AI = capitalism, extractivism, etc.), but it doesn't reflect a serious attempt to understand our present technological situation. You can tell people AI is bad, inert, whatever, all day long, but if you lack sincere curiosity about how other people actually experience it, I don't think you'll get very far.
Maybe of interest: https://thebrooklyninstitute.com/items/courses/new-york/the-algorithmic-sublime-technology-infinity-and-transcendence/
I really have struggled to get interested in many of the philosophical debates around AI consciousness or agency (not to mention having zero tech background or skills). But what I do find incredibly interesting is what these developments with AI tell us about ourselves, and the social implications more broadly. Especially, as you say, how we experience this stuff: the phenomenology of interacting with AI, what it means to us, living at a time of epochal technological change.
Your complaint made me think of this piece that has been making the rounds lately:
https://amandaguinzburg.substack.com/p/diabolus-ex-machina
It's a fascinating read and indeed creepy, and I think cautionary stories like this are super important. But I'm struck by the one-sided response from readers, which focuses on the dystopian Black Mirror aspect (certainly valid!) while ignoring the context of whether the author was actually using the AI effectively, and whether such deception is a concerning bug, a chilling feature, or a predictable artifact of the way it was being prompted (too much at once, asking for links that aren't available, not enough refinement of open-ended questions). If you treat AI like a person and expect it to act like a person, your worst nightmares will be vindicated. So I'm torn: on the one hand, it's important to consider these flickers of the "ghost in the machine"; on the other, it's important not to read in lots of agency and personhood that isn't there (especially when it comes to ethics).
Yes! I’m part of that subculture and educated in mathematics and mythology but not philosophy. It’s interesting to see a more sophisticated explanation of something I’ve experienced and have been trying to understand in terms of imagination and interaction.
This is a great piece (though it will take a while to digest!). I especially like your focus on these unexpected one-off moments of uncanniness, which seems much more apropos to any questions of agency and intelligence than broad generalities about what is "really" the case.