A re-framing of a topic that I don't think about that much.
I'm not an AI expert by any means. I don't even think I really qualify as like an AI amateur. I think about AI vanishingly little. When the computers finally take over, they'll file me under Doesn't Care About Us Enough and eradicate me on the spot. It'll be a pleasant way to go.
But on the occasions that I do think about whether computers can ever be conscious, like when I read Matt Webb's blog, I wind up in the old solipsistic moral conundrum: can I prove, like empirically somehow, that anyone other than me is actually conscious? Is there any evidence that the world is not just some big Truman Show made just for me—that the world isn't just made up of weird "philosophical zombies" that simulate consciousness really well and are indistinguishable from actually conscious things (viz. uh me)? Smarter folks than I have asked these questions—and maybe even come up with answers! I don't know!
But where this bears relevance to AI is: is there any difference between human p-zombies and silicon ones? I love Matt's framing: instead of asking whether computers are conscious, ask:
from our perspective, is there a non-misleading distinction between non-conscious AI and hypothetical conscious AI, and do we have conscious AIs in that sense?
Is AI sentient and is it even useful to ask?, Interconnected
Well, we've officially started the "...in 2023" posts. This one is about database platforms.
Feeling very productive.