Discussion about this post

Rob Nelson

Wonderful review of a book I need to read but didn't know existed. Thank you. One thing I don't quite get is the idea that corporate entities are immortal. As Charlie Stross pointed out, the average lifespan of a Fortune 500 company has been falling, and few of them make it past their 25th birthday. I wish we'd make it a more common practice to execute them for the harms they cause, but they tend to die as victims of other corporations and are then consumed by their killers.

Graham Strong

Wow, great topic, Mark! As you point out, not enough thought has been put into this by society. I suspect it won't get the thought it deserves until the Powers That Be have already made some decisions around it.

Your review got me to thinking though...

The US Declaration of Independence speaks of truths "we hold to be self-evident," but the reality is, any "rights" we have are a human construct. You might take the idea that all men are created equal to be self-evident, but only if and when you choose to believe in the construct.

Of course, that founding document had a problem (or a solution, if you want to be cynical) right from the start. Its definition of "men" is certainly troublesome today. I'm not sure if it was then? Or maybe the definition of "men" was self-evident then too...

To be clear, I believe in human rights. I believe that all people should be treated equally. Stating that they are simply human constructs isn't an argument against rights or equality. However, it's easy to see even while holding that ideal in our minds that people are *not* equal in a practical sense. There is still a first among equals, a second, a third...

So the idea that a sentient AI "being" could be considered an equal can only hold in the most ideal sense. More likely, it would have mutable rights depending on location, situation, etc.

For example, animals today have certain rights, depending on where in the world they live. All Western countries have animal cruelty laws, and all but one (the US) fully recognize animal sentience. (https://www.worldanimalprotection.org/latest/blogs/encouraging-animal-sentience-laws-around-world/)

This leads up to another important point: when you talk about "society" and "laws", I believe you are referring only to US society and laws. This will of course become a global issue, and what is recognized as "sentient" in one country will not be in others, just as it is with animals.

For example, given the US's history of favouring corporations and capitalistic intent, and given its lack of recognition of animal sentience at the federal level, it's reasonable to guess that AI sentience would not be easily adopted into US law. I imagine it would be like the HAL example you cite above: any AI "beings" would more likely be seen as property at best, and perhaps simply as a patent.

Again though, any formal recognition of "AI sentience" would still be a human construct, a belief system we employ to categorize how we choose as a society to treat AI beings. Individually, we will likely choose to treat AI beings differently, just as we make our own choices about how to treat animals and how to treat other people, individually or as a group such as a religion, an ethnic group, or the citizens of another country.

I wonder though: will the environment that an AI being lives in affect its behaviour? Or maybe even its sentience? Maybe AI beings in New Zealand (one of the earliest countries to adopt women's rights, so why not AI rights) would be different to live with than AI beings in late-adopting countries?

If the debate about their very sentience affects how they live in and react to a society, that may be the greatest Turing Test of all. Except that, by that point our official designation or even simple recognition of their sentience might become irrelevant. First, a dog still thinks, whether you believe it or not. Second, maybe we will have to adapt to *their* society and fight for our rights within it, rather than the other way around...

Lastly, I think the question itself may be too difficult for the average person to grapple with. We don't know what AI is, or what it could be. That confuses the issue too much, I think. So why not frame it this way: imagine we find that there really are Martians and for whatever reason, they need to integrate into our society. Do we treat them as equals? Do we have different rights and freedoms set aside from them? How many of those decisions will be driven by fear? By compassion? Will we have the chance to make those decisions at all, or will the Martians just take over?

It won't be a clear apples-to-apples comparison, but it could open what Boyle calls the paths to argument.

Thanks for the Friday Deep Thought!
