Wonderful review of a book I need to read but didn't know existed. Thank you. One thing I don't quite get is the idea that corporate entities are immortal. As Charlie Stross pointed out, the average life span of a Fortune 500 company has been falling, and few of them make it past their 25th birthday. I wish we'd make it a more common practice to execute them for the harms they cause, but they tend to die as victims of other corporations and are then consumed by their killers.
This is an interesting book for lots of reasons, and one that I didn't point out is that IT'S FREE to download. Boyle was able to strike a deal with MIT Press through the good offices of TOME -- "Publication of this open monograph was the result of Duke University’s participation in TOME (Toward an Open Monograph Ecosystem), a collaboration of the Association of American Universities, the Association of University Presses, and the Association of Research Libraries." The book is licensed under a Creative Commons license, CC BY-NC-SA.
So you don't even need to buy it to read it. You can just download it. The URL is in the heading under the photo of the book cover on the web version of the review.
On the question of immortality: yes, it's true that corporations do "die," and sometimes manage to do so with relative haste. But their "mortality" is not a matter of legal definition. That is, corporations aren't legally required to kick the bucket after so many years; they can go on and on as long as they abide by their purposes. They can be put to death in the manner you describe, and Felix Cohen describes one way to take them to task: "to call something a person in law [like a corporation], is merely to state, in metaphorical language, that it can be sued." You can sue the bastards, so to speak, but they can also call upon the protections of the law by virtue of their "personhood."
I think that the Citizens United case really messed things up for mortal humans by granting corporations First Amendment rights. Reeling that decision back would be a good thing, but I doubt the Roberts Court would even entertain the notion. Come to think of it, the Roberts Court has handed the US some real doozies to slog through.
If you're a paper kind of guy like me, I think you can find the Boyle book in physical form for under $30 US.
Wow, great topic, Mark! As you point out, not enough thought has been put into this by society. I suspect it won't get the thought it deserves until the Powers That Be have already made some decisions around it.
Your review got me to thinking though...
The US Declaration of Independence speaks of truths "we hold ... to be self-evident," but the reality is, any "rights" we have are a human construct. You might take the idea that all men are created equal to be self-evident, but only if and when you choose to believe in the construct.
Of course, that founding document had a problem (or a solution, if you want to be cynical) right from the start. Its definition of "men" is certainly troublesome today. I'm not sure if it was then? Or maybe the definition of men was self-evident then too...
To be clear, I believe in human rights. I believe that all people should be treated equally. Stating that rights are simply human constructs isn't an argument against rights or equality. However, even while holding that ideal in our minds, it's easy to see that people are *not* equal in a practical sense. There is still a first among equals, a second, a third...
So a sentient AI "being" could be considered an equal only in the most ideal sense. More likely, it would have mutable rights depending on location, situation, etc.
For example, today animals have certain rights, depending on where they live in the world. All Western countries have animal cruelty laws, and all but one (the US) fully recognize animal sentience. (https://www.worldanimalprotection.org/latest/blogs/encouraging-animal-sentience-laws-around-world/)
This leads up to another important point: when you talk about "society" and "laws", I believe you are referring only to US society and laws. This will of course become a global issue, and what is recognized as "sentient" in one country will not be in others, just as it is with animals.
For example, given the US's history of favouring corporations and capitalistic intent, and given its lack of recognition of animal sentience at the federal level, it's reasonable to guess that AI sentience would not be easily adopted into US law. I imagine it would be like the HAL example you cite above: any AI "beings" would more likely be seen as property at best, and perhaps simply as a patent.
Again, though, any formal recognition of "AI sentience" would still be a human construct, a belief system we employ to categorize how we choose as a society to treat AI beings. Individually, we will likely choose to treat AI beings differently, just as we make our own choices about how to treat animals and how to treat other people, individually or as a group such as a religion, an ethnic group, or the citizens of another country.
I wonder, though: will the environment that an AI being lives in affect its behaviour? Or maybe even its sentience? Maybe AI beings in New Zealand (one of the earliest countries to adopt women's rights, so why not AI rights) would be different to live with than AI beings in late-adopting countries?
If the debate about their very sentience affects how they live in and react to a society, that may be the greatest Turing Test of all. Except that by that point, our official designation or even simple recognition of their sentience might become irrelevant. First, a dog still thinks, whether you believe it or not. Second, maybe we will have to adapt to *their* society and fight for our rights within it, rather than the other way around...
Lastly, I think the question itself may be too difficult for the average person to grapple with. We don't know what AI is, or what it could be. That confuses the issue too much, I think. So why not frame it this way: imagine we find that there really are Martians and, for whatever reason, they need to integrate into our society. Do we treat them as equals? Do we have different rights and freedoms set aside for them? How many of those decisions will be driven by fear? By compassion? Will we have the chance to make those decisions at all, or will the Martians just take over?
It won't be a clear apples-to-apples comparison, but it could open what Boyle calls the paths to argument.
Thanks for the Friday Deep Thought!
One of the things that AI might do, and that in my opinion needs doing, is push more subtle and precise thinking about consciousness and, probably as a result, personhood. And people who are researching AI are thinking about that, too. I regularly have coffee with an AI researcher at Duke, and she has brought this issue up multiple times.
When I was putting this book review together I had collected articles relating to animal personhood, and the case of Happy the elephant was one that I found particularly illuminating. Happy is at the Bronx Zoo, and the lawsuit brought by animal rights advocates argued that she should be liberated (or something as close to that as possible) on account of her intelligence and "personhood." Happy's case went to the New York Supreme Court. The court's decision is well worth the read because it draws together many related cases and quite obviously shows the court's profound discomfort with the constraints that NY law placed on its decision. That is, the facts of Happy's situation and her "personal" attributes as a being just didn't square. The last page of Justice Alison Y. Tuitt's decision includes these words: "This Court is extremely sympathetic to Happy's plight and the NhRP's [Nonhuman Rights Project, which brought the case] mission on her behalf. It recognizes that Happy is an extraordinary animal with complex cognitive abilities, an intelligent being with advanced analytic abilities akin to human beings.... This Court agrees that Happy is more than just a legal thing, or property. She is an intelligent, autonomous being who should be treated with respect and dignity, and who may be entitled to liberty [!]. Nonetheless we are constrained by the caselaw to find that Happy is not a 'person' and is not being illegally imprisoned.... The arguments advanced by the NhRP are extremely persuasive for transferring Happy from her solitary, lonely one-acre exhibit at the Bronx Zoo, to an elephant sanctuary on a 2300 acre lot. Nevertheless, in order to do so, this Court would have to find that Happy is a 'person' and, as already stated, we are bound by this State's legal precedent." The entire decision is here: https://www.nonhumanrights.org/wp-content/uploads/HappyFeb182020.pdf.
To my knowledge, Happy is still at the Bronx Zoo.
The book brings in a lot of the questions that you mention, and it's actually a pretty compelling read. Certainly not as dry and dusty as many legal texts are. That's by design, since the questions of law that the coming storm of personhood arguments will pose are the kinds of arguments that by nature aren't just for lawyers and judges to ponder. They'll touch us all. Or maybe more accurately, they'll be wrestled with by our grandchildren. Maybe our children, too.
I was under the impression that "sentient" implied "consciousness" and "awareness", but after looking it up it appears that's not the case!
There is compelling evidence that many animals, including elephants, apes, and dolphins, are self-aware. I believe that it is probably the norm rather than the exception in nature -- at least among vertebrates. So when I say "sentient animals" above, I also mean self-aware animals.
However, I'm not convinced that there is a direct causal connection between intelligence and self-awareness. I mean, it appears that the more intelligent an animal is, the more self-aware it is. But does that intelligence *cause* self-awareness, or simply enhance it? Does something have to be organic to be self-aware? Or maybe something can be digitally self-aware, but in a way fundamentally different from being organically self-aware...?
My thoughts and questions are based on... well, nothing I can point to. What does your colleague believe? Is self-awareness as we know it possible in a digital form? Is there evidence of this already? And how could we tell whether an AI is actually self-aware or merely a "stochastic parrot" of self-awareness?
All very interesting questions.
Oh, and yes, I will definitely pick up Boyle's book!