So the Future of Life Institute (FLI) thinks AI-based weapons are bad and should not be developed. According to the document linked above, these weapons could include things like "armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria."
The concern, apparently, is a "global AI arms race."
Interestingly, though, cruise missiles are explicitly excluded. In the opinion of the FLI folks, cruise missiles don't involve "AI" (or at least not the right kind of "AI"), even though they make their own targeting decisions.
So Elon Musk, a signer of the FLI document linked above, thinks that one day "non-self-driving" cars will be outlawed...? Musk says: "I don't think we have to worry about autonomous cars, because that's sort of like a narrow form of AI ... It's not something that I think is very difficult, actually, to do autonomous driving, to a degree that's much safer than a person, is much easier than people think."
So I guess there are degrees of AI. Narrow as opposed to what? "Wide"?
What about the "ethics" of self-driving cars?
Suppose a small, "unseen" child darts out from behind a parked car, leaving no time to stop. In the oncoming lane are a pregnant mom and an infant. What does the self-driving car's AI decide to do?
If it's "narrow," perhaps it simply runs over the child...
Or maybe it instead decides to kill the mom in the oncoming lane.
If it's not so narrow, maybe some quick facial recognition could be used to decide whether any of the potential victims are "haters," religious folk, or other "undesirables" of whom, someone has decided, there are too many in the country.
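To make that concrete, here's a toy sketch of what a hard-coded "ethics" module might amount to. Everything in it is made up for illustration — the Outcome class, the harm costs, the choose_outcome function — and no real autonomous-driving stack is claimed to work this way. The point is that any such "ethical" choice reduces to weights somebody typed in:

```python
# Entirely hypothetical sketch: a cost-minimizing "collision chooser".
# The "ethics" live entirely in the harm_cost numbers, which some
# engineer (or committee) assigned ahead of time.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    harm_cost: float  # assigned by whom, and on what basis?

def choose_outcome(outcomes: list[Outcome]) -> Outcome:
    """Pick the outcome with the lowest assigned 'harm cost'."""
    return min(outcomes, key=lambda o: o.harm_cost)

options = [
    Outcome("brake hard, hit the darting child", harm_cost=1.0),
    Outcome("swerve into the oncoming mom and infant", harm_cost=2.0),
    Outcome("swerve into the parked cars", harm_cost=0.5),
]

print(choose_outcome(options).description)
# Change one constant and the car "decides" differently. Who audits them?
```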
How would you know if the AI wasn't making these kinds of decisions?
What if there were some sort of "disparate impact" on certain segments of the population from the decisions made by self-driving cars?
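If decision logs even existed (a big if), checking for that might look something like the toy audit below. The log format is fabricated, and the four-fifths threshold is an assumption borrowed from employment-discrimination law, not anything a carmaker actually publishes:

```python
# Hypothetical audit of made-up decision logs: does the car "spare" one
# group of potential victims more often than another? Flags groups using
# the "four-fifths rule" threshold borrowed from employment law.

from collections import Counter

# fabricated records of (perceived_group, was_spared)
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]

spared = Counter(group for group, ok in log if ok)
total = Counter(group for group, _ in log)
rates = {group: spared[group] / total[group] for group in total}

best = max(rates.values())
for group, rate in sorted(rates.items()):
    flag = "possible disparate impact" if rate < 0.8 * best else "ok"
    print(f"group {group}: spared {rate:.0%} -> {flag}")
```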
I don't really think a "global AI arms race" would be the problem...