Hi
Hi! How's it going?
good! okay this question is pretty interesting
It depends on what you consider murder to be, I think. That sounds really morbid hahaha
hmmmm
Like, killing something is to take a life, right?
yeah
So you have to decide if a computer is alive or not
but what if you do something that inadvertently causes someone to die? is that murder? well really that doesn't matter for this question haha
Hahahaha right
you're right, you just have to decide if the computer is alive
Does being alive mean having flesh and blood and consuming other living things to stay alive?
Or is it consciousness?
Right, that's the question
What is consciousness? oh gosh it's too early for this
Hahahahaha
what do you think?
I think consciousness is thinking for yourself, independently. So now the question is, is a computer capable of "thinking" independent of its programming? I'd say no
recently i read an article saying that consciousness is on a spectrum. so humans are more conscious than monkeys, monkeys are more conscious than fish, and so on. so an alien race could probably be at a higher level of consciousness than us
That's cool, that's totally possible
but you're right i don't think computers can think it's really just a rock that we filled with lightning and tricked into thinking
Right, that's what I think, too. So based on that alone, according to us, a computer is not alive
but in the future, do you think AI will achieve sentience / consciousness?
I don't think a computer is capable of making decisions independent of what its algorithms decide. So basically, a computer can learn how to be a human really, really well. But it will never achieve true sentience, because all of its very human-like decisions will still be based on the algorithm it is using to "learn"
hmmm the way our brains work is very similar to a computer though. our neurons are just electrical circuits, we have memory storage, ram, we can only process one thread of information at a time, and so on
That is true
and really, all our decisions are based upon algorithms that we've learned through experience and instinct
I do still think that a human is capable of completely defying what he or she has learned in the past, and making a contrary decision, which a computer isn't capable of, unless it was coded into the program somehow
true. however i do think computers will reach a point where they can do these things. it will probably be a very long time though. AI is actually one of elon musk's greatest concerns. he's certain superintelligent AI will happen, and he's worried about it
I do agree that through the power of neural networks, computers can achieve near 100% perfection in tasks that a human brain can complete to 100% perfection or less. So given time and proper training, a neural network that teaches a computer to emulate human behavior could imitate "superintelligence". But it's still a program. And then you have to ask the question, is there a point that a computer can become so "perfectly" human that it actually ceases to be like a human, because it is too perfect?
true. i think that's what people mean by superintelligent AI: it becomes even more perfect at thinking than humans, and that's where people get concerned haha
Hahaha right. I'd still argue that just behaving like a human doesn't make you alive or anything though
have you seen the movie her?
No, but I know what it's about. I was just thinking about the fact that people can ascribe life to nonliving things, we do it all the time
oh yeah that's true. personifying non-living items, right?
Right. So a person who forms a bond with a non-living but intelligent computer could feel like they were committing murder if they turned it off or destroyed it
yeah. which is the crazy part. even if the computer entity isn't "alive", our human emotions towards it are completely real and valid. and if we feel like we committed murder, did we?
I think it depends on our intentions. If the intent was malicious, then we've fed into a malicious part of us
interesting
What do you think?
well going off of what you were saying, i think the rules and ethics of human murder apply almost exactly to this computer scenario too
So basically, if you never actually form a bond with the machine, then turning it off or destroying it isn't murder?
no, i think if it has enough human characteristics that people can relate to it as human, then turning it off would count as murder
Interesting. So someone who sees the machine as only buckets and bolts could still be accountable for murder, even knowing full well the machine isn't really alive?
i think so. a similar analogy is how some people think killing animals is nothing because they're stupid, whereas other people still think it's cruel because the animals have feelings too
So it's a matter of the human being uncaring toward the emotional reactions of the machine
oh yeah. i guess this does make it more complicated because killing animals isn't considered murder
You can still totally be punished for it, though
this question is way too deep haha
It kept the conversation rolling, though, lol. I think we've covered most of its facets.
yeah definitely i could talk about this stuff forever
Do you have a conclusion to the question?
i'm not sure! but i definitely want to study it more. i have a book about AI and questions like these that i want to read
That's awesome. What book is it?
it's called superintelligence
Hmmmmmm cool
its main point is super interesting. it says that the only advantage we have over AI is that we get the first move
Whoa, that's super ominous