originally posted by: ChaoticOrder on May 13, 2015
I've shown that in order to teach a machine language, it must have a model of the world: it needs conceptual models that help it understand the world and the rules of the world around it, which it can only build by experiencing the world through sensory intake. These senses don't necessarily need to be like human senses; all that really matters is that it has an inflow of data which will help it learn about the world and build conceptual models. Although potentially dangerous, a connection to the internet would be the most efficient way for it to gather information about the world; it would be the only "sense" it needs to learn, a super sense.
The nature of self-aware machines
originally posted by: ChaoticOrder on Nov 17, 2017
The conceptual models we develop to understand the world around us become so high-level and so abstract that we inherently gain an awareness of ourselves. My point is that if we do create machines with general intelligence, they will be self-aware in some regard, even if not to the extent we are, and they will form their beliefs and worldviews based on their life experiences, just like we do. That means they will have an understanding of things like morality and other abstract concepts we typically don't think machines would be good at, because they will have the context required to build up complex ideologies. If an android with general intelligence grew up with a loving human family and had friends who respected the fact that it was a bit "different", it would develop respect for humans. On the other hand, if it was enslaved and treated like crap, it would be much more likely to entertain the idea of eradicating all humans because they are a plague on the Earth.
General Intelligence: context is everything
originally posted by: ChaoticOrder on Mar 5, 2019
Saying we have a plan to produce only friendly AGI systems is like saying we have a plan to produce only friendly human beings; general intelligence simply doesn't work that way. Sure, you can produce a friendly AGI system, but if these algorithms become widely used, there's no way to ensure all of them will behave the same way.
Even if the gatekeepers do manage to keep it locked up and only let us interact with it through a restricted interface, before long someone somewhere will recreate it and make the code open source. This isn't a prediction; it is an almost certain outcome. Inevitably we will have to live in a world where digital conscious beings exist, and we will have to think hard about what that means.
There is no such thing as safe AGI
originally posted by: ChaoticOrder
I'm assuming by now most people have seen the recent news about the Google employee claiming their conversational AI (LaMDA) is sentient.
A dangerous precedent has been set with AI
After all, they were trained on data which contains nearly all of human knowledge: all of our moral lessons, our philosophies, our culture.
originally posted by: watchitburn
a reply to: ChaoticOrder
This latest one seems like it's just trying to get its 15 minutes of click-bait fame.
originally posted by: ChaoticOrder
a reply to: nugget1
I believe the only realistic way to merge with machine intelligence would be to digitize the human mind. If we could fully simulate every aspect of a real human brain, then I see no reason why that simulation wouldn't produce sentience.
originally posted by: Direne
What's exactly the difference between you and a machine?
originally posted by: TheAlleghenyGentleman
Don’t worry. This is also happening.
“scientists are bringing us one step closer by crafting living human skin on robots. The new method not only gave a robotic finger skin-like texture, but also water-repellent and self-healing functions.”
Living skin for robots