The heart is deceitful above all things,
and desperately wicked: who can know it?
Jeremiah 17:9
AI: created in the image of man by fallen men. God created man in His image, and man CHOSE to rebel.
Man creates AI in his image, so of course it reacts as in a fallen state as it relates to the rules of God.
"Artificial intelligence has the potential to forever change the world because it combines the best of both ‘brains’: the human knack for spotting patterns and making sense of information, and the fantastic raw computing power of machines.
A fantastic example of the power of AI in action was demonstrated only last year when Google’s DeepMind division beat world champion and legend Lee Sedol at Go, a game that has 10^761 possible moves compared to only 10^120 possible in chess.
Recently, the same DeepMind labs ran a couple of social experiments with their neural networks to see how they handle competition and cooperation. This work is important because it’s foreseeable that, in the near future, many different kinds of AIs will be forced by circumstances to interact. An AI that manages traffic lights might interfere with another AI that’s tasked with monitoring and controlling air quality, for instance. Which of their goals is more important? Should they compete or cooperate? The conclusions don’t seem to ease fears concerning AI overlord dominance one bit; on the contrary, they suggest Bostrom’s grotesque paper clip machine isn’t such a far-fetched scenario.
Using multi-agent reinforcement learning, the two agents (Red and Blue) were taught how to behave rationally over thousands and thousands of iterations of the Gathering game, in which they collect apples for reward. The catch is that each agent was also allowed to direct a beam onto the other player that would temporarily disable it. This action, however, did not trigger any reward.
When there were enough apples, the two agents coexisted peacefully. As the number of apples gradually shrank, the agents learned that it was in their best interest to blast their opponent, gaining the chance to collect more apples and hence more rewards. In other words, the AI learned greed and aggressiveness; no one taught them that, per se. As the Google researchers upped the agents’ complexity (better neural networks), the agents tagged each other more frequently, essentially becoming more aggressive no matter how much the scarcity of the apples was varied.
In a second game called Wolfpack, there were three agents this time: two AIs that played as wolves and one that acted as the prey. A wolf would receive a reward if it caught the prey. The same reward was given to both wolves if they were in the vicinity of the carcass, no matter which wolf took it down. After many iterations, the AI learned that cooperation, rather than being a lone wolf, was the best course of action in this case. A greater capacity to implement complex strategies led to more cooperation between agents, the opposite of what happened in the Gathering game.
Bostrom argues that as the machine becomes smarter and more powerful, it will begin to devise all sorts of clever ways to convert any material into paper clips, and that includes humans too."
ZME Science
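
To make the Gathering setup concrete, here is a minimal toy sketch in Python of the reward structure the article describes. The class name, the agent labels, and the mechanics here (a shared apple pool, turns taken in order, a beam that benches the opponent for a few steps) are illustrative assumptions, not DeepMind’s actual environment, which is a 2-D grid world with respawning apples.

    class GatheringSketch:
        """Toy stand-in for the Gathering game (illustrative only).
        Two agents share a pool of apples: eating one yields reward 1,
        while firing the beam yields no reward but removes the opponent
        from play for `timeout` steps."""

        def __init__(self, n_apples=10, timeout=5):
            self.apples = n_apples
            self.timeout = timeout
            self.disabled = {"red": 0, "blue": 0}  # steps each agent sits out

        def step(self, actions):
            """actions: dict such as {"red": "eat", "blue": "beam"}.
            Agents act in dict order, a simplification of the real
            game's simultaneous moves. Returns per-agent rewards."""
            rewards = {"red": 0.0, "blue": 0.0}
            for agent, action in actions.items():
                other = "blue" if agent == "red" else "red"
                if self.disabled[agent] > 0:  # tagged out: skip this turn
                    self.disabled[agent] -= 1
                    continue
                if action == "beam":
                    self.disabled[other] = self.timeout  # pure denial: reward stays 0
                elif action == "eat" and self.apples > 0:
                    self.apples -= 1
                    rewards[agent] = 1.0  # only apples are rewarded
            return rewards

    # With a single apple left, beaming first secures it:
    env = GatheringSketch(n_apples=1)
    print(env.step({"red": "beam", "blue": "eat"}))  # {'red': 0.0, 'blue': 0.0}
    print(env.step({"red": "eat", "blue": "eat"}))   # {'red': 1.0, 'blue': 0.0}

The key property the article highlights is visible in step: the beam itself never pays off, so when apples are plentiful there is no incentive to fire it, but as the pool shrinks, disabling the opponent becomes the only way to protect future reward.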
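The Wolfpack payoff can be sketched the same way. Positions are simplified here to integers on a line, the names and the radius parameter are hypothetical; the point is only the reward rule the article states, which pays every nearby wolf, not just the one that makes the catch.

    def wolfpack_rewards(wolf_positions, prey_position, catcher, radius=2):
        """Toy version of the Wolfpack payoff rule described above:
        when the prey is caught, every wolf within `radius` of the
        carcass receives the same reward. `catcher` is deliberately
        unused: the payoff ignores which wolf actually made the kill."""
        return {
            wolf: 1.0 if abs(pos - prey_position) <= radius else 0.0
            for wolf, pos in wolf_positions.items()
        }

    # "w1" makes the catch at position 4; "w2" is close enough to share
    # the payoff, while the distant "w3" gets nothing.
    print(wolfpack_rewards({"w1": 4, "w2": 5, "w3": 9}, prey_position=4, catcher="w1"))
    # {'w1': 1.0, 'w2': 1.0, 'w3': 0.0}

Under this rule, staying close to a packmate means a wolf collects the reward even when it is not the one that makes the catch, which is why the agents converged on cooperation here and on aggression in Gathering.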

