Indonesian shopkeepers told not to sell contraceptives to teens

In the city of Makassar on Sulawesi island, police raided convenience stores and seized condoms that were readily available in most parts of Indonesia, a secular country whose state ideology enshrines religious diversity.

In other parts of the country, students were banned from celebrating Valentine's Day, with authorities saying the romantic tradition encourages casual sex and runs against local cultural norms.

“These raids were done after we received reports from residents that the minimarts were selling condoms in an unregulated way, especially on Valentine's Day,” Makassar police official Jufri was quoted as saying in a local media report.

He added that employees of the minimarts were told not to sell contraceptives to teenagers, the report said.

Growing restrictions

Indonesia's highest Islamic clerical council declared Valentine's Day forbidden by Islamic law in 2012, saying it was contradictory to Muslim culture and teachings. But the vast majority of Indonesia's more than 220 million Muslims follow a moderate form of Islam in a country with sizeable Christian and Hindu minorities.

Rights groups have expressed concerns over the growing influence of Islamist groups, who have targeted how people lead their lives. A hardline group went around malls in East Java late last year to check whether outlets had ordered Muslim staff to wear Christmas apparel such as Santa hats.

In Indonesia's second largest city, Surabaya, government officials ordered schools to ban students from celebrating Valentine's Day “in or outside of school” because it ran counter to “cultural and social norms”, according to a copy of the letter on the city's official website.

Under Indonesia's decentralised system of government, regional authorities are allowed to issue bylaws without approval from the central government.

Scientists explain ‘dancing lights' mystery

The endless expanse of space and the near-total absence of other humans are no comfort when astronauts begin seeing strange lights. That is exactly what happened to astronaut Don Pettit in 2012, when he described seeing flashing lights in space. He claimed to have seen blue flashes as he was dozing off at his sleep station, saying they were “like luminous dancing fairies”.

Danish astronaut Andreas Mogensen on board the ISS in 2015 captured an incredible light show on Earth that could explain the strange blue lights. The culprit, scientists believe, is a weather phenomenon that is obstructed from view on Earth but is visible from space. “I am very pleased with the result and that researchers will be able to investigate these intriguing thunderstorms in more detail soon,” Mogensen said.

The thunderstorms are thought to have generated opposing charges in the atmosphere, causing giant columns of brilliant blue electricity to discharge more than seven miles above the clouds, towards space. With each flash lasting between 83 and 125 milliseconds, Mogensen needed a high-speed camera to capture images of the lightning columns, which easily explains why astronauts could barely catch a glimpse of the phenomenon. Even though the source of the “fairy lights” has been identified, scientists still can't fully explain it.

Google is pitting AI against AI to see if they work together

Researchers are experimenting with social dilemmas to see how AI agents interact

With the advancements researchers have made so far in the field of artificial intelligence, it's not hard to imagine a future where large swathes of human society, if not all of it, are managed by AI systems. So it's only prudent that, if we're planning to hand over day-to-day operations to multiple different AI systems, we ensure they're capable of working together for the greater good.

Google subsidiary DeepMind is doing just that, essentially locking two AI systems into a social situation to see whether they cooperate or instead resort to sabotaging each other to further their individual goals. These social experiments involve rules where individuals can profit from selfishness, but if everyone is selfish, they all lose.

Both experiments the researchers attempted were framed as basic video games that the AI agents played. In the first situation, called ‘Gathering', both AI agents were given the task of gathering virtual apples from a slowly replenishing central pile. They also have the option to “tag” the other player and temporarily take it out of the game, in which case the tagger has a precious few moments to collect more apples unopposed.
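
The incentive structure of ‘Gathering' can be sketched as a simple payoff table. This is an illustrative toy model, not DeepMind's actual environment (which is a gridworld with learned policies), and the payoff numbers are made up; only their ordering matters.

```python
# Toy payoff model of the 'Gathering' social dilemma (illustrative only).
# Each round, an agent either gathers apples or tags its opponent,
# temporarily freezing it out of the game.

def gathering_payoffs(action_a, action_b):
    """Return (payoff_a, payoff_b) in apples for one round.

    The numbers are hypothetical; the point is the ordering:
    tagging pays when the other agent gathers, but mutual tagging
    leaves everyone worse off than mutual gathering.
    """
    if action_a == "gather" and action_b == "gather":
        return 5, 5      # share the replenishing pile
    if action_a == "tag" and action_b == "gather":
        return 7, 1      # tagger collects alone while the other is frozen
    if action_a == "gather" and action_b == "tag":
        return 1, 7
    return 2, 2          # both waste time aiming beams instead of gathering

# Tagging tempts each agent individually (7 > 5), yet mutual tagging
# is collectively worse (2 + 2 < 5 + 5) -- the social dilemma.
```

Under plentiful apples the temptation to tag is weak relative to the cost of the lost gathering time, which matches the finding below that the agents mostly ignored each other until stocks ran low.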

In the second situation, ‘Wolfpack', two players have to hunt a third in an arena filled with obstacles. In this case, points are gained not only by the player doing the capturing, but by all players near the prey when it's caught.
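
The ‘Wolfpack' reward rule can be sketched the same way. Again this is a hypothetical simplification (the real environment involves obstacles and learned hunters), with distances and reward values chosen only for illustration.

```python
# Toy version of the 'Wolfpack' reward rule (illustrative only).
# When the prey is caught, every hunter within `radius` of it scores,
# so staying close to a cooperating partner pays off.

def wolfpack_rewards(hunter_positions, prey_position, radius=2, capture_reward=5):
    """Return one reward per hunter: all hunters within `radius` of the
    prey at capture time receive the reward; distant hunters get nothing."""
    def dist(a, b):
        # Manhattan distance on the grid
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    return [capture_reward if dist(p, prey_position) <= radius else 0
            for p in hunter_positions]

# Two hunters closing in together both score; a distant lone hunter does not:
rewards = wolfpack_rewards([(1, 1), (2, 0), (9, 9)], prey_position=(1, 0))
# rewards == [5, 5, 0]
```

Because the reward is shared by everyone near the capture rather than claimed by a single hunter, the selfish strategy and the cooperative strategy coincide here, which is why the incentives in this game point the opposite way to ‘Gathering'.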

The researchers found that both AI systems decided at different times to compete, and at other times to cooperate, depending on the situation. For instance, in the apple gathering game, both AI started out collecting apples and essentially ignored each other. As stocks dwindled, however, the AI increasingly took to tagging each other out of the game in order to get more apples. Additionally, when a more powerful AI was introduced into the situation, it began zapping the other players regardless of the size of the available stockpile.

So, does that mean the smarter AI thought that going on the offensive was the best strategy? The researchers suggest that since tagging required more computation, taking up valuable apple-gathering time, the smarter AI was likely first to fire because it knew it could manage the task without falling behind on resource gathering. The less capable AI were instead happy to cooperate when it was easier to do so.

Conversely, in the Wolfpack scenario, the smarter an AI, the more likely it was to cooperate with other players, as learning to work together to capture the prey also required more compute power.

These experiments cast light on the unique challenge AI developers will face in the years to come. They show that AI can change their behaviour based on the rules they face. If an AI is rewarded for being aggressive towards other systems (zap another player to get more apples for yourself), it will adopt that behaviour as early as possible. If it is rewarded for cooperating (everyone near the prey gets points when it is captured), it will attempt to work together with other systems. That means developers have to keep these reward rules in mind. An unchecked AI, faced with a situation involving limited resources, might be prone to prioritising itself above other systems, no matter the overall consequences. And if multiple AI are working together to handle things like a country's electric grid, economy, or traffic systems, in-fighting could have disastrous consequences.