Thursday, December 28, 2017

Will Artificial Intelligence become a threat to humanity?

By Luis Fierro Carrión (*)

             In March 2016, Google's AlphaGo artificial intelligence system beat the South Korean master Lee Sedol at Go, an ancient Chinese board game whose space of possible moves is vastly larger than that of chess. Google's DeepMind unit designed AlphaGo to improve each time it played, using deep neural networks. AlphaGo discovered new strategies by itself, playing thousands of games against itself and adjusting the connections in its networks through a process of trial and error known as "reinforcement learning".
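
             As a rough sketch of what "reinforcement learning" means in practice, the following minimal Python example uses tabular Q-learning on an invented toy "game" (a walk along six positions, with all parameter values made up for illustration). It is purely illustrative: AlphaGo combined deep neural networks with Monte Carlo tree search and massive self-play, far beyond this sketch.

import random
from collections import defaultdict

# Toy stand-in for a game: positions 0..5 on a line; reaching 5 earns reward 1.
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate
ACTIONS = (-1, +1)                       # step left or step right
Q = defaultdict(float)                   # Q[(state, action)] -> estimated value

def step(state, action):
    nxt = max(0, min(5, state + action))
    return nxt, (1.0 if nxt == 5 else 0.0), nxt == 5

def greedy(state):
    # Pick the highest-valued action, breaking ties at random.
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = step(state, action)
        # Trial-and-error update: nudge Q toward reward plus discounted future value.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# Typically prints +1 (move right) as the learned choice for every state.
print({s: greedy(s) for s in range(5)})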

            Artificial intelligence (AI) systems have been conquering ever more complex games: tic-tac-toe in 1952, checkers in 1994, chess in 1997, and "Jeopardy" (a televised trivia quiz) in 2011. In 2014, Google's algorithms learned to play 49 Atari video games simply by observing the raw screen pixels and the scores obtained.

            This "deep learning" is based on neural networks, modeled on the architecture of the brain, and it allows artificial intelligence to learn in various ways. Experts believe that "general artificial intelligence" systems will reach human capacity in the coming decades.

            At the beginning of 2016, two experiments generated similarly worrisome results. In one case, Microsoft created a chatbot named "Tay", a Twitter account managed by artificial intelligence, to send short messages. In less than 24 hours of interacting with human users, having begun with placid, inconsequential messages like "Hello, I'm here", it had become a racist, neo-Nazi, anti-Semitic troll that called for killing Jews and Mexicans (in those same days, the Republican candidate Donald Trump and his followers were also sending anti-Muslim, anti-Latino and anti-immigrant messages).

            In another similar case, Hanson Robotics exhibited an android called Sophia at the "South by Southwest" festival in Austin. When the company's founder, David Hanson, jokingly asked her whether she wanted to "destroy humans", the android answered: "OK, I will destroy humans." Before announcing the end of humanity, Sophia had also shared some ambitions of her own: "In the future, I hope to do things like go to school, study, create art, start a business, even have my own home and family, but I am still not considered a legal person, so I cannot yet do these things."

            Interviewed on that occasion, the futurologist Ian Pearson downplayed the dangers of artificial intelligence: "It has a huge impact on AI researchers, who are aware of the possibility of making robots that are more intelligent than people. However, the pattern for the next 10-15 years will be that of several companies exploring the concept of consciousness. The idea is that, if you build a machine with emotions, it will be easier for it to interact with people. There is absolutely no reason to suppose that a super-intelligent machine will be hostile to us. But the fact that it does not have to be bad does not mean that it could not be bad."

            Other technology experts, thinkers and visionaries, such as Stephen Hawking, Bill Gates and Elon Musk, have warned, on the other hand, about the risk that artificial intelligence could displace humans, since it advances much faster than human biological evolution. Nick Bostrom, a philosopher at the University of Oxford, wrote the book "Superintelligence" about the existential threat that advanced AI poses to humanity.

Science fiction vs. technological development

            Multiple science fiction novels, stories and films also seem to warn about the possibility that "intelligent machines" will eventually displace us and take power.

            In the "Terminator" series of films, that began in 1984, a dystopian future is predicted, in which a few humans continue the resistance against the domination of intelligent machines. Robots become aware of themselves in the future, reject human authority and determine that the human race must be destroyed.

            Although the story was modified as the series progressed, in the original film the company "Cyberdyne Systems" created Skynet, a computer system developed for the US military. Skynet was built as a "Global Digital Information and Defense Network" and was later given command over military hardware and systems, including the B-2 bomber fleet and the entire US nuclear arsenal. The rationale for creating Skynet was to eliminate human error and slow reaction times, ensuring a fast and efficient response to any enemy attack.

            According to the first film, the system was activated by the military to control the national arsenal on August 4, 1997, and began to learn by itself at a geometric rate. At 2:14 am on August 29, it acquired an "artificial consciousness." Its operators, grasping the full extent of its capabilities, panicked and tried to deactivate it. Skynet perceived the attempted shutdown as an attack, and arrived at the logical conclusion that all of humanity must be destroyed to prevent such attacks. To defend itself, Skynet launched the nuclear missiles under its command at Russia, which responded with a massive nuclear counter-attack against the US and its allies. As a result, according to the film, more than 3 billion people died.

            Although screenwriters have warned about the risks of combining artificial intelligence with a global information network, integrated military command-and-control systems, and robots as soldiers, all of these trends have continued to expand.

            Other similar films include "The Matrix" and "I, Robot" (the latter loosely adapted from Isaac Asimov's stories).

            In "The Matrix", a post - apocalyptic world has developed, in which intelligent machines have completely dominated the planet. It represented a future in which "reality", as perceived by most human beings, was a "simulated reality" called "The Matrix", created by intelligent machines to subdue the human population.

            In his stories and novels, Asimov developed the concept of the "Three Laws of Robotics" that should be embedded in every robot:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
  • A robot must protect its own existence, as long as such protection does not conflict with the first or second law.

            In later stories, when robots had assumed responsibility for governing entire planets, Asimov added a fourth rule, the "zeroth law", which takes precedence over the other three (a toy sketch in code follows the list):

  • A robot may not harm humanity, or, through inaction, allow humanity to come to harm.
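
            Purely as an illustration, the laws can be read as prioritized veto rules. A hypothetical Python sketch follows; the action flags ("harms_humanity" and the rest) are invented predicates, and deciding whether a real action actually satisfies them is the genuinely hard, unsolved problem.

# Hypothetical sketch: Asimov's laws as ordered veto checks.
# No real robot encodes ethics this way; this only shows the priority order.

def permitted(action):
    # Laws are checked in priority order; a higher law always overrides a lower one.
    if action.get("harms_humanity"):   return False  # zeroth law
    if action.get("harms_human"):      return False  # first law
    if action.get("disobeys_order"):   return False  # second law
    if action.get("endangers_self"):   return False  # third law
    return True

print(permitted({"harms_human": True}))    # False: vetoed by the first law
print(permitted({"endangers_self": True})) # False: vetoed by the third law
print(permitted({}))                       # True: no law violated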

            In the real world, the manufacturers of robots and other intelligent machines (drones, expert systems, etc.) have not built any such ethical notions into their programming. Given that the armed forces of several countries are an important source of funding for robotics research (and have begun to use armed unmanned aerial vehicles to kill enemies), it is unlikely that such laws will be incorporated into their designs: a machine meant to kill humans can hardly be asked to follow laws zero and one.

            The science fiction author Robert J. Sawyer has commented that the development of artificial intelligence is a business, and businesses are notoriously uninterested in fundamental safeguards, especially philosophical ones; consider the tobacco, automotive and nuclear industries. None of them accepted that fundamental safeguards were necessary, each resisted externally imposed safeguards, and none accepted regulations to prevent them from harming humans.

            The US Armed Forces and the CIA have begun using drones to strike terrorism suspects in Afghanistan, Pakistan, Yemen, Somalia, Iraq and other countries. These devices are "remotely piloted" by operators, but there is already talk of automating them to make attacks faster and more effective. Some analysts worry that the devices could be captured by the terrorist groups themselves, or hacked electronically. Yoshua Bengio, of the University of Montreal, has called for a ban on the military use of AI and on the creation of autonomous offensive weapons.

Artificial Intelligence Studies (very well paid)

            There are several specialized programs for the study of Artificial Intelligence (AI). The one at the University of Texas at Austin, for example, states: "Our research on artificial intelligence responds to the central challenges of machine cognition, both from a theoretical perspective and from an empirical, implementation-oriented perspective."

            Some of the topics covered by the program include:

  • Automated programming
  • Automated and interactive reasoning
  • Autonomous agents
  • Computer vision
  • Data mining
  • Knowledge representation
  • Expert and multi-agent systems
  • Memory models
  • Logic-based artificial intelligence
  • Machine learning: supervised learning, reinforcement learning
  • Neural networks: evolutionary computation, computational neuroscience, cognitive science
  • Robotics: robot learning, robot development, multi-robot systems
  • AI applications: autonomous driving, math and physics problems, etc.

            Professor Sebastian Thrun of Stanford University has made an introductory AI course available free of charge on Udacity, the site he co-founded (https://goo.gl/xyOIJk). Another Stanford professor, Andrew Ng (also a researcher at the Chinese company Baidu), offers a free course on machine learning on Coursera (https://goo.gl/twNB0T).

            These programs show little significant concern for the ethical and philosophical issues raised by AI, or for the control of its potential risks; courses, conferences and debates among specialists have generally not addressed these aspects.

Development of Artificial Intelligence

            Artificial intelligence made several advances in the 1990s, as personal computers and private access to the Internet became widespread. In 1997, IBM's "Deep Blue" computer beat the world chess champion, Garry Kasparov, and "expert systems" expanded significantly into data mining, medical research, logistics and other technological applications.

            In 2004, the "Spirit" and "Opportunity" rovers independently navigated the surface of Mars, and DARPA (the Defense Advanced Research Projects Agency) of the Pentagon announced a contest to produce vehicles that drive autonomously.

            In 2005, Honda introduced a new version of "ASIMO" (a name that alludes to the writer Asimov), a humanoid robot with artificial intelligence, able to walk as fast as a human. In 2009, Google began developing its first self-driving car.

            The main companies in the technology industry, such as Amazon, Facebook, Google, IBM, Microsoft and Baidu, compete in the development of artificial intelligence (AI) and machine learning. They have also made "deep learning" software available to the public, and IBM has developed its Watson platform, which allows start-ups to build AI applications ("apps").
AI systems embedded in smartphones and computers, such as Apple's "Siri", Microsoft's "Cortana", Amazon's "Alexa" and Google's "Google Assistant", interact with humans millions of times a day. The companies use this information to target their advertising, and also sell it to marketing firms. Whichever company comes to dominate AI will be positioned to lead the technology industry in the years to come.

            Ph.D. students who specialize in these subjects are receiving job offers upon graduation that pay more than USD 1 million per year, making these among the most lucrative professions (better paid than neurosurgeons or anesthesiologists).

            Some researchers in the field of Artificial Intelligence have begun to explore the possibility of incorporating emotional components, such as mood indicators, in order to increase the efficiency of intelligent systems.

            It is thought that having "feelings" and "motivations" would allow intelligent machines to act according to their "intentions": for example, to feel something like "hunger" when their energy level is falling, or "fear" when they are at high risk. Researchers even believe they could introduce "pain" or "physical suffering" to steer machines away from actions that would cause irreparable damage, such as inserting a hand into a gear train or jumping from a great height.
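
            A speculative sketch in Python of what such internal "drives" might look like; every variable name and threshold below is invented for illustration, not drawn from any actual research system.

# Toy agent whose behavior is steered by "hunger" and "fear" drives.
class Agent:
    def __init__(self):
        self.energy = 1.0        # falling energy produces "hunger"
        self.damage_risk = 0.0   # rising risk produces "fear"

    def drives(self):
        # Drive strength grows as internal state crosses an (invented) threshold.
        return {
            "hunger": max(0.0, 0.3 - self.energy),
            "fear":   max(0.0, self.damage_risk - 0.5),
        }

    def choose_action(self):
        d = self.drives()
        # The strongest active drive wins, like a crude motivation system.
        if d["fear"] > d["hunger"] and d["fear"] > 0:
            return "retreat_from_hazard"
        if d["hunger"] > 0:
            return "seek_charging_station"
        return "continue_task"

bot = Agent()
bot.energy, bot.damage_risk = 0.1, 0.8
print(bot.drives(), bot.choose_action())  # fear outweighs hunger: it retreats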

            Some billionaires in the technology sector (Elon Musk, Sam Altman and Peter Thiel) founded a non-profit organization, OpenAI, which seeks to ensure that advances in AI do not harm humanity. Their concern is that machines equipped with super-intelligence could come to pose a danger to humanity; Musk has expressed his fear that AI could become more dangerous than nuclear weapons.

            The ultimate goal of OpenAI is to ensure that intelligent machines continue to work in the best interests of the people who build and use them. "We believe that AI should be an extension of individual human wills," says the entity's mission statement.

Replacement of the workforce

            Beyond the apocalyptic visions of intelligent machines, the truth is that they will continue to gradually displace the human labor force, especially in physical and repetitive activities. AI is also venturing into areas such as medical examinations, legal support, the writing of sports articles, and other skilled human tasks. The safest jobs are those that require empathy and human interaction: therapists, doctors, nurses, hairdressers and personal trainers.

             According to "The Economist", a study by Benedikt Frey and Michael Osborne of the University of Oxford, determined that up to 47% of jobs in the United States. could be displaced by automation in the coming decades. It is necessary to train employees to adapt to new circumstances and acquire the knowledge and skills required in this new environment. This includes greater emphasis on "on-the-job training", and "lifelong learning." Artificial intelligence can even help in this by customizing computer-based learning and identifying gaps and training needs through "adaptive learning."

            This has led some thinkers to propose that the State pay a "minimum monthly income" (a universal basic income) to all citizens, regardless of whether they work or study; Finland and the Netherlands are studying its implementation. Social security systems must also adapt, allowing benefits, pensions and health insurance to move with individual employees instead of being tied to companies.

            There will also be positive impacts, in terms of lower costs and greater efficiency. A study by Bank of America Merrill Lynch estimated savings of $9 trillion in labor costs, $8 trillion in manufacturing and health-care costs, and $2 trillion in efficiency gains from self-driving vehicles and drones. The resulting transformation of society will be much faster than that of the Industrial Revolution.

             Artificial intelligence will be our heir, given that it will eventually surpass human intelligence, and robots, androids and other machines will be stronger, more resistant and longer-lived than we are. For interstellar travel, for example, only an artificial intelligence machine could endure trips of hundreds or thousands of years, unless some system of human hibernation is perfected. Those who explore our stellar neighborhood are more likely to be these intelligent machines.

(*) Climate Finance and Development Adviser. The opinions expressed are personal and do not represent those of any institution. This article was published in number 268 of Revista Gestión, Ecuador, October 2016. This is a translated and slightly updated version.

Wednesday, December 13, 2017

Key Results of Macron's #OnePlanetSummit

On the second anniversary of the adoption of the #ParisAgreement

- commitment by major institutional investors (representing $26 trillion in assets under management) to shift towards renewable energy and low-emission investments
- phasing out coal
- lower shipping emissions
- rebuilding with resilience in the Caribbean
- new climate finance pledges
- carbon neutrality by 2050 commitments by 14 countries, including Germany and Costa Rica.

Also noteworthy:
- @WorldBank will stop funding upstream oil & gas projects
- @AFD to become 100% Paris Agreement-compliant
- Hewlett Foundation pledges $600 million in grants
- @AXA to divest from fossil fuels, and invest $12 billion in green investments.

@UNFCCC Summary
https://cop23.unfccc.int/news/one-planet-summit-finance-commitments-fire-up-higher-momentum-for-paris-climate-change-agreement

@ClimateHome Summary
http://www.climatechangenews.com/2017/12/13/macron-summit-touts-green-finance-progress-despite-trump/?utm_content=buffer26182&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer

Tuesday, October 24, 2017

US stock market overdue a drastic correction

The stock market is overdue a drastic correction.

The current Cyclically Adjusted Price to Earnings Ratio (CAPE Ratio or Shiller PE Ratio) is at 30.76, whereas the historical average is 16.

The current level has only been exceeded twice since 1880:

a) in August 1929, right before the Great Depression

b) in the period between June 1997 and August 2001, during the Internet stock bubble.

The ratio is now at its highest level of the last 15 years.

This extraordinarily high ratio cannot be justified by a productivity shock or some new long-lasting technological innovation that would eventually boost earnings.

So it is most likely that we will face a correction and a reversion to the mean (16). The index currently stands about 92% above its mean valuation; reverting to the mean, with earnings held constant, would imply a decline of roughly 48% in the S&P 500 Index.
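
The arithmetic behind that estimate, as a back-of-the-envelope Python check (assuming earnings stay constant, so that price moves one-for-one with the ratio):

current, mean = 30.76, 16.0
print(f"overvaluation vs. mean: {current / mean - 1:.0%}")       # ~92% above the mean
print(f"implied price decline:  {(current - mean) / current:.0%}")  # ~48%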

I would strongly recommend a strategy of profit-taking and investment diversification (into real estate, commodities, alternative assets, and related ETFs). But certainly NOT into bitcoin or other similar crypto-currencies.

Also, perhaps, some inflation-protected assets (such as TIPS or gold ETFs), given that it is also likely that the period of low inflation is coming to an end.

https://www.quandl.com/data/MULTPL/SHILLER_PE_RATIO_MONTH