Discovering the world of artificial intelligence with Axiom Zen.
Driving Fight Club Into Fountains
Your laugh for this week comes courtesy of a DC security bot, which quit its job by diving head-first into a fountain. No word on survivor benefits for the bereaved.

Plus, China has released a report on the future of AI in the country, researchers hope to use machine learning to help stop childhood PTSD, and a human army is helping teach AI how to drive.
Article of the Week
China Bets Big on Artificial Intelligence
“Artificial intelligence has become the new focus of international competition,” the report said. “We must take the initiative to firmly grasp the next stage of AI development to create a new competitive advantage, open the development of new industries and improve the protection of national security.”

China is racing ahead on technological innovation, from smashing records for quantum entanglement to fielding the newest maglev-train tech. Now it has set its sights on artificial intelligence, with plans for the industry to generate more than 400 billion yuan ($59 billion) by 2025. Government backing will be a huge advantage for what is still largely a nascent industry, as regulatory hurdles slow similar expansion in countries like the United States and Britain.
The Bleeding Edge
"Russia’s Internet giant Yandex has launched CatBoost, an open source machine learning service. The algorithm has already been integrated by the European Organization for Nuclear Research to analyze data from the Large Hadron Collider, the world’s most sophisticated experimental facility."

Knowledge sharing versus protecting intellectual property is a constant push-and-pull in the AI community. Yandex's decision to release CatBoost (categorical boosting) marks the first time a major Russian machine learning technology has been open-sourced. It's an important contribution to the machine learning community, and wonderful to see people who understand that a rising tide lifts all boats.
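CatBoost's headline feature is handling categorical features natively using target statistics computed in row order, so a row is never encoded using its own label. Here's a minimal sketch of that idea in plain Python; this is an illustration of the general technique, not Yandex's actual implementation, and the `prior` and `alpha` smoothing parameters are illustrative choices:

```python
def ordered_target_encode(categories, targets, prior=0.5, alpha=1.0):
    """Encode each categorical value with a smoothed mean of the
    targets seen in EARLIER rows only, avoiding target leakage."""
    counts, sums, encoded = {}, {}, []
    for cat, y in zip(categories, targets):
        n = counts.get(cat, 0)
        s = sums.get(cat, 0.0)
        # Smoothed running mean of targets for this category so far
        encoded.append((s + alpha * prior) / (n + alpha))
        counts[cat] = n + 1
        sums[cat] = s + y
    return encoded

cats = ["red", "red", "blue", "red", "blue"]
ys = [1, 0, 1, 1, 0]
print(ordered_target_encode(cats, ys))
```

The ordering trick is what lets a gradient-boosting model use high-cardinality categorical columns directly, rather than one-hot encoding them into thousands of sparse features.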
Google’s AI Fight Club Will Train Systems to Defend Against Future Cyberattacks
"This AI fight club will feature three adversarial challenges. The first (non-targeted adversarial attack) involves getting algorithms to confuse a machine learning system so it won’t work properly. Another battle (targeted adversarial attack) requires training one AI to force another to classify data incorrectly. The third challenge (defense against adversarial attacks) focuses on beefing up a smart system’s defenses."

It feels like cybersecurity is growing both more important and more difficult every day. The reality is that it isn't growing more difficult, but rather the consequences of failing to stop an attack are growing more dire. Remember when a virus would infect your computer with programs that opened pop-up windows in your browser, and nothing else? Those were the days. In light of recent attacks, the community knows AI needs to be better equipped to fight back, and that's what Google and Kaggle hope to achieve with this competition. May the best AI win!
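The first challenge type, a non-targeted adversarial attack, can be shown in miniature: nudge an input in the direction that increases the model's loss until it is misclassified. Below is a toy fast-gradient-sign sketch against a hand-wired logistic model; the weights and epsilon are made-up values for illustration, and this is not code from the Google/Kaggle competition:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_attack(x, w, b, y, eps):
    """Perturb input x to increase the logistic loss for true label y.
    The loss gradient w.r.t. x_i is (p - y) * w_i, so we step each
    feature by eps times the sign of that gradient."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b = [2.0, -1.0], 0.0        # hypothetical trained weights
x, y = [1.0, 0.5], 1           # an input the model classifies correctly
x_adv = fgsm_attack(x, w, b, y, eps=0.6)
p_before = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
p_after = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
print(p_before, p_after)       # confidence in the true class drops
```

A small, targeted perturbation flips the prediction even though the input barely changed, which is exactly the failure mode the competition's defense track asks systems to resist.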
"There is a growing literature indicating that PTSD may be prevented, if a child at risk (i.e. the 10-40%) is identified early enough. How can we know which child is at risk for PTSD – as early as possible after trauma exposure – so that this risk may be mitigated?"

There are few things that feel more important than protecting children from trauma, and machine learning is being leveraged to do just that. The idea is that resource-strapped service organizations can't help everyone, but machine learning can identify at-risk kids early and make sure they are the ones who get the help they need.
Artificially Intelligent Criminal Justice Reform
"As AI technology continues to improve over the next several decades, it will likely expand from simple risk assessment tools to more complex technologies with potential to remove human bias in the field itself."

The criminal justice system is broken, especially in America, though similar problems exist around the world. Bias in policing leads to bias in sentencing, which leads to bias in our criminal populations and creates cycles of systemic poverty. Solutions are complex, and overhauling the entire system simply isn't an option. One path through the morass is introducing artificial intelligence. The problem then becomes creating AI that doesn't inherit the same biases, or finding training data that isn't flawed. But it's a problem being actively worked on, and it offers a first glimpse of hope for many who have been advocating for change for decades.
"Is that a small child, or a large dog? Or a trash can? Any artificial intelligence controlling a two-ton chunk of steel must learn how to identify such things, and make sense of an often confusing world."

Training data remains the most resource-intensive requirement in AI development, and training a model takes buckets of it. That's why Mighty AI is hiring humans to do the heavy lifting. It pays people a few cents per task to build up a library of labeled data, which it then provides to its clients to train AI.
“By far the greatest danger of artificial intelligence is that people conclude too early that they understand it.” ― Eliezer Yudkowsky
AI@AZ · #350-980 Howe Street · Vancouver BC V6Z 0C8 · Canada
Unsubscribe | View in browser