Artificial intelligence, in the broad sense, is one of the hottest news topics of 2018. Will it create jobs or destroy them? How are researchers, end-user companies, and governments responding to the global race for AI? This week, we look overseas to examine China’s latest efforts to become the global leader, not only as a user of automation, but also as a producer of the AI chips that power robots.
Robotics Business Review has partnered with Abishur Prakash at Center for Innovating the Future to provide its members with cutting-edge insights into recent developments in international robotics, AI, and unmanned systems. Are you ready to be updated?
China develops AI chips for competitive advantage
New innovations show that China is making progress in developing its own AI processors to gain global leadership. As machine learning and related technologies have become mainstream in recent years, such chips have mainly come from the U.S.
Thinker is an advanced AI chip developed by the Tsinghua University Institute of Microelectronics in China. It can operate for an entire year on just eight AA batteries. Thinker can easily switch between tasks that require different neural networks, such as going from identifying faces to processing verbal commands in Mandarin.
Less than 12 months ago, the U.S. had the clear advantage in AI due to its control over the design and manufacture of AI chips. There was little taking place in China (publicly), with most chips either being knockoffs or unlikely competitors.
Thinker or other AI chips from China probably won’t beat U.S. products in winning business anytime soon. However, China could well use Thinker and similar AI chips to satisfy its own domestic industry. This will reduce China’s dependency on foreign technology (a geopolitical win) and cause foreign companies, like Nvidia or ARM, to lose out in China.
Fearful workers in India turn to chatbot therapists
Workers in India’s information technology sector, worried about losing their jobs to competition or automation, are turning to chatbots. One called Wysa works as a chatbot therapist, helping workers share their concerns and feelings.
Another service, named YourDOST — “dost” means “friend” in Hindi — connects people with a network of therapists and psychologists for a fraction of the usual cost of human counseling.
Digital chatbots acting as therapists may not seem newsworthy, but they point to a type or stage of automation I have talked about in the past: indirect automation.
While YourDOST connects people with real therapists — for now — Wysa may better reflect the future of therapy. As more people open up about their feelings to digital chatbots, they might not need human therapists as much, if at all.
Will governments view this as a kind of automation, whereby therapists aren’t actually losing their jobs, but their clients are turning to robots instead? As therapists and other professions inside and outside the healthcare industry are affected by indirect automation, there is no telling how they will respond.
Perhaps they will talk about robots in the same harsh way that port workers in Long Beach, Calif., talk about being replaced by mobile robots.
EU funds new project under Horizon 2020
The European Union has co-funded the REELER project since 2016 through its Horizon 2020 initiative. Unlike other projects, which focus on agriculture or workplace automation, REELER is one of a kind. It stands for “Responsible Ethical Learning with Robots.”
The project’s objective is to develop robots that have “empirically-based knowledge of human needs and social concerns.” This intelligence would be developed by monitoring “proximity-based human-machine ethics.”
In other words, REELER will work by studying how humans interact with machines on an ethical level and then program these findings into the robots. This is part of what AI strategist Aseem Prakash has called “Coexisting with Robots.”
What kind of understanding will REELER enable in robots? If machines are to learn from human interactions, then this study should be international. It must look at human-machine interactions in every culture.
If this doesn’t happen, then robots using REELER’s findings will have an incomplete and biased view of people from different places. If a robot using such “ethical” programming malfunctions in South America, the Middle East, Africa, or Asia, governments in those regions may blame the Western company for not properly understanding their culture and society.