In this section of the paper, we draw together the material presented so far with some key insights about the underlying mechanisms, to help explain the differential trajectory of South Korea in comparison with other Asian growth economies. We begin by outlining some key features of “innovation governance” in the advanced regional innovation systems listed at the end of the previous section, before comparing and contrasting that emergent new “innovation governance” mode with what has typified, or diverged from, it elsewhere. First, we may say that high-tech platform ecosystems or complexes like Silicon Valley, Cambridge and Israel do not display strong top-down governmental modes of economic decision-making in policy or strategy. In other words, there is seldom, if ever, a peak committee whose economic deliberations directly affect specific platform industries by producing detailed action-lines that favour or disfavour particular technologies. That is not to say that, in a general way, broad bundles of “cross-cutting” new technology capabilities, or problems that may occur in the form of “technology pathologies”, cannot be fashioned. These may evolve as broad frameworks for alerting or sensitising “actors of consequence” to priorities given clearer recognition by “policy champions”. A good example is “Homeland Security”, which consists of many diverse but technologically interlocking targets, problems and opportunities. In the US, as many as 17 different information and intelligence agencies engage directly in intelligence gathering at home and abroad. This involves mobilising “Big Data” gathering and analysis, algorithm writing, cybersecurity and cyberwarfare (including cyberforensics), drone design and applications, and multiple kinds of tracking, verifying, intercepting and, if necessary, arresting or otherwise preventing “technology pathologies” from threatening individual lives and communities. Without labouring the point, such “crossover” innovation opportunities also occur, in different combinations but including overlaps, across the boundaries of “Big Platforms” such as Biomedicine, Elderly Healthcare, Artificial Intelligence, Renewable Energy and Sustainable Mobility, sometimes “fuzzily” designed to meet “Societal Grand Challenges”.
Such often “post-political” activity bundles are moulded by “policy champions” of various kinds. For example, Artificial Intelligence, with its close linkages to Robotics and Nanotechnology, has a few “protean”, influential champions in the US, such as Ray Kurzweil, an apologist for AI for decades (Ford, 2015; Barrat, 2013). Kurzweil himself is widely seen as an attention-seeking entrepreneur and a proselytiser for only the positive implications of AI. He is influential, his pedagogical efforts being sponsored by, amongst other Californian businesses, Google, Genentech and Cisco Systems. His inventive effort has touched such technologies as optical character recognition, computer-generated speech and music synthesis, all of which relate to the augmentation of human senses. He has been awarded some 20 honorary doctorates from the likes of Babson College, Bloomfield College, Clarkson University, DePaul University, Hofstra University, Michigan State University, Rensselaer Polytechnic Institute and Worcester Polytechnic Institute, and has been honoured by US presidents Johnson, Reagan and Clinton. Among his awards from the technological, humanities and musical communities are: the 2000 Lemelson-MIT Prize, at $500,000 the largest US award for invention and innovation; the 1999 National Medal of Technology, the nation’s highest honour in technology; the 1998 Stevie Wonder/SAP “Vision Award” for Product of the Year, a $150,000 prize used by the Kurzweil Foundation to provide scholarships to blind students; and the 2008 American Creativity Association Lifetime Achievement Award. It can readily be agreed that the optimist Kurzweil is widely seen as a “crossover” innovator and an AI “champion”, despite his cultist association with Silicon Valley’s “Singularity University” (reminiscent in some ways of L. Ron Hubbard and “Scientology”), which Kurzweil co-founded in 2008.
Without dwelling on the “cultist” evangelising of Kurzweil’s obsession with a fictive version of the astrophysical “singularity” – the point in a black hole from which even light can no longer escape – three things follow that are pertinent to our use of his curriculum vitae in support of the function of “champions” as arbiters of post-political action framing. First, it is noteworthy to what extent Kurzweil’s innovative career expresses crossover innovativeness: the invention of a classical-music-synthesising computer, the design of computer technologies such as machine reading to assist the disabled and to enrich the arts, and award-winning film production. Second, the institutional nodes with which Kurzweil interacts are solid entities in the worlds of academic research, entrepreneurship, government and large corporations. After long advisory roles with the firms listed above, he was in 2012 appointed a director of engineering at Google, having worked with Google’s co-founder Larry Page on special projects over several years. His executive appointment occurred as Google began assembling the largest artificial intelligence (AI) laboratory in existence. Acquisitions involved military robotics firm Boston Dynamics, thermostat maker Nest and the cutting-edge London-based AI firm DeepMind. These were added to smaller purchases of Bot & Dolly, Meka Robotics, Holomni, Redwood Robotics and Schaft, and another AI startup, DNNresearch. Google also hired Geoffrey Hinton, a British computer scientist rated the world’s leading expert on neural networks (Cadwalladr, 2014). Finally, Kurzweil is an avid publicist for his serious and his more questionable analyses and predictions, having published seven books translated into 11 languages.
No champion of any other technology – and specifically of AI (with robotics [Ford, 2015] and nanotechnologies) – has anywhere near as “protean” an influence on key decision actors, ranging from DARPA to Google, as the aforementioned Ray Kurzweil, but others take on relevant roles from more sceptical viewpoints. Three of these, cited in Barrat (2013), are I. J. Good, Eliezer Yudkowsky and Stephen Omohundro. Good, who died at 92 in 2009, was a British expatriate mathematician and a former Bletchley Park codebreaking colleague of Alan Turing. Good was responsible for coining the term “intelligence explosion” to describe the anticipated impact of machine intelligence once it surpassed that of humans. Stanley Kubrick turned to Good as adviser on the 1968 film 2001: A Space Odyssey; it was Jack Good, with his insights on intelligent machines, who helped create the infamous character of HAL, the AI computer in the film. In Good’s seminal paper “Speculations concerning the first ultra-intelligent machine” he defined this – a forerunner to “Singularity” thinking – as follows:
Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man (sic) however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. (Good, 1965)
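Good’s logic can be restated compactly. As a heuristic gloss of our own – the notation is ours, not Good’s – the argument is a simple recurrence: each machine generation designs a successor more intelligent than itself, so that once a human-level threshold is crossed the sequence runs away:

```latex
% A heuristic gloss on Good (1965); the notation is ours, not his.
% I_n : intelligence of the n-th machine generation
% I_h : the human-level threshold at which machine design is itself automated
\[
  I_{n+1} = f(I_n), \qquad f(I) > I \quad \text{for all } I \ge I_h ,
\]
\[
  \text{so if } I_0 \ge I_h \text{, then } I_0 < I_1 < I_2 < \cdots
  \quad \text{(the ``intelligence explosion'').}
\]
```

Good’s proviso about “docility” then amounts to an external constraint on $f$ that nothing in the recurrence itself supplies – which is precisely the point that the sceptics discussed below press.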
Accordingly, Good was a “champion”, influential at the highest governmental, academic and corporate levels, with crossover theoretical interests running from Bayesian mathematics through computer programming, design and manufacturing to film consultancy. Moreover, he was careful not to take an over-optimistic line on the controllability of AI unless – as he wrote – “docility” could be built into the resulting technology. Other more sceptical AI “champions”, who take a more practical but still pessimistically inclined view regarding the difficulty of ensuring “docility” from future AI or “artificial general intelligence” (AGI) as they term it, include gurus such as Eliezer Yudkowsky and Stephen Omohundro, noted earlier and profiled extensively in Barrat (2013). Omohundro is optimistic, but his optimism is qualified by his underlying notion that all AI is potentially lethal because of the well-known software engineering problem that much programming is bad work, i.e. sloppy and incompetent, as Microsoft Word users have known for decades from its almost constant de-bugging upgrades. Bad programming is estimated to cost the US economy $60 billion per year. This implies a vast need for “self-improving software”, a variety of “evolutionary programming” that may evolve from currently practised “machine learning” (a minimal sketch of the idea follows the quotation below). Article space disallows fuller explication of such potentially influential views, save to say that Yudkowsky – who devised the “AI Box”, a Turing-test-like game that led some of its players to believe that a “thinking engine” had been invented – insists AGI would be catastrophic for humanity unless it is designed to be “Friendly AI”. But, as Barrat (2013) observes, critics argue that progressing towards AGI is necessitated by the even greater dangers of “artificial specialised intelligence” (ASI) falling into the hands of:
“so many reckless and dangerous nations on the planet – North Korea and Iran for example – and organised crime in Russia and state-sponsored criminals in China launching ... cyberattacks, relinquishment would simply cede the future to crackpots and gangsters” (Barrat, 2013, pp. 200–201).
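To make the “self-improving software” notion concrete, the following is a minimal, purely illustrative Python sketch of evolutionary programming in its broadest sense: candidate solutions are repeatedly mutated and selected, so the program improves its own output over generations rather than relying on hand-written code. The target vector, function names and parameters are our own illustrative assumptions; nothing here is drawn from Omohundro’s or Yudkowsky’s actual work.

```python
# Toy "evolutionary programming" sketch: mutation plus selection improves
# candidate solutions over generations. Purely illustrative; the target
# and all parameters are this paper's assumptions, not any author's method.
import random

TARGET = [3.0, -1.5, 0.5]  # hypothetical "correct" program parameters


def fitness(candidate):
    # Negative squared error: higher is better (closer to TARGET).
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))


def mutate(candidate, rate=0.1):
    # Small random perturbations stand in for rewrites of the "code".
    return [c + random.gauss(0, rate) for c in candidate]


def evolve(pop_size=50, generations=200):
    population = [[random.uniform(-5, 5) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 5]  # keep the fittest 20%
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print("best candidate:", [round(c, 2) for c in best])
```

The point of such a toy is only that selection over random variation substitutes for hand-written, potentially sloppy code – exactly the substitution Omohundro anticipates, and worries about, at far greater scale.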
Hence we see the origins of the engineer’s linear, determinist thinking, enlarged prodigiously and apocalyptically. The initial “mindlessness” of contemporary incremental innovators is captured in the following statement from Uber co-founder and founding Chief Technology Officer (CTO) Oscar Salazar, who admitted:
“We are adding technology to a society without thinking about the consequences. I think government, industry and society need to work more together, because it is going to get crazier and crazier.” (Fairchild, 2017).
Here – belatedly – is recognition that, as governments fail adequately to regulate technological experiments, good champions are also hard to find when their infantile aspirations are mainly “disruptive” (Christensen, 1997) and informed by the likes of Facebook founder Mark Zuckerberg’s earlier mission statement to “move fast and break things” (the origin of much bad programming; Taplin, 2017). It has finally dawned on the Ubernauts that, as Fairchild (2017) also notes:
“Advances in artificial intelligence and automation could mean as many as 50% of today’s US jobs will go away, according to some estimates. Joined on stage by other high-profile members of the tech community, (chair Kara) Swisher forced her panelists to defend Silicon Valley’s seeming incapability to take responsibility for the downstream effects of its innovation.” (Ibid.)
Most governments and tech entrepreneurs excuse their mindlessness regarding the effects of AI automation upon workforces by stressing the importance of retooling and reskilling the workforce for the tech jobs of the future. Being in the main engineers, they completely fail to see the paradox that they are themselves responsible for the future absence of the very positions it will be futile to train anyone for (Streeck, 2016). We shall return to this conundrum of engineering’s linear model of non-reflective obtuseness later, but for the moment we cite the estimate of 64 million US jobs (47% of the total) having the potential to be automated, and thus to disappear, within “perhaps a decade or two” (Frey & Osborne, 2013).
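For transparency, the employment base implied by these two figures follows from simple arithmetic (the rounding is ours, not Frey & Osborne’s):

```latex
\[
  \frac{64 \text{ million jobs}}{0.47} \approx 136 \text{ million jobs}
  \quad \text{(the implied total US employment base).}
\]
```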