Silicon Valley Versus Washington: Why Tradecraft Must Be Equal To Technology – Forbes




Silicon Valley at times seems on an endless march to replace all of humanity with code. As the data-driven revolution has given way to the deep learning revolution, the Valley’s automation mindset has come for nearly every corner of the modern enterprise. One of the challenges of this sweeping movement towards machine-based analytics is that technology has become more prized than tradecraft. As companies embrace one-click analytics that remove human intuition entirely from the data analytics workflow, are we rushing towards a dangerous world in which we no longer understand what our data is or how our tools yield the results they do?

There is no more striking contrast between the worlds of Silicon Valley and Washington than the divide between technology and tradecraft.

Silicon Valley prizes technology above all else, rushing to mechanize every conceivable human activity and working to strip away every last human input for maximal efficiency. The ultimate outcome envisioned by many companies is one-click analytics where algorithmic output is available without human interpretation, verification or validation. To the Valley, a human is an expensive and fallible obstacle to be overcome.

In stark contrast, the vastly higher stakes of Washington have historically prioritized tradecraft over technology. If a streaming site gives a user a sub-optimal video suggestion, the only consequence is perhaps a few fractions of a penny of lost ad revenue. If an intelligence analyst confuses a school with a weapons factory, the results could lead to war. This history has produced a mindset in which algorithms are viewed as powerful tools that can assist human analysts, but only as part of a symbiotic human-machine analytic process in which every datapoint is verified, every algorithm interrogated and every analytic result triangulated.

Technology companies are interested in improving their algorithms and in understanding when they go particularly wrong, but the typical algorithmic decision-making process remains largely a black box. In contrast, an intelligence assessment must be accompanied by a careful explanation of the entire analytic workflow from start to finish. Depending on the stakes at hand, this explanation might include: the data used; how that data was verified for completeness and comprehensiveness for the specific problem; each of the algorithms and analytic processes the data was subjected to, along with their known limitations; independent triangulation through other datasets and methodologies, including how those were determined to be independent; and an overview of the computational and human verification and validation steps.
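The kind of workflow documentation described above can be sketched as a simple provenance record attached to an analytic result. This is a minimal illustration only; the field names and example values are assumptions for this sketch, not any agency's actual schema:

```python
from dataclasses import dataclass

@dataclass
class AnalyticProvenance:
    """Illustrative record of how an analytic result was produced."""
    datasets: list[str]             # data sources used
    completeness_checks: list[str]  # how each source was verified for this question
    methods: list[str]              # algorithms and analytic processes applied
    known_limitations: list[str]    # documented weaknesses of each method
    triangulation: list[str]        # independent datasets/methods used to cross-check
    validation_steps: list[str]     # computational and human verification performed

    def is_triangulated(self) -> bool:
        # The result has been cross-checked by at least one independent method
        return len(self.triangulation) > 0

# Example: documenting a simple count-based assessment
record = AnalyticProvenance(
    datasets=["head-of-state social media timeline"],
    completeness_checks=["compared against official speech transcripts"],
    methods=["keyword count"],
    known_limitations=["keyword matching misses allusion and sarcasm"],
    triangulation=["manual review of a random sample of posts"],
    validation_steps=["count re-run with an independent tool"],
)
print(record.is_triangulated())  # True
```

The point of such a record is not the code itself but the discipline it enforces: a result with empty fields is visibly incomplete before it ever reaches a decision-maker.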

A brief update on how many tweets a head of state sent over the past week condemning a trade policy might include simply the tool used, whether the figure represents an exact count or an estimate, and a notation that the given tool has been validated for departmental use. A narrative assessment determining that a head of state’s social media posts suggest they are preparing for military action that could require US intervention would be accompanied by far greater detail justifying its conclusions.

Regardless of the level of detail in any given assessment, the bottom line is that Washington places a heavy emphasis not just on analytic results, but on how those results were arrived at.

In contrast, while digital lab books like Jupyter Notebooks are increasingly common in the data science community, a great deal of work, especially at the bleeding edge, is still accomplished through decentralized pipelines of tools and packages scattered across arrays of machines and run over weeks or months, in which the final process of discovery may be impossible to fully recount.

Even with a lab notebook, data scientists frequently reach for the dataset that is easiest at hand, rather than asking the hard questions of whether a given dataset is well-aligned with the question they wish to ask of it. Twitter may be the easiest place to source a particular head of state’s statements, but are there also other speeches that they do not live-tweet that may be far more consequential in terms of detail and allusion? Such questions lie at the heart of intelligence assessments, in which sourcing is everything, while in the commercial world they are far less examined.

In fact, one of the leading proponents of the need for explainable deep learning is the intelligence community, which must restore auditability and verification to its quantitative pipelines.

AI bias exists today in part because Silicon Valley has embraced black box analytics, in which machines produce extraordinary results without even their own developers understanding how they arrive at their findings. So long as these algorithms generate revenue, this tradeoff has historically been acceptable to the Valley, while in Washington such a leap of faith is much harder to accept.

In the end, while most of the attention over the past decade has been on forcing Silicon Valley’s innovation mindset onto the bureaucracy of Washington, there is much the Valley could learn from Washington’s prioritization of tradecraft over technology.

