Mechanical induction and heuristics in AI: upon opening the black box and finding it full of guesswork

This post, published to coincide with the launch of the Monash Data Futures Institute, is part of a series on AI in education. Previous entries are here, here and here.

Carlo Perrotta, May 2019

Featured image by Daniel Friedman

Milind Tambe is a world authority on AI, currently holding the prestigious title of Gordon McKay Professor of Computer Science at Harvard University. He was the founding co-director of the USC Center for Artificial Intelligence in Society and is currently the director of the Center for Research on Computation and Society (CRCS) at Harvard. Recently, Prof Tambe visited Monash University for a series of talks about “AI for social good”, the highlight being the inaugural lecture on 2 May 2019 hosted by the new Monash Data Futures Institute (DFI), which I was lucky enough to attend. Prof Tambe’s visit was not an isolated occurrence at Monash. My institution, like many others, is investing in AI as it tries to gain a foothold in what is widely regarded as an area of great economic and social importance, set to grow exponentially over the coming decades. Cross-faculty initiatives have been launched, and the buzz around the potential and pitfalls of predictive modelling, autonomous agents and algorithmic personalisation is becoming increasingly audible across the university.

Tambe’s work is recognised as an example of how AI can be used to improve the world. He leads a team of exceptional researchers who are tackling, through a series of interconnected studies, big challenges such as anti-terror security strategies in airports, illegal poaching in Africa, and the prevention of HIV epidemics among homeless youth. This ambitious research programme is driven by a pragmatic overarching goal: the optimisation of limited intervention resources. As Tambe reasoned, we live in a world where risks are many and multiplying, while the resources allocated to tackle them are limited and, in many cases, dwindling. Airports can be huge, overcrowded spaces, and the ability of security forces to monitor every vulnerable area can be outmanoeuvred by the tactics of a few murderous agents. Conservation areas in a country like Uganda extend over miles of rough, sparsely populated terrain, which makes the work of anti-poaching rangers very difficult. The social factors that influence decisions about personal health among homeless youth are complex and multidimensional, and social workers struggle to cope with such complexity.

Machine learning is proving a powerful ally in these challenges. It all starts, according to Tambe, with an awareness of the weakness and fallibility of human nature: “the weakness of our assumptions” and of “human schedules”, as he put it. Our understanding of the complex situations in which we live is limited, and the constraints imposed on our agency are manifold. For example, rational attempts to schedule crucial activities such as routine patrols and surveillance procedures can be read and exploited, because our assumptions about the world are systematically weakened by its abundant uncertainty. Under such conditions, our best hope is to “satisfice”, i.e. to settle for a good enough solution and constantly tweak it, rather than pursue an unattainably perfect one. This focus on satisficing and constant tweaking lies at the core of the entire machine learning project, manifest in its overriding emphasis on optimisation as an aspirational goal that is never quite attainable.

In the airport security case described by Tambe, constant optimisation meant a computational process that sought to engineer unpredictability and randomness into the surveillance schedules of security agents (human and non-human), in order to confound and wrong-foot the “bad guys”. The process is based on game theory and, according to Tambe, is the first successful algorithmic application of this approach in a real-life scenario. Game theory is a model of human behaviour rooted in economics and rational choice philosophy. It posits that many real-life interactions can be conceptualised as “games” between rational agents who seek to maximise utility, either for themselves through individualistic competition, or for other players as well, thus achieving some sort of equilibrium. Game theory has been critiqued for its tendency to trivialise complex social phenomena and for its ideological, unproven assumption that people are benefit-maximising rational agents. Its development and testing involve abstract modelling, logical-mathematical proofs and, sometimes, actual experiments. Unsurprisingly, it is a popular paradigm in military and intelligence strategy, where there is a long-running tendency to see the world as an arena of opposing interests, often with very high stakes and life-or-death consequences.

In Tambe’s case study, game theory was used to frame antagonistic relations between “adversaries” (terrorists) and “defenders” (security forces), modelling through data the possible variations in the latter’s surveillance and patrolling schedules that could be exploited by the former. The data was then used to train a machine learning algorithm, which worked its “magic” by randomising schedules so that it would be impossible, or at least much harder, for adversaries to anticipate the defenders’ movements and carry out a terror attack.
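
To make the underlying idea concrete, here is a minimal, hypothetical sketch (mine, not Tambe’s system, which rests on far richer models and constraints): it spreads a single patrol resource over a handful of targets so that no target is a predictably safe bet for an adversary who has learned the schedule. It assumes a deliberately simplified zero-sum setting in which an attacker picks whichever target offers the best expected payoff.

```python
# Illustrative sketch only: randomising one patrol resource over several
# targets so that no single target is a predictably "best" choice for an
# adversary who observes the schedule. Assumes a simplified zero-sum game;
# real security games involve many resources and scheduling constraints.

def randomised_coverage(values, resources=1.0, iters=60):
    """Coverage probabilities that minimise the attacker's best expected payoff.

    values[i]  : value of target i to the attacker if it is left uncovered
    resources  : expected number of targets the defender can cover per round
    Returns a list of coverage probabilities (the mixed patrol strategy).
    """
    lo, hi = 0.0, max(values)          # bounds on the attacker's best payoff t
    for _ in range(iters):             # binary search for the equalising payoff
        t = (lo + hi) / 2.0
        # covering target i with probability 1 - t/v_i makes its payoff equal t
        needed = sum(max(0.0, 1.0 - t / v) for v in values)
        if needed > resources:
            lo = t                     # not enough capacity: accept a higher payoff
        else:
            hi = t
    t = hi
    return [max(0.0, 1.0 - t / v) for v in values]


if __name__ == "__main__":
    targets = [10.0, 6.0, 3.0]                     # hypothetical target values
    coverage = randomised_coverage(targets)
    print([round(c, 2) for c in coverage])         # a mixed patrol schedule
```

Run on three targets of unequal value, the sketch concentrates most of the coverage probability on the two most valuable ones: the schedule is unpredictable on any given day, but calibrated to what an adversary stands to gain, which is the intuition behind randomised security schedules.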

At the end of the talk, Tambe had to field the inevitable question of evaluation: how do you know that this is working? After all, the highly sensitive nature of the problems Tambe and his team are tackling makes it nigh-on impossible to undertake systematic empirical validation, which means that the evaluation of AI in these situations must rely on proxy measures: expert evaluations carried out by humans, or computer simulations. While these forms of evaluation can provide valuable indications, they are not wholly reliable and they rarely consider unintended consequences. The positive aspects on display in Tambe’s work are undeniable, but by closing off some opportunities for adversaries to exploit human weaknesses and limited resources, AI manufactures, or at least paves the way for, new possibilities for exploitation and abuse. Game theory is, after all, predicated on the assumption that players may eventually reach some kind of equilibrium since:

  1. all agents are entirely rational;
  2. each agent knows, in principle, that the other is entirely rational, and therefore
  3. each agent could come into possession of advantageous information to get the upper hand. 

Could the AI methods that randomise and optimise the schedules of security forces and wildlife rangers be used by terrorists and poachers in broadly similar ways, perhaps using off-the-shelf AI frameworks like TensorFlow, where neural networks can be trained and tested on affordable, widely marketed computing resources such as gaming GPUs and even smartphones? Could these methods be used by an authoritarian state to wrong-foot various forms of dissent and political disobedience? Could they be embraced by the surveillance industry, where many scenarios can be simplified as antagonistic games between actors? In education, anti-cheating companies like Turnitin could probably find useful applications of AI-powered game theory.

It is not my intention to dismiss the validation of game-theoretical approaches by proxy, through expert evaluations and simulations, although an analysis of the unintended consequences and misuses of these systems should, arguably, be an important requirement of any evaluation. I am more interested in the heuristic aspects that seem to pervade the entire process, i.e. the fact that underneath the “magic” of AI there are often partially or entirely unproven assumptions about how the world works, or should work, shaped by social factors and cultural preferences. There are, in other words, pre-existing frames of reference which inform the work of data science, computer science, decision science, cognitive science and all the other sciences directly or indirectly implicated in the field of applied AI. In Tambe’s cases, the choice of game theory as the underlying frame of reference is not a coincidence but a clear design choice informed by heuristic assumptions, which might be absolutely correct (as heuristics often are), but are nonetheless based on an arbitrary generalisation process informed by an uncertain, partially unproven worldview. On the one hand, game theory seems perfectly (heuristically) suited to the antagonistic problems of security and surveillance; on the other, there is no denying its alignment with the preferences of military and intelligence officials, whose mindsets incline them to give credence to such an approach despite the absence of “hard” evidence about its effectiveness.

There are two interwoven dynamics at work here, feeding off each other and implicated in all forms of applied AI, including the educational ones. The first is concerned with the ordering, classification and quantification of reality according to pre-existing frames of reference baked into the algorithms and the datasets they are trained on. The second, more subtle dynamic is concerned with mechanical induction as an epistemological paradigm: the automated inference process that moves from those limited, partially unproven and biased observations towards generalisations and “knowledge”, which is then used as a basis for rational action and decision making.

These two interwoven dynamics delineate a scenario where critical analysis becomes possible.

The conversations I am currently having about AI in education revolve around the need to problematise this emerging phenomenon in a way that education researchers can productively engage with. I think it is possible to articulate a proposal in two parts. The first is a critique of quantification and classification. The second is a critique of mechanical induction as a framework to guide inferences about the world. I will describe the significance of both.

Critiquing AI in education: sociologies of quantification, classification and prediction

Attempts to “sociologise” big data are now relatively common. These accounts have emerged in the social sciences at the intersection of science and technology studies, infrastructure and platform studies, and the digital humanities. Notwithstanding theoretical and methodological differences, these contributions share an important assumption: they understand data as the result of a process of quantification and categorisation shaped by politics and culture: “what the categories are, what belongs in a category, and who decides how to implement these categories in practice, are all powerful assertions about how things are and are supposed to be” (Bowker and Star, in Gillespie, 2014: 171). It follows that by clarifying the ideological preferences and political agendas (that is, the biases) that underpin algorithms, platforms and datasets, we will be able to bring these technologies into critical view. It is, in other words, a matter of opening up black boxes in order to ask questions about the conflicts of interest, the monopolies, the exploitative tendencies and the various types of prejudice baked into data and computational methods.

In his book “The Black Box Society”, Frank Pasquale interrogates the “neat and compact presentations of a messy and sprawling world” concocted by machine learning in the digital news media landscape. His questions are applicable to all forms of algorithmic personalisation, ranking and filtering, no matter the domain:

“Was a story selected for its statistical prominence among news organs, or because a personalization algorithm picked it out for us? If the selection was based on statistics, then which statistics—the number of mentions of the story, the authority of the news outlets promoting it, or something else entirely?”

This tendency towards methodological secrecy is condemned by Pasquale (and many others) as a democratic risk, as the large internet monopolies regularly escape the pressures for transparency and accountability that would render them more open to public scrutiny. This position assumes that it is possible to dispel the underlying opacity of algorithms and uncover some kind of “truth”, thus holding powerful people and groups more accountable. While morally unassailable, this view is also epistemologically problematic. Indeed, some scholars argue that calls to make algorithms more open and transparent rest on a misguided assumption of truth as a “revelation”, rather than as the outcome of relations and dialectics.

Looking at algorithms relationally and dialectically means holding together several analytic foci, in order to examine how these technologies are formed through (and therefore need to be understood in terms of) links to other computational forms and analytic entities, with no clear “centre” to reveal. Algorithms are, in other words, “caught up in webs of relations so complex that it is impossible (and perhaps methodologically undesirable) to isolate an object and study it apart from its surroundings.” (Kitchin 2014: 24).

Here is an educational example: predictive modelling in education (i.e. “personalised learning”) is one of the main current applications of AIEd and an area of great interest examined by several critically minded writers and scholars. How does one research this topic in a relational way? First, I suggest, by identifying several analytic foci:

  1. The individual online platforms that generate, analyse, and ultimately aim to extract value from student data. In education, these can be learning management systems (LMSs), MOOCs, and personalised learning platforms like Khan Academy or Knewton.
  2. The individual computational methods gathered in the sprawling body of machine learning as a discipline: supervised and unsupervised learning, decision trees, neural networks and so on.
  3. The interface between computation and the explicit or implicit theories of learning that underpin personalisation. That is, the fact that algorithmic personalisation requires a model of human learning amenable to decomposition, operationalisation and, ultimately, measurement. This model can be based on an explicit theoretical preference, such as “cognitive presence” or “cognitive load”, or on an implicit allegiance to a disciplinary consensus about how a particular type of knowledge (e.g. secondary school algebra) is acquired and demonstrated in formal school settings. The key point here is that personalisation algorithms in education never operate in a complete theoretical vacuum, even when learning theories are not explicitly acknowledged. Indeed, the underlying, implicit (heuristic?) theories in a personalised learning platform can be about individual agency and responsibility. For example, in most personalised learning systems learning is conceptualised as a binary and individualistic pursuit: a lone student either knows something or does not (there are no degrees), and her learning can only develop along a predefined, rational path of prerequisite achievements leading to expected next steps, without divergences except those allowed by design. In some cases, like the ASSISTments Intelligent Tutoring System (widely used in the US), this “learning” explicitly reflects the pass/fail binary of high-stakes testing, which is then reinforced computationally by relying on probabilistic models that can only predict “knowledge states” according to a binary logic of 1 (knowledge is present) or 0 (knowledge is absent). A minimal sketch of such a model is given after this list.
  4. Finally, an analyst might focus on the political economies that are emerging as predictive knowledge in education becomes valuable and is therefore absorbed into the traditional manoeuvres of personal and corporate gain, benefiting some but not others. Here, a study could consider the problematic aspects of data science as a “discourse” in which predictive performance is valued as an indication of personal and professional accomplishment; or it could investigate how commercial entities extract value from predictions: the above-mentioned platforms, large service providers like Google with its off-the-shelf predictive tools and frameworks, but also institutional and political actors like networks of universities and government bodies developing education policies.
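
As anticipated in point 3, here is a minimal, purely illustrative sketch of a Bayesian Knowledge Tracing-style update, one common way of implementing a probabilistic model in which a student’s knowledge of a skill is a binary latent state. The parameter values are hypothetical and not drawn from any particular platform.

```python
# A minimal sketch of a Bayesian Knowledge Tracing (BKT)-style update: the
# student's "knowledge state" for a skill is a binary latent variable
# (known / not known), updated after each observed answer. Parameter values
# are illustrative only.

def bkt_update(p_known, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    """Return the updated probability that the skill is in the 'known' state."""
    if correct:
        evidence = p_known * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_known) * p_guess)
    else:
        evidence = p_known * p_slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
    # learning transition: an unknown skill may become known after practice
    return posterior + (1 - posterior) * p_learn

p = 0.3                                  # prior belief that the skill is known
for answer in [True, False, True, True]: # a hypothetical sequence of attempts
    p = bkt_update(p, answer)
    print(round(p, 3))
```

Whatever the sequence of answers, the model can only move a single probability up or down: knowledge is either present or absent, with no room for the partial, social or divergent forms of understanding discussed above.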

All of these aspects are entangled in complex relations that, together, shape contextual manifestations of the personalised learning phenomenon. The challenge for a relational critique of AI in education is to consider several of these analytic foci at the same time, examining the interactions and tensions between them. In the example above, a mutually influencing relation exists between two radically different forms of learning: the computational process of learning from data, and the human one of learning in a structured knowledge domain, e.g. mathematics. In particular, these relations will involve:

a) The learning enacted by users of a specific online platform (e.g. Khan Academy) who follow individual progression paths according to structured knowledge models, derived from a disciplinary consensus about how subjects like algebra and geometry should be taught and learnt

And

b) The learning enacted by a specific computational method (e.g. a neural network) as it seeks patterns in the student performance data from the same platform, in order to predict future knowledge states in algebra and geometry, and the specific skills that would be required to improve performance. 
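
For concreteness, the following is a purely hypothetical sketch of what (b) can look like in code: a small neural network, trained on fabricated interaction features, that outputs the probability that a student’s next attempt at a skill will be correct. The features, data and architecture are invented for illustration and do not reproduce any actual platform’s model.

```python
# Purely illustrative: a small neural network "learning" from fabricated
# student interaction logs to predict whether the next attempt will be correct.
import torch
import torch.nn as nn

torch.manual_seed(0)

# hypothetical features per logged interaction: prior attempts, success rate,
# time on task, item difficulty -> predict P(next answer correct)
X = torch.rand(256, 4)                      # fabricated interaction log
y = (X[:, 1] > 0.5).float().unsqueeze(1)    # fabricated "next answer correct" labels

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCELoss()

for _ in range(200):                        # look for patterns in the log
    optimiser.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimiser.step()

with torch.no_grad():
    print(model(torch.rand(1, 4)).item())   # predicted probability for a new interaction
```

The point of the sketch is not the architecture but the epistemic move it encodes: patterns in logged behaviour are treated as sufficient grounds for predicting what a student “knows” next.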

Indeed, some recent studies have indirectly examined this relationship. On the surface, it appears to be a tension between machine autonomy and human-led design: can computers know as much as, or even more than, teachers and maths subject specialists? Or, put differently, can a machine “discover” and then predict new ways to learn algebra? Below the surface, there is a tension between epistemologies: an inductive one that claims it is possible to discover new knowledge about learning from student data, and a deductive one which reflects a narrow and prescriptive disciplinary consensus about learning algebra as an individualistic pursuit of mastery. Both are problematic. However, the problem lies in the tension between the two, rather than in just one or the other. In essence, it is a competitive tension between predictive paradigms vying for supremacy, evident in the way data scientists and online learning platforms chase predictive knowledge as an overriding epistemological and corporate goal. In such a tension, student data is just the battleground (and the fuel) for skirmishes and competitions between data scientists, platforms and computational methods. “Learning” (computational and human) is thus reconfigured and forced to fit into the constrained spaces afforded by this tension, whilst student data, reduced to granular transactions between inputs and outputs, becomes an inert “body of evidence”: a site for experimentation and dissection, sometimes reanimated through the computational equivalent of bioelectric galvanisation.


Data scientists, online learning platforms and student data: a visual analogy

Mechanical induction as a philosophical problem

In the previous section, I described attempts to see data and algorithms through a sociological lens focused on the political and ideological substrate of technologies. Now, I would like to suggest that a similar critical lens should be applied to another order of phenomena: the probabilistic and heuristic models of what counts as “rational behaviour”, which underpin predictive modelling to varying degrees. My thesis is that bringing these forms of probabilistic and/or heuristic inferencing into view will help us cast a critical light on the uncertainty that permeates applied AI at all levels: from the choice of a frame of reference (e.g. game theory) as a general explanation of the problem under study, to the guesswork that occurs when models are being optimised. Indeed, the role of probabilistic and heuristic forms of rationality in the process of knowledge production enacted through and within big data algorithms (and increasingly AI) is so pervasive that it is somewhat surprising they have not been subjected to a more vigorous critical treatment in philosophical and sociological terms.

The crux of the problem is the relationship between Bayesianism on the one side, and mechanistic models of behaviour and decision making on the other.  

Bayesianism is a philosophical stance rooted in a probabilistic understanding of the world. Its origins are in the statistical theorem from which it borrows its name (Bayes’ theorem), but as many “isms” do, it builds on a body of valuable (in this case statistical) knowledge to develop an ideological edifice and a set of normative criteria to govern social life. Some have called this phenomenon “Bayesian fundamentalism”. Bayes’ theorem and related methods provide statistical approaches for revising the probability of an event in light of prior observations, the likelihood that they are accurate, and the degree of subjective belief in them. In turn, these approaches assume a model of rational behaviour applicable to humans and machines alike, according to which it is possible to mathematically account for prior probabilities and degrees of subjective belief in order to “learn from experience”, and to use this learning as a basis for future action. This Bayesian “mindset” sits at the very centre of the entire machine learning endeavour.
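
For reference, the theorem at the root of all this fits on one line. In the standard notation, where H is a hypothesis and E the observed evidence:

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

The posterior belief in H after seeing E is simply the prior belief P(H) reweighted by how well H predicts the evidence; machine learning systems apply this update rule, or approximations of it, at scale.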

Bayes’ framework has experienced tremendous success thanks to mathematical advancements in the past few decades, but it is affected by a tendency to favour internal mathematical consistency at the expense of alternative explanations. As a normative account of rationality, Bayesianism engenders a narrow reliance on mathematical and computational proofs in small probabilistic worlds, where “rational behaviour” is conflated with “rational decisions derived mathematically from prior probabilities”.

Proponents of mechanistic models of behaviour seek to moderate this overreliance on rational estimation and mathematical proofs, and cite plentiful empirical evidence showing that agents (human and non-human) behave according to adaptive mechanisms that are difficult to treat statistically but have a researchable biological and neurological substrate. Crucially, these mechanisms are often nothing more than heuristic strategies that can outperform more demanding and sophisticated statistical estimation models, but only in environments that exhibit low levels of variation.

Bayesian and mechanistic/adaptive views of rational behaviour are driving the big AI debates currently occurring at the interface of computer science, statistics and cognitive science. They are entangled in a constructive and mutually influencing tension that is pushing innovation and experimentation in algorithm design and model optimisation, whilst shaping how the field conceptualises the pivotal notion of computational intelligence. Both, however, share one crucial tenet: they assume we all live in “small worlds”. The notion of the small world was introduced by Leonard Savage in his seminal book The Foundations of Statistics. Savage proposes a broad theoretical distinction between large worlds, in which “grand decisions” (e.g. the decision of how to live one’s life) are too complex and multidimensional to be contained within a narrow frame of rational behaviour, and small worlds, which represent subsets of large worlds, or more isolated decision situations. A small world, in Savage’s words, is one which contains a limited selection of the objects “about which a person is concerned, and a model of what that person might be uncertain about”. Furthermore, each object and the degrees of uncertainty around it will only “admit a certain class of descriptions that might thinkably apply to it” (p. 8). A small world is one where, when people face uncertainty, they can always “take a look” before leaping; a large world is one where some conditions are unknowable and where, as the saying goes, bridges can only be crossed when one comes to them. Further clarifying Savage’s distinction, the mathematician Ken Binmore states that:

“Bayesianism is understood [as] the philosophical principle that Bayesian methods are always appropriate in all decision problems, regardless of whether the relevant set of states in the relevant world is large or small. For example, the world in which financial economics is set is obviously large in Savage’s sense, but the suggestion that there might be something questionable about the standard use of Bayesian updating in financial models is commonly greeted with incredulity or laughter.”

A rather unclear point in Savage’s original notion of a small world concerns size. While he states that small worlds are subsets of large worlds, created according to criteria of simplification and numerical (algebraic) tractability, it is reasonable to infer that their smallness is not a matter of scale but, to use modern machine learning terminology, of the algebraic tractability of their feature spaces: the n-dimensional spaces in which each of the variables used to describe a phenomenon represents a distinct coordinate. The more the features, the more the dimensions. Small worlds are not small in numerical terms (they can be enormous, with millions of data points), but the variance, collinearity and number of features needed to describe them are relatively low or, indeed, tractable. Large worlds, on the contrary, in addition to being numerically large, are characterised by a large degree of collinearity and variance and by exponentially large feature spaces, where several dimensions may not be amenable to algebraic treatment.
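
A back-of-the-envelope illustration (my own, not Savage’s) of why smallness is a matter of dimensionality rather than record counts: even with nothing but binary features, the space of possible states grows exponentially with the number of features, and a dataset of millions of rows rapidly becomes a vanishingly thin sample of it.

```python
# "Smallness" is about the tractability of the feature space, not the number
# of records: with binary features the number of distinct states a model must
# cope with doubles with every added feature, while the dataset stays fixed.

n_records = 10_000_000                    # an "enormous" dataset

for n_features in (10, 20, 30, 50):
    n_states = 2 ** n_features            # distinct configurations of binary features
    coverage = min(1.0, n_records / n_states)
    print(f"{n_features:>3} features -> {n_states:,} possible states "
          f"(data covers at most {coverage:.6%} of them)")
```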

A very similar problem plagues the use of heuristic strategies, which tend to eschew statistical estimation in favour of an adaptive approach to formulating inferences. This approach is still deeply rooted in inductivism, but on occasion it is significantly more efficient because it cares much less about variance, i.e. it is even less concerned with the variability of large worlds, while pursuing a sort of Darwinian competitive advantage that comes from successfully anticipating outcomes in a small world. If one can use a representativeness heuristic to effectively predict that a random board member in a global corporation will be a white male (a small-world decision space), why should one care about the contextual and structural factors (the large-world aspects) that make women of colour less likely to sit on corporate boards? “Your heuristic is biased”, says the critical observer. “I know”, replies the other, “that’s why it’s so effective! However, should the nature of corporate governance change, my heuristic will adapt accordingly”. Except it may not, because heuristics have found a way to withstand criticism and updating by hiding in complex predictive infrastructures made of proprietary algorithms, niche technical expertise and flawed data. These infrastructures create “worlds of artifice”, as Theodore Porter put it, which use quantification and the language of science (data science, computer science, cognitive science) to endow predictions with a manufactured veneer of objectivity. As a result, facts about the world (“there are no women of colour sitting on boards”) are not discovered or predicted, but actively reinforced and created.
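
The “biased but effective” exchange above can be made painfully literal in a few lines of code. The numbers are invented for illustration; the point is that a heuristic which always predicts the majority category scores very well on accuracy precisely because the decision space is skewed, while saying nothing about why the skew exists or whether it should persist.

```python
# A toy version of the "biased but effective" heuristic: always predicting the
# majority category looks highly accurate in a skewed decision space.
import random

random.seed(0)
# hypothetical board membership data: 'wm' (white male) heavily over-represented
population = ['wm'] * 85 + ['other'] * 15
sample = [random.choice(population) for _ in range(1000)]

heuristic_prediction = 'wm'               # the representativeness heuristic
accuracy = sum(x == heuristic_prediction for x in sample) / len(sample)
print(f"accuracy of the heuristic: {accuracy:.1%}")   # high, and entirely base-rate driven
```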

Heuristic infrastructures and the labour of truth

At the centre of the previous discussion lie several philosophical considerations which might take this article (essay?) into uncharted waters, where unresolved debates about the nature of reason, knowledge and (above all) truth can be found. These debates are, however, inevitable in the medium and long term, as the new hyper-complex technologies of quantification and prediction cannot hope to escape the “regulating forces of philosophy”, as David Berry put it. In this sense, it could be contended that truth is the main casualty of a fixation with Bayesian or adaptive/heuristic rationality in small worlds. We may hold completely different ideas about truth: it can be viewed as relational and dialectical, or it can be framed as the “essence” of things, to be revealed through empirical investigation. Whatever our philosophical inclinations, it is a topic that cannot be ignored or brushed under the carpet, as it will keep returning like the proverbial “elephant in the room”.

David Miller, a contemporary philosopher from the Popperian strand of deductivist philosophy of science, gave a rather damning assessment of the relationship between Bayesianism and truth:

“Bayesianism provides a solution to the problem of induction only by wholly abandoning interest in the battle for truth, and opting for a passivist theory of human knowledge that may roughly describe, but certainly does not explain, what happens when science is done”. In a related passage, Miller also states that “probabilists have, it seems, been enticed into thinking that, because certain truth = truth attained with certainty, it is also the case that nearly certain (probable) truth = truth attained with near certainty (probability): and hence that the collation of probable truths has something to do with the collation of truths. It has not. A statement may be probably true but not be true, and the second equation fails”.

Without getting bogged down in the quandaries of philosophical argumentation, I believe it is reasonable to contend that the rise of predictive infrastructures that seek internal consistency in small worlds is leading to “truth deficits” in the various domains in which they are being applied: policing, content moderation in social media, diagnostic prediction in healthcare and, of course, education. In all of these domains, predictive infrastructures struggle to reconcile their inner workings with societal and empirical expectations to account for what is true, as opposed to what is probable or biased. Predictive infrastructures are becoming pervasive because of the competitive advantage that comes from probable and biased predictions, which do not need to be “true”, only accurate most of the time, or even just to a sufficient degree. When they fail or break down, as they have done recently in several high-profile cases, calls for truth, accompanied by a modicum of social outrage, become louder and demand some kind of (usually vague) change.

What has the response to these calls been so far? A tendency to circumvent the issue by creating a parallel human economy of truth in algorithmic systems. A sector of outsourced labour is emerging in which companies like Cognizant provide social media platforms with disposable human eyes to make decisions about objectionable content, on the basis of corporate “sources of truth” as well as cultural ones, which reflect established forms of consensus around decency and civility (“truths” that Bayesian/adaptive algorithms struggle to comprehend). Similarly, we could describe as “labour of truth” the work carried out by underpaid AI workers who train systems to improve their accuracy: the so-called “scalers”, for example, who spend hours staring at a screen labelling pedestrians, cars and cyclists in the video feeds used to develop self-driving vehicles.

As an aside, we should probably acknowledge that this economy has an epistemic counterpart in the academic world, with the emergence of scholars and “AI ethics experts” who are competing with each other by advocating their own sociological, humanistic or philosophical versions of truth that predictive infrastructures should, somehow, take into account to moderate their biases.

Conclusion

The fact that Bayesianism and heuristic induction are not interested in truth but in degrees of plausibility is a philosophical problem that, in my opinion, has not been given due consideration. It alters the terms of political discourse by exonerating and empowering actors in very specific ways that successfully trade on abandoning the pursuit of truth as an ideal. Algorithmic systems work not because they reconstitute truth through bias and narrow interests (although they also do that), but because they are not interested in truth. It has become commonplace to argue that algorithmic methods are increasingly obscure even for those who develop and implement them, except that this opacity is by design: a necessary condition, computationally enforced, to reduce dimensionality and assist optimisation.

My own current work in this area aims to investigate the rise of “heuristic infrastructures” in education, where predictive analytics and AI are creating “truth deficits” similar to those described in this article. However, my main thesis is that educational institutions are addressing these deficits not by outsourcing labour (although that is an entirely realistic prospect), but by reconfiguring the labour and the sociality of teaching and learning, with the main consequences borne by “frontline staff” and core users: increasingly precarious and overworked lecturers and teachers, as well as students pressurised to perform according to “truth sources” that define the criteria of academic and professional success.

[to be continued]