Google’s TensorFlow and the “democratization” of artificial intelligence: what does it mean for education?

By Carlo Perrotta

Featured image by Daniel Friedman

This post is part of a series on machine learning (applied AI) in education. The previous posts are here and here. In this short article, @carloper shares some of his current research on the emerging AI infrastructures in the predictive analytics sector, considering the implications for education.

One of the most important developments in the predictive analytics industry is the rise of open source tools and readily available computational power, which aim to make machine learning a more viable proposition for organisations and even individuals. As Kate Crawford and Vladan Joler note in their impressive study of Amazon’s Alexa, the idea that AI may be becoming “democratised” as a result of the convenience and accessibility of off-the-shelf tools is false: the underlying logics of these technologies, and the datasets used to train them, remain controlled by a small number of actors and corporations.

The rise of these AI infrastructures can be explained, at least in part, by the intrinsic flexibility and domain-agnosticism of machine learning and, in particular, deep learning algorithms: the same properties that make them portable and scalable also make them dependent on large training datasets and appropriate hardware resources. A phenomenon has thus been gathering pace over the past few years: the appearance of general-purpose predictive infrastructures, accompanied by a growing interest among technology companies in both internal and outsourced AI frameworks.

A cursory glance at the job openings published on LinkedIn between November and December 2018 (always a helpful way to take the pulse of the fast-moving technology industry) reveals a growing interest in machine learning in the upper echelons of the platform economy, and confirms that the development of digital infrastructures is a social process as much as an engineering one. For instance, Uber is seeking someone to “evangelize” ML across the organisation, while ensuring that internal teams are successfully deploying this technology. Spotify is looking to integrate open source and in-house machine learning and data processing components, but also appears interested in competitive intelligence, monitoring how competitors are increasingly turning their attention to “both structured and unstructured content such as text, audio, images and behavioural data”.

Folks at Amazon are so enamoured with “Alexa’s magical experience”  that they want to extend it by developing a “third party ecosystem”, where their ML stack will be made open to developers who can then create “state-of-the-art abstractions to deliver immersive voice experiences with ease”. This means that an undisclosed number of economic actors will be given access to Alexa’s trained algorithms to “personalise” experiences and products that rely on voice-based interaction.

One of the most interesting developments, with several educational implications, is Google’s TensorFlow, which was released in 2015 under an open source licence and quickly became a leader in a lively marketplace. TensorFlow is a library for deep learning computations, used mostly through its Python interface. Its main selling point is that it can scale from handheld devices up to distributed server farms. While TensorFlow is flexible enough to let a model be tested on a local computer or even a smartphone, once the model shows sufficient promise it is typically retrained on much larger datasets, which requires far more computational power. TensorFlow makes this possible by parallelising computation across numerous Graphics Processing Units (GPUs), which offer much better performance for this kind of workload than Central Processing Units (CPUs) but are considerably more power-hungry. This parallel use of processing units (GPUs and CPUs) usually takes place on distant servers, in data centres that collectively draw power at the rate of gigawatts. The speed with which these distributed frameworks can be mobilised, coupled with their portability, means that prediction can be propagated across infrastructures, which in turn extend their ramifications into many contexts and, inevitably, into the “full stack” of the natural environment with its limited, increasingly depleted, resources.
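To give a sense of what this portability looks like in practice, here is a minimal, hypothetical sketch using TensorFlow’s Keras API: the same few lines of Python train a toy model across whatever GPUs happen to be available locally, and then convert it for deployment on a handheld device via TensorFlow Lite. The model and data are placeholders rather than a real application.

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model across all locally available GPUs
# (falling back to the CPU), so the same script runs on a laptop or a
# multi-GPU server without modification.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder data: in practice a "promising" model would be retrained on a
# far larger dataset, which is where the demand for computation comes from.
x = np.random.rand(1024, 20).astype("float32")
y = np.random.randint(0, 2, size=(1024, 1)).astype("float32")
model.fit(x, y, epochs=2, batch_size=64)

# The trained model can then be shrunk for handheld devices with TensorFlow Lite.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```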

[Diagram: a typical distributed cloud-based TensorFlow architecture, according to Google.]
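As an illustration of what such an architecture involves at the level of code, the sketch below configures a multi-worker TensorFlow job. It is purely hypothetical: the host names are invented, and in a real cluster every worker listed in TF_CONFIG would run the same script.

```python
import json
import os
import tensorflow as tf

# Hypothetical cluster specification of the kind a cloud-based distributed
# TensorFlow deployment relies on. The host names here are invented.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "worker": ["worker0.example.internal:2222", "worker1.example.internal:2222"],
    },
    "task": {"type": "worker", "index": 0},  # this process's role in the cluster
})

# MultiWorkerMirroredStrategy keeps model replicas synchronised across the
# machines named above, each of which may hold several GPUs. Run in isolation,
# this process simply waits for its peers to come online.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
```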

What about education?

It is certainly premature to assume that there will be widespread adoption of deep learning frameworks in this sector. At the same time, higher education systems (e.g. in the UK and Australia) have been investing for some time in large-scale data infrastructures where various forms of student data will converge. One could argue that, in an increasingly marketised HE sector, the interest of administrators and university leaders in these data appears less fuelled by a desire to improve pedagogy than by the promise of prediction of (and control over) key performance indicators: throughput, dropout, satisfaction, failure and so forth. Indeed, one of the current trends in educational data mining is an increasing reliance on heterogeneous signals that have demonstrable predictive power in relation to the aforementioned KPIs: library card swipes, LMS logins, access to learning materials, campus Wi-Fi logins and so forth. It is very likely that, at some point, these large datasets will be used to train predictive models, and the availability of powerful off-the-shelf solutions will be hard to ignore.
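To make the point concrete, the sketch below shows how such heterogeneous signals might be fed into an off-the-shelf predictive model. Everything here is hypothetical: the feature names, the data and the dropout labels are invented for illustration and do not come from any real institutional dataset.

```python
import numpy as np
import tensorflow as tf

# Hypothetical weekly counts per student: library card swipes, LMS logins,
# learning material accesses and campus Wi-Fi logins, plus an invented
# "dropped out" label the model is asked to predict.
rng = np.random.default_rng(seed=0)
signals = rng.poisson(lam=[3, 10, 6, 12], size=(500, 4)).astype("float32")
dropped_out = rng.integers(0, 2, size=(500, 1)).astype("float32")

normalise = tf.keras.layers.Normalization()
normalise.adapt(signals)  # learn the scale of each signal

model = tf.keras.Sequential([
    normalise,
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # predicted dropout risk
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(signals, dropped_out, epochs=5, batch_size=32, validation_split=0.2)
```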

Moreover, Google has made TensorFlow an integral part of its cloud-based productivity suite (G Suite), which has surpassed Apple and Microsoft as the classroom technology of choice in Australia and New Zealand, among several other countries. As students around the world do their homework or interact with each other using Gmail and Google Docs, deep learning algorithms work under the bonnet to minimise “time spent working on tasks that do not directly relate to creative output” – this is what Google calls “overhead”.

One might even suggest that a more alarming trend of surveillance is already visible in this space, as Safety Management Platforms (SMPs) become integrated into educational applications like G Suite. These platforms claim to be able to find signs of antisocial behaviour in the mass of text that students generate daily as they engage with software at school. Gaggle is an SMP provider that promises full integration with G Suite for Education and Microsoft Office 365, as well as learning management systems such as Canvas. Its claims include the ability to “gain insight into student behaviour”, “sort incidents by severity, school or tool” and “create a positive school climate”. Gaggle has developed an algorithm that analyses the “sentiment” of students’ written outputs and even their social media posts, automatically pinging IT administrators when warning signs are detected. It is not too far-fetched to envisage some kind of synergy developing between TensorFlow’s open source libraries and the internal AI frameworks of a company like Gaggle. Without doubt, the ethical ramifications of the growth of AI infrastructures will require the full attention of educational researchers and practitioners.
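For illustration only, the sketch below shows the general shape of a text classifier of the kind an SMP could assemble on top of an open source library such as TensorFlow. It is emphatically not Gaggle’s actual method: the messages, labels and flagging threshold are all invented.

```python
import numpy as np
import tensorflow as tf

# Invented example messages and labels (1 = flag for review, 0 = ignore).
texts = tf.constant([
    "hypothetical student message about homework",
    "another hypothetical message a platform might flag",
])
labels = np.array([[0.0], [1.0]], dtype="float32")

# Turn raw text into integer token ids.
vectorise = tf.keras.layers.TextVectorization(max_tokens=1000, output_sequence_length=16)
vectorise.adapt(texts)
token_ids = vectorise(texts)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=1000, output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of flagging
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(token_ids, labels, epochs=1)

# A score above some chosen threshold would trigger the "ping" to an administrator.
score = model.predict(vectorise(tf.constant(["a new hypothetical message"])))
print("flag" if score[0, 0] > 0.5 else "ignore")
```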

Carlo Perrotta | January 2019.