Page 110 - Carbon Capitalism and Communication Confronting Climate Crisis

8  THE NEXT INTERNET  97

            university research facilities (a difference in project time alone of between 6
            weeks for the cloud and 6 months for the old Internet) (Stein et al. 2015).
              The cloud is more like a data factory than a storage warehouse because it
            processes data to produce services, such as marketing, accounting and
            customer relations, as well as legal and financial services. That makes companies
            and government agencies partners in service provision with the companies
            that own and manage data centres. It also marks a major step toward
            creating a centralized, globalized and fully commercial Internet that
            resembles giant water and electric utilities. The major cloud providers are
            almost all large corporations led by Amazon, by far the world’s largest
            cloud business. It is trailed by Microsoft, IBM and Google. Through
            service contracts, most of these are well integrated into the military,
            intelligence and surveillance arms of government. Amazon, for example,
            provides cloud computing storage and services for both the CIA (through
            a $600 million contract) and the NSA. Meanwhile, government agencies
            demanding heightened levels of security are building their own cloud
            facilities, including the NSA, which in 2015 opened one of the world’s
            largest, in a remote mountain location in Utah.
              Big data analytics makes up the second leg of the Next Internet. In spite
            of the proliferation of fancy new titles, like data science professional, that
            fuel enthusiasm, there is very little that a social scientist would find novel in
            the big data approach. It generally involves taking a large, often massive
            and almost always quantitative data set, and examining the specific ways the
            data do or do not cohere or correlate, in order to draw conclusions about
            current behaviour and attitudes and go on to make predictions. The aim is
            to produce algorithms or a set of rules that specify conclusions to be drawn
            or actions to be taken under specific conditions.
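
              The correlational workflow described above can be sketched in a few
            lines of code. The sketch below is purely illustrative: the variables
            (site activity and ad clicks), the data values and the decision
            threshold are all invented, and the "algorithm" it produces is just
            the kind of rule-under-conditions the text describes.

            ```python
            # A minimal sketch of the big data approach described above: take a
            # quantitative data set, measure how strongly two variables correlate,
            # and turn the result into a simple decision rule. All data and
            # thresholds here are hypothetical.

            from math import sqrt

            def pearson(xs, ys):
                """Pearson correlation coefficient between two equal-length series."""
                n = len(xs)
                mx, my = sum(xs) / n, sum(ys) / n
                cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                sx = sqrt(sum((x - mx) ** 2 for x in xs))
                sy = sqrt(sum((y - my) ** 2 for y in ys))
                return cov / (sx * sy)

            # Hypothetical data: hours of site activity vs. ad clicks per user.
            activity = [1, 2, 3, 4, 5, 6]
            clicks = [0, 1, 1, 2, 3, 3]

            r = pearson(activity, clicks)  # strong positive correlation here

            # The resulting 'algorithm': a rule specifying an action to be taken
            # under specific conditions, derived from the observed correlation.
            def target_user(hours, threshold=3):
                """Flag a user for targeted ads if the correlation is strong
                and the user's activity exceeds the threshold."""
                return r > 0.8 and hours >= threshold
            ```

            Note that the rule predicts purely from co-occurrence; nothing in it
            captures context, causation or the qualitative meaning of the
            behaviour, which is precisely the limitation raised below.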
              Facebook, for example, takes the data generated by its 1.7 billion or so
            users and relates the ‘likes’ associated with posts about everything from
            celebrities, companies and politicians to views about society, products and,
            of course, cats. These data enable the company to develop profiles of its
            subscribers, which Facebook sells to marketers who target users with
            customized ads sent to their Facebook pages, a practice that Gandy
            (1993), writing in the pre-social media age, called the panoptic
            sort. Google does the same
            for search topics as well as for the content of Gmail, and Amazon creates
            profiles of its users based on searches and purchases on its site. Given the
            limitations of quantitative correlational analysis, especially the absence of
            historical context, theory and subjectivity (qualitative data is ignored or
            poorly translated into numbers), such analysis is not always accurate and