Monday, 7 October 2019

Turning Life Into a Game


Saker and Evans' article, examining the impact of applications such as Foursquare or Dodgeball on human interaction with what is broadly referred to as "space" (essentially, our environment), provides an interesting starting point for examining wider social media use.

Concerning Dodgeball, Nicole Lee writes in Engadget: "Dodgeball, the predecessor to Foursquare that founder Dennis Crowley created in 2000 with fellow NYU student Alex Rainert. Born from the frustration of not knowing where people were partying, Dodgeball was a fairly basic service. Simply text your location to a city-specific Dodgeball email address (say, sf@dodgeball.com or ny@dodgeball.com), and friends would instantly be alerted to where you were as long as they were signed up to the service as well. Crowley called it the 'Friendster for cellphones.'" (2014)

Foursquare, which is still operational, describes itself like so: "Foursquare City Guide will lead you to the perfect spot -- anywhere in the world. Get helpful, positive tips from our trusted global community and keep track of where you've been and where you want to go, all in one place." (2019)

As the article being discussed here explains, the idea of these two apps is to motivate people to visit physical spaces via digital immersion, predicated on goals situated within the realm of social media. 

Interestingly, this piece was published in 2016 and doesn't mention Pokemon GO, which as a phenomenon was credited with doing a similar thing: using success in a digital space as a motivator to bring out those who may not typically leave their house. Unsurprisingly, Dodgeball is dead, and Pokemon GO is all but the same.

What I personally took away from this discussion is that while these games might be dead, the principal idea behind them has managed to infiltrate common social media use. Foursquare, primarily, is built on "check-ins", and on the subsequent rewarding of those check-ins, playing into people's naturally competitive nature. Modern social media usage still incorporates check-ins heavily as part of its general use, just without the explicit play component.

While these games may have succeeded, in a sense, in bringing people out to new places, it could firstly be said that this still wasn't for the right reasons: one could argue that visiting new places because you want to win the 'most new places' contest isn't exactly the point of doing so, nor is it especially healthy (see Pokemon GO). Secondly, check-in culture, which has in part birthed what we could call Instagram culture, is all about being seen somewhere rather than engaging with it, tying in nicely to the over-arching issue with a lot of social media behaviour: insincerity.

If everything we do is part of a digital contest based on being "seen" in places rather than engaging with them, I feel this may continue to lead to a somewhat meaningless and hollow social landscape.






Monday, 30 September 2019

I am living with nodes. I just have to pull back, because I am limited. Because I have nodes.


First off, I freaking hated this reading. It was extremely complicated for someone with only a surface-level understanding of the computational language used. As well as that, Franklin has a habit of writing extremely long sentences and over-explaining a lot of what is said, while never actually breaking down any of his concepts in the interest of accessibility. The reader envisioned here already has a complex understanding of how Cloud technology works, which makes all that over-explanation feel redundant. The most helpful part of the article, to me, was the beginning, where Aristophanes' play The Clouds is used to illustrate where the concept of the Cloud in computation originates. I'm aware this may be Art student bias at work.

One [small] section of this article that I did find interesting, and that actually did make some semblance of sense, was the section on "nodes" (p. 458). Nodes here refer to points of communication, such as PCs or phones. Franklin makes the claim that it is impossible for the Cloud to be completely removed from how we understand the Internet. The idea being refuted is that the Cloud aims to uncouple connectivity from node infrastructure in order to create a completely free and limitless digital space. The suggestion is that the Internet as we know it is still a tangible network because it needs these nodes to be in operation, as these points of communication are what allow the Internet to interact with users and thus serve its assumed purpose.

The belief that the Cloud embodies ubiquity in a way that would mean we no longer need these points of communication, in an elimination of hardware, is false, as explained here by Franklin:

"Where the web-type network assures the possibility of measurement and representation by counting nodes and edges, then, the cloud eliminates the representation but not the existence of these constitutive units." (p. 458)
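To make that contrast concrete, here is a minimal sketch (in Python, using a made-up toy network, not anything from Franklin's article) of what "counting nodes and edges" means for a web-type network: because the network is explicitly represented, its constitutive units can simply be enumerated.

```python
# A toy web-type network: each node is a point of communication
# (a PC, a phone, a server), each edge a connection between two of them.
nodes = {"laptop", "phone", "router", "server"}
edges = {("laptop", "router"), ("phone", "router"), ("router", "server")}

# The representation makes the network measurable:
node_count = len(nodes)  # constitutive units
edge_count = len(edges)  # connections between them

print(node_count, edge_count)  # → 4 3
```

Franklin's point, on this reading, is that the Cloud removes the representation, not the units: the laptops, phones and servers still physically exist, but the model presented to users no longer enumerates them.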

True, the Cloud presents as a type of autonomous digital organism, whereas the Internet presents as an ecosystem of many such digital organisms, but it is still a thing ultimately rooted in the physical: without any physical space to exist in or be interacted with from, it fails to be. The Cloud is and always will be a digital thing, which means it draws its life from computers, which, at the end of the day, are physical things. The evolution of computers is at this stage impossible to really envision a limit to, but I feel it is safe to say that neither they, nor any digital information network, will ever exist without some physical link.

Monday, 23 September 2019

Self-tracking as Small Hurdle for Big Data


Tamar Sharon and Dorien Zandbergen present an interesting enough discussion of the general pros and cons of data and its power. After addressing the fact that there is genuine contention around Big Data and the society formed as a result, the article focuses on a central component of data society: the notion of "tracking."

Tracking, in this sense, is essentially the point of data in contemporary society. As Sharon and Zandbergen state, data in our world has "its value framed in terms of political power, insofar as it enhances various forms of government surveillance, and in terms of monetary resource, as it benefits corporate profit." (p. 1696). Tracking is the application of collected data to visualise the movements and potential behaviour of groups of people.

Getting into the different ways that people are tracked, we are introduced to case studies wherein people found that creating their own means of tracking themselves, generating individual data theoretically separate from the Big Data that is logged, was a "liberating" experience. This may seem like a resistance against data-driven society in the making, but it isn't. It is more like a reverse-psychological phenomenon: opting out of the big data pool by creating your own data is still the generation of data and, depending on the method, this information can still be used and manipulated.

These self-tracking people believe that their practices place them outside the influence of Big Data. However, I would argue there is a paradox here: the meaning associated with data is its ability to track the 'movements' of people, and self-tracking achieves the same goal as Big Data, just in a different way. An argument can be made for the inaccessibility of that individual data because it is generated away from the same grid, but as surveillance increases this data is likely to end up in the same pool.

Monday, 16 September 2019

The Arts Degree Problem


It is difficult to look at The future isn't working, the chapter from Nick Srnicek and Alex Williams, and consider the problems with what they're arguing without falling into a narrow "the problems with Marxism in general" mindset.

Partly because of this, and for other reasons such as not having done enough research to appropriately back my responses, I actively avoided coming up with a response to the whole article. If I were to try: the piece is at least interesting, and they are obviously passionate about what they are saying. They are definitely optimistic, which is nice in the arts world, and they are right about the title statement.

What I am comfortable talking about is an issue that comes up around pages 90-91; something I will call the Arts Degree Problem. In breaking down the "composition of the surplus population", the writers claim there are four different strata.

1. the capitalist segment: the unemployed and underemployed within typical capitalist circumstance.
2. the non-capitalist segment: the same segment minus any social safety net, people who cannot afford to be without work for long because of this.
3. the latent segment: qualified working people who might suddenly lose that status through social development.
4. the inactive segment: disabled people, prisoners, students, etc.

Looking specifically at that third segment, Srnicek and Williams explain that:

"a third latent group exists primarily in pre-capitalist economic formations that can be readily mobilised into the capitalist labour market. This includes the reservoir of proto-proletarians, but this group also includes unwaged domestic labourers, as well as salaried professionals who are under threat of being returned to the proletariat, often through deskilling."

That last bit, the salaried professionals, that's us. The people who hold jobs produced by arts degrees are the first to become obsolete with economic evolution. Fewer and fewer companies need sociological thinkers, while the demand for technically specialised people is increasing intensely. This is an indirect result of automation, as discussed in the article: menial tasks are automated, the automation itself creates a demand for technicians, and in the middle is a no-man's land of people whose education focused on thought and creativity rather than technical and practical application, who are simply not relevant in a world based on linear practicality.

This is why people with engineering degrees give us a hard time: they saw that we basically got degrees in something fun, rather than something useful, in light of that future we're going into that isn't working. To me, this means I need to look at adapting what I've learnt from the arts to work as something I probably never thought I would.

Monday, 26 August 2019

Are We Really Better Than Algorithms?


It's interesting looking at this week's scholarship on algorithms (two pieces in particular), situating algorithms in the contexts of everyday life and of us as human beings, respectively.

Algorithms are defined and re-defined in many ways in both texts, but the most explicit case describes the algorithm as "incredibly relational- it is the relation that defines, describes and shapes how that data are then (re)presented. These relations are defined and designed by the architects of the algorithm according to a design brief, a particular desire or identified output, and shaped by technical specificity, commercial incentive and social predispositions, bias and cultural understandings." (Willson, p. 148)

A recurring idea is that algorithms are these models built on collected data and used to manipulate and achieve various goals. There is a tone of warning across the two texts, which is situated in a general global attitude that an algorithm-heavy world removes humanity and autonomy from existence.

At the risk of sounding super cynical, which is almost a cliche in the Arts faculty, I feel as if algorithms just reflect human nature. The argument that what separates algorithms from people is our ability to register and appreciate "concepts", "context" and "judgement", so that the right result, as opposed to the merely correct result, is reached in a given situation, feels off.

Algorithmic bias doesn't seem to be the result of an algorithm's lack of contextual appreciation, but rather of the design of the algorithm, and, as it stands, people design said algorithms. Hence, the bias comes from the people, not the algorithm. In addition, people are biased as all hell. People often invoke notions like fairness and objectivity around things like statistics when making a case for the rationality of their decisions, usually to rebut accusations of bias, but this process usually involves a selective use of data. Algorithms, although unable to consider compassion on a case-by-case basis in the way we would ideally believe humans can and would, consider all the data we have access to, much faster, and formulate "opinions" with consideration of all aspects of a scenario.

In essence, numbers don't pigeon-hole people; people pigeon-hole people, and use numbers to make it look like they don't.

Monday, 19 August 2019

Are you sure about that?


I noticed a particularly clear shared theme, which is unusual for me in this class, between some of the reading material this week. The Iliadis and Russo reading, as well as the introduction chapter from Mark Andrejevic, both addressed the idea of certainty in the world of ubiquitous media.

In breaking down Critical Data Studies (CDS), Iliadis and Russo illustrate Big Data not just as an environment of information, but more realistically as an archive of fiction and fact. They present the idea of data disorder: a multiplicity, and subsequent conflict, of primary, secondary, derivative and metadata. The central point is that the infinite broadness of a big data world creates a lack of clarity and, in doing so, a lack of substantive conclusions. They go on to point out how, under the veil of "openness", such a multitude of supposed information (the word itself sitting outside the distinction between fact and fiction in published content) can be counter-intuitively weaponised in a war against absolute understanding.

The article, overall, illustrates a causative relationship between big data and uncertainty: an uncertainty born of a growing inability to make absolute conclusions, stemming from an increasing multitude of conflicting statements facilitated by modern media.

Andrejevic takes this a bit further. He discusses the idea established here more directly by labelling it a paradox. The paradox itself is expressed as: "increased access to information means it becomes impossible to comprehend it all", or "all the info means no info."

This continues into a lack of trust in news media because of an increase in counter-news, as well as people's distrust of mainstream media based on the idea of partiality: a reflexive awareness of incompleteness. He also directly discusses the "borrowed kettle" media metaphor, which refers to confusing stories by using multiple narratives: a culture of multiple, intended-use instructive narratives rather than a dominant one.

One thing Andrejevic says that I'm not sure I agree with is his discussion of decision "paralysis." David Shenk makes the claim that there is a paralysis of decisions in the world we live in; essentially, that people are avoiding conclusions because the amount of information available is too daunting. Andrejevic tries to debunk this with the claim that people continue to make decisions all the time, especially given pressure to do so.

He misses Shenk's underlying point. The "paralysis" refers to being uncertain of any decision we make, to questioning whether any real decisions are even made in the big data era, not to the actual making of decisions. Essentially, we are paralysed in our decision making because we are uncertain whether our decisions matter, because we can't be sure of the authenticity of any information we are exposed to.

For example, the scarcity of information in Athenian democracy led to certainty: one source, one understanding.

Monday, 12 August 2019

A Crash Course on Embodiment


Taking the philosophical idea of embodiment and placing it in the context of ubiquitous media might be the single most central component of actualising the future its proponents pursue.

Although extremely heavy on theory rather than real-world media technology, Paul Dourish's article outlines a clear picture of what "embodiment" actually translates to.

The conclusion of the piece states that Dourish's preliminary understanding of embodiment was as things that occur in real time and space. He develops this to say that embodiment is our engagement with that reality which results in meaning: what might be called life. He then brings in the link to technology by explaining that embodied interaction is the application and influence of this life-meaning through artefacts, or media.

This is built via a crash course in phenomenological academia regarding the notion of embodiment. Phenomenology is still not entirely clear to me; I think it's the study of things as they appear, and how that reflects existence, rather than typical philosophy, which is usually about the wider nature that constitutes things and existence.

This crash course essentially follows this syllabus:

Edmund Husserl; how the life-world is based in everyday embodied experience

Alfred Schutz; how the ‘life-world’ could be extended to address problems in social interaction

Martin Heidegger; embodied action is essential to our mode of being and to the ways in which we encounter the world

Maurice Merleau-Ponty; the body is critical in mediating between internal and external experience

Altogether, when considered in relation to the vision for ubiquitous media, the philosophy of embodiment would apply in the sense that, presumably, technology would be as tacit as any other day-to-day action and would thus be part of 'life' itself, thereby being the legitimate thing with which we take part in life. I would almost argue that, following this notion, the idea of internal and external experience needs to be rethought.



Dourish, P. (2004). "Being-in-the-World": Embodied Interaction. In Where the action is: The foundations of embodied interaction (1st ed., pp. 99-126). Cambridge: MIT Press.