Monday, 7 October 2019

Turning Life Into a Game


Saker and Evans' article, which examines the impact of applications such as Foursquare and Dodgeball on human interaction with what is broadly referred to as "space" (essentially, our environment), provides an interesting starting point for examining wider social media use.

Concerning Dodgeball, Nicole Lee writes in Engadget: "Dodgeball [was] the predecessor to Foursquare that founder Dennis Crowley created in 2000 with fellow NYU student Alex Rainert. Born from the frustration of not knowing where people were partying, Dodgeball was a fairly basic service. Simply text your location to a city-specific Dodgeball email address (say, sf@dodgeball.com or ny@dodgeball.com), and friends would instantly be alerted to where you were as long as they were signed up to the service as well. Crowley called it the 'Friendster for cellphones.'" (2014)
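The mechanic Lee describes is simple enough to sketch. The following is purely illustrative, my own toy version rather than anything from Dodgeball's actual implementation, with invented users and venues:

```python
# Toy sketch of the Dodgeball mechanic: a user reports a location to a
# city-specific address, and every signed-up friend is alerted.
friends = {
    "dennis": ["alex", "nicole"],  # hypothetical friend lists
    "alex": ["dennis"],
}

def check_in(user, city, venue):
    """Broadcast a check-in to everyone on the user's friend list."""
    for friend in friends.get(user, []):
        print(f"to {friend}: {user} is at {venue} (via {city}@dodgeball.com)")

check_in("dennis", "ny", "Luna Lounge")
```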

Foursquare, which is still operational, describes itself as follows: "Foursquare City Guide will lead you to the perfect spot -- anywhere in the world. Get helpful, positive tips from our trusted global community and keep track of where you've been and where you want to go, all in one place." (2019)

As the article being discussed here explains, the idea of these two apps is to motivate people to visit physical spaces via digital immersion, predicated on goals situated within the realm of social media. 

Interestingly, this piece was published in 2016 and doesn't mention Pokemon GO, which as a phenomenon was credited with doing a similar thing: bringing out those who may not typically leave their houses, using success in a digital space as a motivator. Unsurprisingly, Dodgeball is dead, and Pokemon GO is all but dead too.

What I personally took away from this discussion is that while these games might be dead, the principal idea behind them has managed to infiltrate common social media use. Foursquare, in particular, is built on "check-ins" and the subsequent rewarding of those check-ins (a loop sketched below), playing into people's naturally competitive nature. Modern social media usage still incorporates check-ins heavily as part of its general use, without the explicit play component.
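As a rough illustration of that reward loop, here is a minimal sketch of my own, loosely inspired by Foursquare's points and "mayor" mechanics rather than taken from them; the users and scoring rule are invented:

```python
# Toy check-in reward loop: repeat check-ins accumulate points, and the
# current leader gets a badge, which is the competitive hook.
from collections import Counter

points = Counter()

def check_in(user, venue, points_per_checkin=10):  # hypothetical scoring rule
    points[user] += points_per_checkin
    leader, _ = points.most_common(1)[0]
    badge = " (current leader)" if leader == user else ""
    print(f"{user} checked in at {venue}: {points[user]} pts{badge}")

check_in("morgan", "cafe")
check_in("morgan", "gallery")
check_in("sam", "cafe")
```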

While these games may have succeeded, in a sense, in bringing people out to new places, it could firstly be said that this still wasn't for the right reasons, as one could argue that visiting new places because you want to win the 'most new places' contest isn't exactly the point of doing so, nor is it especially healthy (see Pokemon GO). Secondly, check-in culture, which has somewhat birthed what we could call Instagram culture, is all about being seen somewhere rather than engaging with it; this ties in nicely to the overarching issue with a lot of social media behaviour, rooted in the idea of insincerity.

If everything we do is part of a digital contest based on being "seen" in places rather than engaging with them, I feel this may continue to lead to a somewhat meaningless and hollow social landscape.






Monday, 30 September 2019

I am living with nodes. I just have to pull back, because I am limited. Because I have nodes.


First off, I freaking hated this reading. It was extremely complicated for someone with only a surface-level understanding of the computational language used. On top of that, Franklin has a habit of writing extremely long sentences and over-explaining a lot of what is said, while never actually breaking down any of his concepts in the interest of accessibility. The reader envisioned here already has a complex understanding of how Cloud technology works, which makes the over-explanation feel redundant. The most helpful part of the article, to me, was the beginning, where Aristophanes' play The Clouds is used to illustrate where the concept of the Cloud in computation originates. I'm aware this may be Art student bias at work.

One [small] section of this article that I did find interesting, and that actually made some semblance of sense, was the section on "nodes" (p.458). Franklin makes the claim that it is impossible for the Cloud to be completely removed from how we understand the Internet. The idea being refuted is that the Cloud aims to uncouple connectivity from node infrastructure in order to create a completely free and limitless digital space, nodes here referring to points of communication, such as PCs or phones. The suggestion is that the Internet as we know it is still a tangible network because it needs these nodes to be in operation, as these points of communication are what allow the Internet to interact with users and thus serve its assumed purpose.

The belief that the Cloud embodies ubiquity in a way that would mean we no longer need these points of communication, through an elimination of hardware, is false, as Franklin explains:

"Where the web-type network assures the possibility of measurement and representation by counting nodes and edges, then, the cloud eliminates the representation but not the existence of these constitutive units." (p. 458)

True, the Cloud presents as a type of autonomous digital organism, whereas the Internet presents as an ecosystem of many such digital organisms, but it is still a thing that is ultimately rooted in the physical: without any physical space to exist from or be interacted with, it fails to be. The Cloud is and always will be a digital thing, which means it draws its life from computers, which, at the end of the day, are physical things. The evolution of computers is at this stage impossible to really envision a limit to, but I feel it is safe to say that neither they, nor any digital information network, will ever exist without some physical link.

Monday, 23 September 2019

Self-tracking as Small Hurdle for Big Data


Tamar Sharon and Dorien Zandbergen present an interesting enough discussion regarding the general pros and cons of data and its power. Starting by acknowledging that there is contention regarding Big Data and the society formed as a result, the article focuses on a central component of data society: the notion of "tracking."

Tracking, in this sense, is essentially the point of data in contemporary society. As Sharon and Zandbergen state, data in our world has "its value framed in terms of political power, insofar as it enhances various forms of government surveillance, and in terms of monetary resource, as it benefits corporate profit" (p.1696). Tracking is the application of collected data to visualise the movements, and potential uses, of groups of people.

Getting into the different ways that people are tracked, we are introduced to case studies wherein people found that creating their own means of tracking themselves, generating individual data theoretically separate from the Big Data being logged, was a "liberating" experience. This may look like a form of resistance against data-driven society, but it isn't. It is more like a reverse-psychological phenomenon: not wanting to be part of the big data pool by creating your own data is still the generation of data and, depending on the method, this information can still be used and manipulated.

These self-tracking people believe that their practices place them outside the influence of Big Data. However, I would argue there is a paradox here: the meaning associated with data is its ability to track the 'movements' of people, and self-tracking achieves the same goal as Big Data, just in a different way. An argument can be made for the inaccessibility of that individual data because it is generated away from the same grid, but as surveillance increases this data is likely to end up in the same pool.

Monday, 16 September 2019

The Arts Degree Problem


It is difficult to look at The future isn't working, the chapter from Nick Srnicek and Alex Williams, and consider the problems with what they're talking about without falling into a narrow "the problems with Marxism in general" mindset.

Partly due to this disclaimer, among other reasons, such as not having done enough research to appropriately back my responses, in looking at this reading I actively avoided coming up with a response to the whole article. If I were to try: the piece is at least interesting, and the writers are obviously passionate about what they are saying. They are definitely optimistic, which is nice in the arts world, and they are right about the title statement.

What I am comfortable talking about is an issue that comes up around pages 90-91; something I will call the Arts Degree Problem. In breaking down the "composition of the surplus population", the writers claim there are four different strata:

1. the capitalist segment: the unemployed and underemployed within typical capitalist circumstance.
2. the non-capitalist segment: the same segment minus any social safety net, people who cannot afford to be without work for long because of this.
3. the latent segment: qualified working people who might suddenly lose that status through social development.
4. the inactive segment: disabled people, prisoners, students, etc.

Looking specifically at that third segment, Srnicek and Williams explain that:

"a third latent group exists primarily in pre-capitalist economic formations that can be readily mobilised into the capitalist labour market. This includes the reservoir of proto-proletarians, but this group also includes unwaged domestic labourers, as well as salaried professionals who are under threat of being returned to the proletariat, often through deskilling."

That last bit, the salaried professionals, that's us. The people who hold jobs that are produced by arts degrees are the first to become obsolete with economic evolution. Fewer and fewer companies need sociological thinkers, while the increase in demand for technically specialised people is intense. This is an indirect result of automation, as discussed in the article: menial tasks are automated, the automation itself creates a demand for technicians, and in the middle is a no-man's land of people whose education focused on thought and creativity rather than technical and practical application, people who are simply not relevant in a world based on linear practicality.

This is why people with engineering degrees give us a hard time: they saw that we basically got degrees in something fun, rather than something useful, given the future we're going into, the one that isn't working. To me, this means I need to look at adapting what I've learnt from the arts to work in something I probably never thought I would.

Monday, 26 August 2019

Are We Really Better Than Algorithms?


It's interesting looking at this week's scholarship on algorithms (two texts in particular), which situate them in the contexts of everyday life and of us as human beings, respectively.

Algorithms are defined and re-defined in many ways in both texts, but the most explicit case describes them as "incredibly relational- it is the relation that defines, describes and shapes how that data are then (re)presented. These relations are defined and designed by the architects of the algorithm according to a design brief, a particular desire or identified output, and shaped by technical specificity, commercial incentive and social predispositions, bias and cultural understandings." (Willson, p.148)

A recurring idea is that algorithms are these models built on collected data and used to manipulate and achieve various goals. There is a tone of warning across the two texts, which is situated in a general global attitude that an algorithm-heavy world removes humanity and autonomy from existence.

At the risk of sounding super cynical, which is almost a cliche in the Arts faculty, I feel as if algorithms just reflect human nature. The argument that what separates people from algorithms is our ability to register and appreciate "concepts", "context" and "judgement", so that the right result (as opposed to the merely correct result) is reached in a given situation, feels off.

Algorithmic bias doesn't seem to be the result of an algorithm's lack of contextual appreciation, but rather of the design of the algorithm, and, as it stands, people design said algorithms. Hence, the bias comes from the people, not the algorithm. In addition to that, people are biased as all hell. People often invoke notions like fairness and objectivity regarding things like statistics when making a case for the rationality of their decisions, usually to rebut accusations of bias. But this process usually involves a selective use of data. Algorithms, although unable to weigh compassion on a case-by-case basis the way we would ideally believe that humans can and would, consider all the data that we have access to, much faster, and formulate "opinions" with consideration to all aspects of a scenario.
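To make that point concrete, here is a deliberately simplified sketch of my own (not from Willson or the other text; the scoring rule, weights and applicant are all invented). The arithmetic itself is neutral; whatever bias exists arrives through the weights the designer chooses:

```python
# Two designers, same data, same code path: only the chosen weights differ,
# and that design choice alone changes the outcome.
def score_applicant(income, years_at_address, postcode_risk, weights):
    """A plain weighted sum; the 'algorithm' is the designer's choices."""
    return (weights["income"] * income
            + weights["stability"] * years_at_address
            - weights["postcode"] * postcode_risk)

neutral = {"income": 1.0, "stability": 1.0, "postcode": 0.0}  # ignores postcode
skewed = {"income": 1.0, "stability": 1.0, "postcode": 50.0}  # punishes postcode

applicant = {"income": 48.0, "years_at_address": 3, "postcode_risk": 0.8}

for name, weights in (("neutral", neutral), ("skewed", skewed)):
    result = score_applicant(applicant["income"], applicant["years_at_address"],
                             applicant["postcode_risk"], weights)
    print(f"{name}: {result:.1f}")  # neutral: 51.0, skewed: 11.0
```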

In essence, numbers don't pigeon-hole people; people pigeon-hole people, and use numbers to make it look like they don't.

Monday, 19 August 2019

Are you sure about that?


I noticed a particularly clear shared theme, which is unusual for me in this class, between some of the reading material this week. The Iliadis and Russo reading, as well as the introduction chapter from Mark Andrejevic, both address the idea of certainty in the world of ubiquitous media.

In breaking down Critical Data Studies (CDS), Iliadis and Russo illustrate Big Data as not just an environment of information but, more realistically, as an archive of fiction and fact. They present the idea of data disorder: a multiplicity, and subsequent conflict, of primary, secondary, derivative and meta data. The central point is that the infinite broadness of a big data world creates a lack of clarity and, in doing so, a lack of substantive conclusions. They go on to point out how, under the veil of "openness", such a multitude of supposed information (the word itself sitting outside the idea of fact and fiction in published content) can be counter-intuitively weaponised in a war against absolute understanding.

The article, overall, illustrates a causative relationship between big data and uncertainty: an uncertainty discerned from a growing inability to make absolute conclusions, stemming from an increasing multitude of conflicting statements facilitated by modern media.

Andrejevic takes this a bit further. He discusses the idea more directly by labelling it a paradox, expressed as: "increased access to information means it becomes impossible to comprehend it all", or "all the info means no info."

This continues into a lack of trust in news media because of an increase in counter-news, as well as people's distrust of mainstream media based on the idea of partiality; this reflexive awareness of incompleteness. He also directly discusses the "borrowed kettle" media metaphor, which refers to confusing stories by using multiple narratives: a culture of multiple, purpose-built instructive narratives rather than a single dominant narrative.

One thing Andrejevic does say that I'm not sure I agree with is his discussion of decision "paralysis." David Shenk makes the claim that there is a paralysis of decisions in the world we live in; essentially, that people are avoiding conclusions because the amount of information available is too daunting. Andrejevic tries to debunk this with the claim that people continue to make decisions all the time, especially given pressure to do so.

He misses Shenk's underlying point. The "paralysis" refers to being uncertain of any decision we make, questioning whether any real decisions are made in the Big Data era, not to the actual making of decisions. Essentially, we are paralysed in our decision-making because we are uncertain whether our decisions matter, because we can't be sure of the authenticity of any information we are exposed to.

For example, the scarcity of information in Athenian democracy led to certainty. One source, one understanding.

Monday, 12 August 2019

A Crash Course on Embodiment


Taking the philosophical idea of embodiment and placing it in the context of ubiquitous media might be the single most central component of actualising the future the field pursues.

Although extremely heavy on theory rather than real-world media technology, Paul Dourish's article outlines a clear picture of what "embodiment" actually translates to.

The conclusion of the piece states that Dourish's preliminary understanding of embodiment was that it concerns things occurring in real time and space. He develops this to say that embodiment is our engagement with that reality in a way that results in meaning; what might be called life. He then brings in the link to technology by explaining that embodied interaction is the application and influence of this life-meaning through artefacts, or media.

This is built via a crash course in phenomenological academia regarding the notion of embodiment. Phenomenology is still not entirely clear to me; I think it's the study of things as they appear and how that reflects existence, rather than typical philosophy, which is usually about the wider nature that constitutes things in existence.

This crash course essentially follows this syllabus:

Edmund Husserl; how the life-world is based in everyday embodied experience

Alfred Schutz; how the ‘life-world’ could be extended to address problems in social interaction

Martin Heidegger; embodied action is essential to our mode of being and to the ways in which we encounter the world

Maurice Merleau-Ponty; the body is critical in mediating between internal and external experience

Altogether, when considered in relation to the vision for ubiquitous media, the philosophy of embodiment would apply in the sense that, presumably, technology would be as tacit as any other day-to-day action and thus would be part of 'life' itself, therefore being the legitimate thing with which we take part in life. I would almost argue that, following this notion, the idea of internal and external experience needs to be rethought.



Dourish, P. (2004). "Being-in-the-World": Embodied Interaction. In Where the action is: The foundations of embodied interaction (1st ed., pp. 99-126). Cambridge: MIT Press.

Monday, 5 August 2019

On the Wedge of Glory



It would make sense to take a simple concept and spend an entire chapter elaborating on it when the inferred idea behind the "wedge" concept is established on the first page.

Richard Coyne establishes early in the text that the physical form of a wedge is being used as a symbol for innovation. He offers the statement "it surely is an instrument of adjustment" in response to the question of what the most pervasive device throughout modern development is, while going on to point out the importance of "small scale interventions" to the success of said development.

The idea of a wedge, a small tool used to make adjustments that contribute to the overall cohesion of development through history, is set up as a metaphor for the small changes made as part of the technological evolution taking place in the pursuit of ubiquitous media. The implication is that the sort of innovation needed cannot be done in leaps and bounds. This notion is correct in a couple of senses.

The first is that there are limitations that immediately bar human capacity for large jumps in innovation of this kind. What I mean, and what Coyne painstakingly spells out for the reader, is that if humankind discovers fire on Tuesday, they won't have coal engines on Friday. Given the wooden wedge metaphor being applied here, maybe the more appropriate example is that if you have a stable surface on Tuesday, you won't have apartment buildings on Friday. The wedge represents the little innovations on the journey towards completing big ones. In this way, technological innovation is the same, albeit more rapid than other advances in our history.

The second, and less clearly addressed, is that if innovations too grand were introduced to humankind, there would likely be confusion and fear creating a backlash against said introduction. This is implied in Coyne's chapter in the discussion of calibration and tuning. What I discerned from this is the importance of considering the environmental factors of innovation. Things need to cooperate together, rather than having any one thing advance significantly past another, otherwise the overall result is incompatibility. This idea is more sociological than physical.

Tuesday, 30 July 2019

I've Been Everyware, Man, and it's Stressful

It's intrinsically interesting reading about a concept as readily assumed now as everyware in the context of its far-off potential, as it was seen thirteen to fifteen years ago in the texts presented.

Focusing on the weightiest of the three texts, Adam Greenfield submits what is essentially a pitch for an idealised everyware-structured society. While the ominous nature of this kind of techno-evolution is touched on, it is not discussed further, in a conscious choice to focus on the more positive attributes of this notion that everything we are and come into contact with can serve as a form of information translation.

He frames the nature of a world where even our "sweaters" (?) are embedded with info-processing technology as the bright future ahead, introducing Mark Weiser and PARC's idea of "calm technology" and Greenfield's own recurring theme of 'hassle-free' hardware and inter-connectivity. 

This is what gives away the 2006-ness of his piece. Discussions of progressions of this kind a decade later tend to focus more on the dangers of such a mediated world, mostly since we are now deep in an irreversible state of that world. The idealism is also prevalent in the somewhat misguided attitude behind the language used. The fact that all the various social, work and education facets of my life are interconnected through the devices I use, and that the software involved in any and all of these processes raises an increasing number of questions regarding privacy and mental and physical well-being, is far from calm or without hassle, whatever the article says about moving away from the "hassle" of the PC-oriented world.

Knowing that I cannot be without technology, and that I cannot truly exit the constant network of stuff in my life, is a reasonable contributor to the underlying anxiety that flavours contemporary existence, stirred in with the issues of climate, politics, the economy and so on. This notion of calm technology is presented as synonymous with invisible technology, and this is false. The article discusses the immediate processing power of everyware, using the Mastercard PayPass as a case study, saying that the chips used (what we would now regularly use and identify as "paywave" technology) process all the necessary information to authorise a transaction in 0.2 seconds. This may be invisible in that we are not exposed to all that goes on in that time frame, but it is not calm. It is a stressful amount of media noise being exchanged between different electronic bodies. Just because we as users don't hear it doesn't mean that the pollution of that noise isn't something to consider.
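As a rough illustration of how much traffic hides inside that 0.2 seconds, here is a toy sketch of my own; the message hops are a simplified guess at a generic contactless flow, not MasterCard's actual protocol:

```python
# Toy contactless authorisation: each hop is invisible to the user,
# but every one of them is still an exchange of "media noise".
import time

def tap_to_pay(card_token, amount):
    hops = [
        "reader -> card: request payment credentials",
        f"card -> reader: send {card_token}",
        f"reader -> acquirer: forward request for ${amount:.2f}",
        "acquirer -> issuer: check funds and fraud rules",
        "issuer -> reader: approved",
    ]
    start = time.perf_counter()
    for hop in hops:
        print(hop)
    elapsed = time.perf_counter() - start
    print(f"{len(hops)} messages in {elapsed:.4f}s (budget: 0.2s)")

tap_to_pay("token-1234", 4.50)
```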

The other concern is the idea of "hassle" as it is presented here. The piece suggests that using a PC was a hassle because it stagnated the fluidity of technology use during day-to-day life. Anyone who has a smartphone now knows just how much of a hassle THAT technology has become, given that studies have suggested the average American checks their smartphone every 12 minutes (Asurion, 2018). If our connection to the internet of things were limited to an hour or two a day, maybe some of that anxiety I spoke about would subside.