FOUND SUBJECTS at the Moonspeaker
Private, Singular (2017-05-07)
There is no shortage of argument about what counts as public versus what counts as private, and what these two words even really mean, especially when there is a significant, if still numerically small, group of people who want to persuade the rest of us that privacy is a right only of the rich and powerful, and no more than a luxury at best for the rest of us. It seems obvious to me that anything suggested by a small cadre of capitalists with delusions of new businesses in privacy creation, or by paranoid insecurity specialists, must be immediately labelled "don't agree to anything these snake oil salesmen suggest." Nevertheless, that doesn't mean we shouldn't give some more careful thought to these terms. By this I don't mean to engage in the mug's game of "deconstruction" beloved of postmodernists hoping to persuade us all that everything oppressive is good for us and there is no such thing as systemic oppression that can't be disarmed by individual action. Rather, my suggestion is to have a closer look at the etymologies and what we actually expect when we label a situation or thing public or private. For example, if we consider the latin roots of these words, some interesting and familiar concepts are brought back to the forefront.
Take private, which is basically an anglicization of privatus, a descriptive term for a man who has withdrawn from "public life," which in this case means participation in the course of offices in the roman government, especially in the republican period. The term itself comes from the same root as the verb privare, meaning "to deprive, take away," and the noun privus, "individual." This is quite interesting, because it implies, at least when the terms are placed together here, that an individual is originally a person who has lost or otherwise had other people separated from them. That is, being privus wasn't a neutral state so much as an outcome of some sort of process that disconnected the person in question from others. This is a connotation that has been creeping into the current term "individual" as well. Yet it comes mainly from the meaning of the latin perfect passive participle of privare, which is privatum. Stepping back from such value judgements and accidents of grammar, for myself, my sense of the current meaning and content of "private" as an adjective is that whatever is private is held separate from other people. The private idea, thing, experience, or whatever, is not for sharing and is kept away from others. The ability to set aside certain things as private is important to both personal psychological and social wellbeing.
"Public" is a bit more complicated in its etymology, because it has wended its way from the latin term populus meaning "people," and the related term pubes "adult, male able to bear arms," through old french and middle english. Today the simplest definition of the term is "concerning the people as a whole" an impressively short one from my trusty OED. So by contrast with "private," whatever is public is what is shared with and of concern to the wider community. This is the spot where things get trickier, because at least in "the west" many countries still labour under the various egregious forms of coverture that often have their official legal roots in the roman legal digests. "Coverture" is the formal term from now superseded british common law which claimed that a married woman ceased to be a separate person under the law since her husband could now do all the decision-making concerning her. This subsumed status was not just for the wife, hers was just the one that had the fancy name. Any minor children, all unmarried daughters, widowed mothers and sisters, and slaves originally also had no official decision-making powers independent of the male head of household under roman law and various other legal systems attempting to refuse personhood to the vast majority of the population. How this now discredited concept has made the notion of "public" so confused is that so many extra people got attached to a single male. As long as those extra people were subsumed under one male, then that group became an effective "person" who could make certain things private.
Despite so many social and legal changes, the disaggregation of all those people into individual persons has not been fully absorbed into the mainstream notions of public and private. Or at least, there is a significant number of people fighting an ongoing rearguard action against the implications of that disaggregation, which includes the recognition that they do not have the right to impose what they think should be done on individuals other than themselves or those whom they are directly responsible for. Then of course there is the insinuation made so often by tech boosters that no one has the inherent right to privacy, and that privacy is some sort of unnecessary thing now that computers can be used to violate it so completely. But again, that ties back to the question of what should be considered public, and who is to be recognized as a person, or if we are being latinate about it, a pubes in its first sense of "adult." (Top)
Is Analogue the New Luxury? (2017-05-07)
Now that I've finished David Sax's book The Revenge of Analog, which is well worth the time for anyone curious about digital versus analogue formats and tools, I find myself left with plenty to think about from the final chapter, which deals with what silicon valley types think of "analogue." Contrary to the ever louder insistence in the mainstream media that analogue really is dead except among a few people overrun by nostalgia and fear of the future, it seems that tech companies and venture capitalists could not agree less with this assessment. In fact, the latter see opportunities to make money, while the former revere analogue tools where they function better than anything else. I actually am inclined to agree with the valuation of the best tool for the particular job at hand. There is no way to make my cell phone as convenient a quick note and sketch taking device as my notebook and pencil – well, unless I stick it in a case with a notebook and pencil attached to it, I suppose. The venture capitalists on the other hand, and not a few of the tech company denizens, framed "analogue" as "luxury" because analogue goods can be sold for higher prices on more or less rational grounds. This strikes me as a disturbing reframing of "analogue" as a general concept, as opposed to a subset of things or services that we could label with the word. It is not a surprising reframing per se, since many of the people interviewed for the chapter are members of the notoriously toxic male-dominated silicon valley start up culture. But the problematic reframing should not be lightly dismissed as nerd posturing.
It would be tempting to locate the problem in the elitism inherent in treating analogue as the new luxury. Elitism is officially not well-regarded in western mainstream culture, even though it is unfortunately rampant and getting dangerously worse right now, especially as reflected in the growing and remarkable contempt for any sort of labour that demands human relationship (teaching, service jobs) or physical activity (manufacturing, trades). Then the problem would lie simply in how a particular good or service is rendered a luxury by its relative scarcity and therefore how much more expensive it is than others. The digital is easily mass produced and therefore cheap, a status often equated with disposability and lack of real value. Analogue is not easily mass produced and paradoxically depends on increased and otherwise despised manual or relationship labour. The resultant goods are not only worth curating, they are built to last. The services of course are not curatable in themselves, but they provide lasting memories instead. This all sounds familiar and plausible, but in fact it is no explanation or reason but a tautology.
Instead, the issue is that the reframing is more of a recreation of "analogue" in a way that leans towards appropriation of whatever is designated "authentic." "Analogue" not as something that isn't digital, but as an expression of the authentic. "Authenticity" is a contested term, although in my various encounters with it in the context of goods and experiences, it often refers to objects made by hand and experiences that can be construed as "direct" in some way. Indigenous communities already know all about how this works. Colonizers and other outsiders swoop in, pay ridiculously low prices or nothing to scoop up "authentic" Indigenous items and have authentic wilderness or cultural experiences, then resell the objects or experiences at spectacular mark up for their own profit. Meanwhile, the labour, goods, and cultures of the Indigenous communities are strip mined. This process has been turned on rural communities as well, especially since the end of world war two.
There's no reason for "analogue" to become the newest iteration of what Deborah Root describes in Cannibal Culture: Art, Appropriation, and the Commodification of Difference. As Sax wrote and I have already mentioned, there is a better path already available for "analogue" anything: to value objects and experiences that are not primarily products of digital tools and methods when they are the best options, and ensure those better options are accessible to as many people as possible. It will be tricky to take that path if we simply accept a new equation between analogue and luxury, however. (Top)
We Are Not Yours (2017-05-06)
I have been thinking about how to write this particular thoughtpiece for some time. I went back and forth, trying to find a good picture to illustrate it with, wishing I could find an online archive of the works of an amazing Indigenous political cartoonist whose work regularly made it into the AMMSA newspapers for awhile. After wasting some time on that, I finally realized that this thoughtpiece is about something that needs to damned well be taken seriously. I still get a merry earful every other day about "reconciliation" and "decolonization" while the same non-Natives who produce endless platitudes about that also like to refer endlessly to "Canada's Indigenous people." Here's the newsflash repeated all over again in the hope that folks will clue in before somebody punches them in the nose:
WE ARE NOT YOURS.
UPDATE 2018-05-25 - Gregory Younging has just published an excellent new resource that provides guidance on why writing as if Indigenous peoples belong to canada is at best an error, and what to do instead. The resource in question is Elements of Indigenous Style, and interested readers can purchase hard copies for $20 canadian or an electronic copy for $12 direct from brush education or their local bookstore. Readers may also want to have a look at the annotated reading of the new guide on the Moonspeaker.
I picked the newspaper headline to illustrate this problem for a reason. Newspaper headlines are not decided on by the journalists who write the articles. They are written and rewritten into today's clickbait and yesterday's analogue scandal by editors who have been taught that this is how to sell news. So my purpose here is not to criticize the journalist whose article was saddled with this atrocious use of language that implies "canada" owns Indigenous peoples the way mainstream canadians feel they own their pets. If this seems an extreme characterization, try out this: "Canada's whites raise their voices as income inequality skyrockets." If that reads oddly to you, this second version will blow your mind: "Canada's white people raise their voices as income inequality skyrockets." Here's another one that may discomfort differently and make the point a little better: "Canada's Québécois celebrate another political milestone." Regardless of anglophone views of "the french question," this headline would not pass muster anywhere, including in a newspaper like the guardian.
I have observed that this is one of those places where the argument that these are "only words" is usually deployed, except words aren't "only" anything; that's why all of us can be incensed by words that are wrong or disrespectful. The trouble with racist phrasing like this, implying canada owns Indigenous people – or anyone, for that matter – is what it reveals about attitudes towards the people supposedly being owned. Presently, what or who you own may be treated as property, in the sense of being a possession a person has the right to dispose of as they see fit. That is certainly the attitude that colonial governments hold towards Indigenous peoples, and the attitude is far from new and still all too entrenched. Some of the examples below made an appearance in an earlier thoughtpiece.
A surprisingly powerful step to take in challenging and uprooting internalized racist behaviour, which will support putting an end to systemic racism, is to root out any references to Indigenous peoples as "belonging" to canada. One of my favourite not too awkward alternatives is "Indigenous peoples in present-day canada." Sure, it's longer and reiterates points that may be less popular among those still inclined to deem colonialism a lesser evil and pretend a few hundred years of colonial nationhood holds a candle to Indigenous nationhood. But it reflects the truth. WE ARE NOT YOURS. (Top)
There Is Such A Thing As Too Perfect (2017-05-05)
Having survived the most recent end of academic semester, I have been reading The Revenge of Analog by David Sax. Sax is an investigative journalist, so although his premise is quite clear from the title, his argument for its validity is provided by a series of case studies examining the different ways various "doomed" analogue formats have not only survived digital but are unexpectedly thriving. Generally I have only so much interest in the loud technobooster arguments for the end of analogue, because I find their insistence on referring to the body in absolutely negative terms completely offputting and pointless. After all, there are things that digital is undeniably better for than analogue, when we can access it. There are also undeniably things that analogue is better for. On the digital side, the uses we most often hear of are how computers are a boon to smallscale publishing, but for me the reason I first purchased a computer was far less elevated. I had just started university, and while I could touch type, my error rate was still high, my ability to figure out how to set margins adequately on a manual typewriter was nil, and the time and budget available for me to retype or pay someone else to retype were minimal. This of course is not even remotely in the league of Bette Nesmith Graham, who invented correction fluid and started the first practical revolution for struggling typists everywhere. On the analogue side, there doesn't seem to be quite as clear a set of examples to provide, although Sax brings together an excellent selection.
A key point that Sax notes is the issue of "perfection" in the digital versus analogue context, which he illustrates especially well in his discussion of the application of digital methods to music making. "Perfection" sounds like such a good idea, since it refers to a lack of all faults, defects, mistakes, and so on. Its roots are in latin, and refer to completion with connotations of unchangeability after whatever act or thing is done. This is why in latin and many other languages, the verbs referring to death and dying are often in what is called the "perfect" or past completed tense. Not that anyone generally means to invoke this older, overtly morbid sense when they refer to perfection now. On the other hand, in many cultures only a very specific type of being can be perfect, a divinity, and divinities are definitely not human. Their perfection is itself a hallmark of their eeriness. In fact, if you read descriptions of deities that have been carefully – or pedantically, depending on your point of view – translated from ancient greek such as the homeric hymns, it is very clear that the poet is a bit queasy about the deity in question. Even when the deity has friendly intentions, the flawlessness that gives them away inspires unease. This should not be sneered at as mere superstition. Those old poets had and have a point, though it has taken awhile for more modern examples to draw it out in this age of contempt for the arts and humanities. The drive for perfection of one kind or another has led to all too many sinister decisions.
But that's not where Sax is going in his reflection on digital perfection, and I'm not going any further in that direction either. The more important point is the inhumanness of the apparently perfect. Intensive digital manipulation can produce a result that looks violently artificial, in the sense that it is not possible to see or listen to it without realizing that the product is heavily processed. As Sax notes, in music in particular this has led at times to a method of patching together "perfect" tracks in which the singer's voice is always perfectly even and in tune, because the very "best" bits of performance are cut and patched together to create the most seamless result. Practically speaking, no performer can reproduce that, and this has been true for a long time, perhaps as long as people like Madonna have resorted to the practical necessity of lipsynching their hits while they dance furiously all over the stage. (You can't dance vigorously and sing clearly and in tune; it's physically impossible.) This is the real difference in sound between recordings on vinyl and their digital counterparts, when it exists, because the digital versions have often been reprocessed again to "clean them up." While this is a laudable thing to do when hiss and crackle have harmed the music or picture, the processing and filtering can go far beyond such relatively benign change.
Thinking all of this over, it made me realize that the infinite manipulability of digital methods of producing music or art or whatever we imagine is also its greatest curse. It's the prison of perfection. It seems like a removal of limits, yet it is in fact a much tighter limit, because people tend to respond to it by trying to produce and reproduce the very perfection that now seems so close. The editing extends into the deepest minutiae and becomes a far from enjoyable rabbit hole. The results may also be shockingly predictable, which is, of course, boring. Suddenly everything we can make or edit with a digital tool or application has the same qualities as a blank sheet of white paper for a novice writer who wants to write the next great novel, but is terrified of producing the inevitable less great stuff necessary to get to it. In that moment of terror, it is easy for the novice writer – or musician, or carpenter, or photographer, or cook or [insert your favourite example here] – to forget that making the less than perfect versions is actually a hell of a lot of fun at least as often as it is hellishly frustrating. Anyone who isn't able to enjoy something they've managed to make that is howlingly bad (I think immediately of Plan 9 From Outer Space or the 1980s Masters of the Universe live action movie) is truly missing out. (Top)
No, Those Things Were Not "Natural and to be Expected" (2017-05-05)
Currently there is a non-trivial controversy boiling over in response to a peer-reviewed and duly published philosophy paper by scholar Rebecca Tuvel. It is not at all difficult to learn more about the paper, and even read it and make up your own mind about it. It's well worth the effort, actually. This thoughtpiece is not going to go into the topic of Tuvel's paper or the details of the arguments she lays out. Instead, I find myself driven to respond to a chilling comment made in the course of an on-line discussion of this paper and the subsequent semi-academic, academic, and non-academic responses to it. The statement was to the effect that:
"Well, death threats, and on-line harassment is what anyone who writes something like this should expect. It's only natural that people would respond in this way."
No. Death threats and on-line harassment are not "natural responses." A person who states or argues something that others disagree with has no reason to expect to be harassed and threatened for that reason. They may expect to be ignored, a more common occurrence on the internet, or disagreed with rigorously on many different bases. They may even expect that others may struggle at first to disagree civilly due to the current vogue for texting/emailing/posting without thinking or at least taking a deep breath first. Anything beyond this is not "natural" nor should it ever be "naturalized."
I find it extraordinary that I am writing this when I can navigate to any number of websites where someone is pontificating about the "right to free speech," often referring to the united states constitution whether or not they are american (which is a bit surreal at times). On one hand it truly is extraordinary. On the other, it is not. As I have already written in a previous thoughtpiece, many bits of rhetoric originally used to defend the rights and integrity of others, including and perhaps especially others we disagree with, have been remade into weapons to silence. To silence not those who wish to prevent others from sharing their ideas, being safe, or simply doing their jobs, but those who found that, even though many of these ideas were developed by upper class white men who wanted to protect themselves from aristocrats, those ideas could protect any oppressed person. Which is pretty amazing. But they can also be misused by careful manipulation to increase or reimpose oppression instead, or even to create oppressive conditions where they did not exist before. Unfortunately we humans are nastily clever with language that way.
Now believe me, I get it. I understand the temptation to use language in this poisonous fashion because in the moment it doesn't feel poisonous. The people engaged in this behaviour do so with a level of self righteous feeling that I have only seen before in those I can only describe as religious fanatics. Or maybe it does feel sort of poisonous, but it is easier to rationalize as the linguistic equivalent of bug spray. A nasty necessity, best applied in tiny, closely controlled circumstances. Unfortunately, in cyber space as in firm space, no matter how limited the application, the poison still travels farther and takes out more of the good than the bad. That, and there is no effort whatsoever to act in a strictly limited fashion.
For those inclined to learn more about Rebecca Tuvel and the contents of her paper, a good source is the news coverage at Feminist Current, which includes links to the paper itself and a range of responses to it. For contrast in terms of response to considerations of the arguments Tuvel is examining when presented by a different academic, it is worth reading Adolph Reed, Jr. at commondreams.org. A helpful and interesting response dated 8 May 2017 by Kelly Oliver is also available at The Philosophical Salon. (Top)
The Unnatural City (2017-04-02)
Last year I wrote two interrelated essays, Touch Me Not and Books Are Sharks. The former is more of a meditation than a classic essay, considering the perhaps unexpected relationships between current social attitudes towards the sense of touch and physicality in north american mainstream culture and imaginings of what technology will be like in the future. There I found, and perhaps this is no surprise when among the various tech moguls it is considered unproblematic to refer to the evolutionary marvels at the ends of our arms as "meat styluses," that minimizing touch, minimizing a sense of physicality in the world is central to most tech visions. This is not exclusive to people busy trying to persuade us that the internet of spying things is just what we always wanted. It is a surprisingly strong trend in north american science fiction as well. Not an exclusive trend by any means, because like anyone else I can immediately think of impressive exceptions (Jeff VanderMeer, anyone in the anthologies edited by Athena Andreadis...). But still, it's there, and heavily reinforced by the current crop of tech marketing.
A key reflection of this trend is the unnatural city, or perhaps I should label it, the uber-unnatural city. Cities are incredibly problematic socially and environmentally, and I say this as someone who lives in one. In science fiction, or to be more ecumenical, speculative fiction worlds, the current problems we wrestle with in cities are either somehow magicked away, or the city is a complete dystopia. Regardless, these fictional cities often share an extreme version of the real thing, in that they are described in a way that indicates they were built by completely tearing apart the land they are sited on. Hills levelled, gullies filled in, terracing applied, rivers straightened or enclosed and redefined as sewers. Followed of course, by the endless, grinding work of keeping all that together against gravity and a thousand other factors as the land struggles to get loose again.
Hence pictures like the one illustrating this thoughtpiece. Not only are there more roads, the roads soar through the air, physics be damned. Though I suppose this may be somewhat more plausible than everybody having individual hovercars. There are often no plants or trees, unless they are restricted to flower beds and that sort of thing. If you spend some time browsing the site this picture is quoted from, soon you'll bump into that perennial favourite, the city under a giant glass or other transparent material, sealed off from whatever horrors the weather can throw at it. One of my favourite versions of this happens to be from a programme I have already referenced many times on the Moonspeaker, Doctor Who. The fancier version comes from the reboot, versus the original which tended to lack the budget for pretend aerial shots by the time the planet Gallifrey became more of a location. The city isn't just under a dome, the dome has an extra chunk of city sitting on top.
The representations of enclosure and sealing off are important for the unnatural city, perhaps because this suggests an imposition of order. A place for everything and everything in its place, as the awful cliché runs, yet it also reiterates Mary Douglas' definition of pollution as "matter out of place." To exist and function, the unnatural city must make a place, though I think it would be more accurate in the real world instances to say that the unnatural city first unmakes a place, and then imposes a type of space. "Space" in the sense of something abstracted, think a grid along which all streets and buildings are arranged regardless of intervening things like those pesky hills, gullies, and rivers. That's an easy example with considerable antiquity. But that's not the only abstract space available for imposition. Other types have been based on specific interpretations of texts considered to have divine origins, or reflect a scaled up version of a key type of building, or even the apparent arrangement of the local zodiac.
It isn't necessary for the city to be unnatural in our imaginings or in reality, though I concede if we start building places for ourselves to live that don't start from a basis in destroying the places that were already there and the assumption that there's bound to be another site to move to when this one is spoiled, those may be so different from cities as we know them now that we are forced to call them something else. We may even get to call them "natural" since I suspect if we manage to build such places, we'll also have finally overcome the illusion that we humans are not animals with the supposed right to destroy any other beings that aren't human as we see fit. (Top)
Cui Bono From Robot Armies? (2017-02-24)
Now that we are in the age of internet of shit devices being built into botnets for use in massive ddos attacks, I have observed more and more thoughtful technology commentators struggling to resist the urge to refer to these as "robot armies." It's a strong urge, as reflected in talks like the one delivered in november last year by Maciej Cegłowski at the australian web directions conference, Who Will Command the Robot Armies. In that talk, just as I did myself in this very paragraph, Cegłowski paralleled iot devices to a number of drones and robots under development, some civilian, most being built by the u.s. military research laboratories. Curiously enough, not many folks have been drawing on examples from the various space programs in the world, perhaps because the relationships this reveals between space and military research are uncomfortably close.
Who benefits from robot armies must seem so obvious to many fortunate enough to live in countries not currently at war or under continuous attack by various militaries under a whole range of rationales. The answer is simple. "Our" soldiers, who won't have to put themselves in as much danger in an already dangerous profession because they can use robots like the one in the picture to check and detonate explosive devices, deliver necessary supplies in periods of intensive firefighting, or carry gear over difficult terrain. That all sounds pretty good; nobody likes the idea of soldiers being gravely injured or killed if there is some way to avoid that. In fact, it sounds easy, commonsensical, something that "everybody knows." When I notice an idea or answer has that quality, that marks it as something that deserves further thought. Especially in this case, because it seems to me that the best way of all to keep soldiers safe is not to go to war in the first place.
The origins of the term "robot" are reasonably well-known today, so here I will give only a précis. The term was coined from the czech robota, meaning "forced labour," and entered "western" culture via Karel Čapek's play "Rossum's Universal Robots." Efforts to soften the term in english have been ongoing, with more recent meanings referring to a machine that can carry out a complex series of actions under the control of computer programs. The "forced" part of the labour has dropped out because the robots are not sentient. With this in mind, the reason the iot devices are slipping under the "robot armies" rubric is no longer so mysterious. Robots tend to be most useful and most powerful at scale, by being combined together in numbers. These numbers may be relatively homogeneous, as in the case of types of iot devices that all share the same vulnerability, or, say, a pack train of robots that carry luggage. Or they may be fairly diverse, as in factories that employ sets of robots to carry out certain tasks in different sections of the building. A major growing concern about robots is that they don't just help people keep safe; in fact they have primarily been applied to getting rid of human labour as a cost cutting measure. Then there is the issue of what it means to have a large number of centrally controlled robots, whether within a factory or as the envisioned "robot armies."
The countries driving hardest to develop robots in military and civilian applications, though, are those often labelled "the west." To me this raises many questions about potential relationships between this increasing military and corporate interest and the slowly growing hysteria of male people who think they are white about "falling birthrates," together with the deeply embedded racism against anyone they consider racialized, which is closing the door on immigration as a solution to this "problem." On a more abstract or pop culture level, I wonder a lot about the insistence on framing "robot armies" as things that will become sentient and determined to wipe out humanity. How many people appreciate that this is rapidly becoming a displacement of anxiety about the now not merely contemptuous but overtly violent and hostile stance of the "1%" – the ones who mainly own and guide the most dangerous robots – against everyone else?
Then there is the whole line of thought that runs if a person does not have some sort of job to do that produces something in a capitalist economy, that person is useless. Since world war ii is a perennial generator of books, movies, and other paraphernalia that makes various media companies money, I hope that one of the most overt and heinous applications of this idea is not a mystery to any readers. What may be surprising instead is the notion that this idea is not one held only by extremists. It is so common and so ingrained, that anyone who suffers a period of unemployment frequently also suffers a terrible loss of self-esteem and sense of purpose, and may even find themselves struggling with depression. It is still widely considered okay to treat anyone who is unemployed with the utmost contempt, especially if they are also homeless. (The homeless get treated like shit even when they're employed, which is most of the time.) Many people associate unemployment therefore with a serious loss of self, and a sense that without a job they lose their right to live full lives. They are hardly being "luddites" when they wonder skeptically and uncomfortably cui bono from the robot armies. (Top)
Anonymity is Not the Problem (2017-02-17)
Last year I wrote a series of three pieces all about the "Trouble with the Web" (1 | 2 | 3), from my idiosyncratic spot in one of its quieter areas. A reader going through those pieces might be surprised by what is not included in them, because there are several (mis)features of how people are able to access and participate on the web that I had nothing to say about. The term "(mis)features" is not intended to be arch – some folks will feel these things are features, others that they are misfeatures and ought to have been fixed yesterday. The top four even a social media hermit like me has managed to pick up on are: peer to peer networks (still!), open wifi, open paper and preprint archives, and anonymity. Lots of people have written in detail about the first three, and somewhat fewer on anonymity. It also seems that the vast majority of people write about anonymity solely to figuratively stab it in the neck. After all, it is supposed to be the real reason that trolls have poisoned all on-line discussion and are ruining news feeds with propaganda and lies. Would that it were that simple (seriously!). It's not, and if you're worried about pervasive surveillance, you ought to be thoroughly wary of this claim.
The way the argument for the horribleness of anonymity goes is, the people who behave horribly on-line do so because being able to do so and not have the actions traced back to them is an extraordinary disinhibitor. Equipped with a login name like "B1ff," "randguy," or whatever default insulting label sites with bad attitudes towards anonymous commenters use (yes, I'm pointing at you, Techdirt), said person can fling insults, derail discussions, and even engage in concerted harassment, all in the bliss of invulnerability. I get it, this seems all too plausible, and we can find a far earlier version of the very same argument in Plato, in a snippet from the Republic often referred to as the ring of Gyges. There are a couple of critical problems with this though, even before we get into issues that may be more concrete and personal. First, for this to work, we must be quite sure humans are inherently bad and just dying for a way to do whatever they want and get away with it. This is a very common idea among people with judaeo-christian backgrounds, but overall a belief in the essential goodness or badness of humanity seems to be a faith position no matter what a person's background is. Or we need to be sure that really, all anonymous or more often pseudonymous participants are that sort of awful human, which runs us into the bad argument of guilt by association.
Here is where things get a bit more subtle, though. Just because anonymity or pseudonymity don't always correlate with horrible on-line behaviour doesn't mean nobody ever takes advantage of those states to be horrible. That seems uncontested, and can be verified by taking a sample of forums and blog comments. But correlation is not causation, as shown by the many instances of people who, under their own names or with their real names openly known alongside their pseudonyms, have blown apart and trolled the hell out of other commenters, posted something horrific, or harassed someone else. Taking correlation for causation is always very tempting, again because it is simple, and there are a few cases where it turns out to be true. Just not this one. So what is the better explanation then?
Well, Maciej Cegłowski would remind us that making it as easy as possible to jump into a conversation or group and start posting is contributing to the problem. That does make it all too easy to perform drive-bys, and there are sites that have finally given up on comments because they effectively became constant easy targets for the trainee troll brigade. It seems to me this makes good sense. Having a sensible speed bump that leads a person to make an investment in the on-line community they want to participate in is a good way to discourage those who don't want to invest, and trolls certainly don't. As Cegłowski would be the first to say, I think, this is not the end of the question.
The most common attempted solution these days for many new on-line communities with no entry speed bumps is to use some form of moderation. Moderators may be volunteer or paid, and on especially busy sites they have the unenviable task of dealing with massive volume while needing to displease everyone equally. Alas, this is basically impossible even with the best and most determined of intentions. Trolls provoke emotion, especially the negative kind; it's what they like, and it helps keep them at the centre of attention. And as we all know, when we feel attacked, it is incredibly hard for us to see a moderator as acting in an even-handed way when they don't seem to be nipping bad behaviour in the bud. Worse yet, the moderator may indeed not be stopping the troll, and we can't tell if it's because they are overwhelmed or biased. Well, unless the moderator says something unfortunate, and then the troll really sits back and giggles. Moderation does not work well once the pool of commenters gets very big and very busy.
Back in the olden days of the web, many early denizens effectively used their real names on-line, and I think it is partly their reminiscences that give the negative claims about anonymity so much pull. What they have forgotten however, is that many of them came on-line as a community, and additions to the community were incremental. In other words, that speed bump thing. Since they were part of communities, they had rules of behaviour that the community enforced together. I got my start on-line at the tail end of this period, and there were not moderators so much as list owners and similar, who tended to let things run freely unless it became clear a bad actor had become too much for the community to settle down on its own. Then they could do the things we associate with moderators now: using the ban hammer, temporary account suspensions and so on. But the best anti-troll protection was the respect community members had for one another. It wasn't ideal then either, but I think this did help people remember to do things like ask clarifying questions before hauling out the accusations and name calling. If people are quick off the mark with those, that is like a homing beacon for trolls, because they know that is just the sort of situation where they can set off a bomb with little effort. Remember, these folks don't want to work at it.
This suggests to me that one root of the trouble with on-line communities these days is how people handle conversations where not everyone agrees. Worse yet, commenters seem reluctant to respond to someone else with some version of "I think I've misunderstood. Could you clarify?" All too often right now, the response to both disagreement and misunderstanding is attack. There is such a thing as verbal violence; in fact Suzette Haden Elgin wrote a whole series of books on The Gentle Art of Verbal Self-Defense. Mere disagreement is not verbal violence though. It seems that the pressure so many feel to hurry, hurry blocks their understanding that mere text lacks inflection. We can misunderstand the other person's intention and meaning just from being in a text-only environment, let alone when we are trying to fire back an immediate response. Having said all this, I think there is an even bigger source of trouble, and that is the persistence of the claim that the internet isn't real. What happens there isn't real life – this nonsense is even repeated in the Wachowskis' early break-through hit, The Matrix. But here's the thing. On the internet, we interact with other real people. And that means the words we write are having a real impact. Too many trolls don't seem to understand this, and they are still encouraged in their ignorance. However, only games aren't real, and game playing isn't all or even most of what most people do on-line. Believe it or not, this is encouraging.
It'll take some time and effort, but this is definitely a humanly fixable problem. Taking what happens on-line seriously, and teaching all newcomers that participating in on-line communities is more like talking to friends on the phone than playing a game against the computer, is important. I suspect this is already pretty much done, and the people who disagree are a distinct, though unfortunately still influential minority. Shifting out of the drive for the constantly faster and constantly bigger is at least as important. The speed bump effect has value, and the hatred of ads is huge these days. An on-line community that is forever trying to shove as many new users in its craw as possible isn't a community, it's an advertising company. The folks in charge of ad companies don't care what a cesspool the community interactions are so long as lots of eyeballs are forced to look at the ads. That doesn't mean don't participate in the fora these businesses create, but you may have to take a more active role in making a healthy community. That means everyone helps keep the house clean instead of running to a moderator individually. More importantly, and this might be the hardest part, is going back to actually acting on the premise that we can agree to disagree, and being generous enough to start from the assumption that we've misunderstood when someone has written something that surprises or offends us. I don't mean give jerks a pass (and I know, this is tough stuff to do, it's tough for me too). I mean after taking a break to calm down, then reading again and seeing, yep, definitely a jerk, then get in there with a constructive response. Not name calling or accusations. And then stop. Don't let that jerk upgrade to troll by getting into a pissing match with them. (Top)
Visible Intersectionality? (2017-02-20)
Over the past several years, with a sharp boost last month, there has been considerable discussion of intersectionality. To the unguarded listener, the usual ways of talking about intersectionality seem uncomplicated and easy to make sense of. The term was coined by Kimberlé Williams Crenshaw in 1989, and it is a handy, if polysyllabic, term for how a person may be oppressed based on more than one aspect of their identity. I would actually reword this slightly and refer to oppression based on more than one identification, applying historian Barbara Fields' point that identification is something made by others. The identification is part of the oppression, because it is the first step. First identification as some type of person who can be oppressed is made, then unfortunately, further oppressive action follows. This is a small tweak, but it might help people out with what seems to me to be a real misunderstanding of what "intersectionality" and "being intersectional" mean. Better yet would be to stop referring to "identity" and instead seek to discuss who people are, because that can be linked to material reality that makes it harder for us to impose stereotypes. But, I digress.
Let's step back and consider the simplest possible case where "intersectional oppression" could happen. That would correspond with one person, say, an Indigenous woman, who is also a lesbian and mistaken fairly regularly for a man because her overall bodyshape is fairly androgynous. I can speak to this one from direct experience. For that case, there are several intersections of potential axes of oppression. In no particular order: racism against Indigenous people, sexism against women, lesbophobia, and hostility towards those whose appearance does not conform to expected sex stereotypes.
The identification by oppressors part comes in when a person applies a negative stereotype and then treats me badly according to that stereotype. For example, they pester me incessantly in a store because they are certain I must be about to shoplift at any moment. Or a person driving past shouts a sexist or homophobic slur on their way. Admittedly, these are minor examples that are less immediately harmful, but build up like the proverbial thousand cuts over time. The more acute and potentially dangerous sorts of things are of course being physically threatened by homophobes, or denied employment for not "performing femininity."
UPDATE 2020-08-04 - I have done some light editing for clarity here – clarity is the great victim of the current deployments and bastardizations of "identity" and "intersectionality." For a different perspective and explanation of the meaning of intersectionality, including how it is related (or not) to "inclusivity" and other such buzzwords, please see Holly Lawford-Smith's january 2019 post at medium, What intersectionality isn't. It is worth keeping an eye on this post to see if it will be affected by the ongoing medium censorship wave.
UPDATE 2020-11-25 - An excellent source to add here for the careful consideration of standpoint epistemology is Olúfémi O. Táíwò's 2020 article in The Philosopher, Being-in-the-Room Privilege: Elite Capture and Epistemic Deference.
Okay, so with that example in mind, what does intersectionality mean as applied to Feminism, then? Well, a Feminist analysis would consider and analyse all four axes and their potential compounds. In other words, an intersectional analysis wouldn't stop at noting where stereotypes imposed on women were used to rationalize oppression, it would go on to consider how stereotypes of lesbians were used and combined with them, and so on. In terms of action as opposed to analysis, things are harder, because figuring out how to act so as to challenge those knots of oppression effectively and safely is tricky. Sometimes, especially in the case of protests that depend on many people being present, just having as diverse a crowd as possible can be enough. "Diversity" tends to be the generally applied solution, although it does carry a nasty risk of tokenizing, in part because the diversity is supposed to be visible, which means getting the racialized folks out. There are other less mentioned and harder to implement actions and tactics. For the purpose of this thoughtpiece though, let's stick with this familiar example, especially in light of the commentary swirling in some circles around the recent Women's Marches.
The commentariat decried the "lack of diversity" and supposed failure to be intersectional of at least some of these marches. Some pointed to who was speaking or not, others to whether they felt they could see enough racialized faces in the crowd. I focus on these because this is where I think the misunderstanding of intersectionality comes in, or perhaps a failure to take intersectional analysis far enough. These are difficult days. Mass protest is getting riskier all the time, with the right to gather in groups without a permit under perpetual siege and the likelihood of police violence and arbitrary arrest growing steadily, especially in the case of protests against the status quo. If you are a racialized person, the police are that much more likely to target you for random arrest, violence, or profiling. If you are a woman or deemed to be in any way feminized, the likelihood that the violence will be sexualized is all too high. If you are poor, as so many women and racialized people generally are, your ability to make bail, pay fines, or replace anything you lose in a protest is probably truncated. Meanwhile, the calls for those who have privilege, whether that be race, sex, or gender-based, to get in front and push hard, because that privilege insulates them from considerable risk, have been growing.
As I already noted, these are difficult days, and challenging the status quo is more dangerous than it has been for a long time. So when I see a huge women's march or other action, and it happens to be apparently majority white women, I don't kneejerk criticize. I remember that many non-white women have far higher immediate risks to life and limb to consider, and may not be able to personally participate anyway if they are struggling to make ends meet. And then I look up the messages the women who could take part are imparting, and if I see those messages are intersectional and respectful, then that suggests to me that some portion of the white women are taking up the challenge of using their privilege to support genuine efforts to oppose more than just their own oppression. They may not be absolutely perfect at it, but who is at any time, let alone at what was, for many of them, the first protest they had ever taken part in.
Intersectional oppression isn't just about what a person subject to it may suffer just day to day. It's also about what they may actually be able to do, whether that be which jobs are open to them or which means of challenging and overcoming oppression may be feasible for them at a given time and place. Making a given time and place of action open to as wide a range of people as possible is not easy, and may be one of the toughest challenges that action organizers face in a time of expanding surveillance and militarized police. I don't think this gives action organizers and participants a pass, but it certainly doesn't help them or anybody else if we have misunderstood intersectionality or failed to consider it deeply enough. (Top)
Sure RFID Tags Are Convenient (2017-02-13)
Thinking back, it's hard to say when I first heard of RFID tags. I had actually bumped into them and even used them long before they were objects with a basic definition of what they were and how they were used all sorted out. The fad of having one embedded under the skin so as not to need to carry identification struck me as so profoundly foolish that I had to double-check the story with better documented sources. If the RFID tag can be scanned by an uninterested bouncer from a short distance away, well, so can anybody else with the right equipment. That RFID tags are not a panacea for the problems the general public actually has is shown by the sudden proliferation of wallets, cases, and bags that block RFID scanners. The chips weren't originally supposed to be about tracking us, of course. No, the big sell was that businesses generally could track all their equipment perfectly and stores could end shoplifting forever. This has not happened, though we can rest assured that RFID tags have been shoved into almost everything we can buy, and the panting desire of the surveillance industry to have them literally embedded in every manufactured thing in existence is more than a little obscene.
But this is mere ranting, right? After all, most RFID tags are attached to the packaging, which we all throw away. They can't be read at extensive distances, real tracking RFID devices are more substantial and harder to hide. Nothing to worry about, right?
Suppose that you are a person of colour. Suppose further that you go to a clothing store, to buy a jacket. This is quite a nice thing to be able to do. The store, as they all do now, attaches RFID tags to all of its merchandise. But it does so in a different way than usual. In most clothing stores, the tag is part of a large gizmo that pins to the garment and includes a dye capsule. A shoplifter would find the gizmo sets off a detector at the store entrance and exit, and if they try to get it off without the right tool, the dye will be released, spoiling the garment and probably getting all over their hands. They're weird, bulky things that can make trying stuff on awkward sometimes, but everybody knows where they stand. At this store however, all the RFID tags are sewn into the garments. Which should be no big deal, the cashier assures you. They're attached to another tag sewn into a seam like the washing instructions tag, and you are permitted to tear both off once the garment is purchased. The RFID tag is sewn into a pocket. This sounds fair enough. You do want to take the RFID tag off, after all, because otherwise it will set off other stores' detectors. You can only be sure the RFID tag desensitization works for that store's own detectors, so that you can walk out without setting off their alarm. Those of you who still borrow library books and have them from more than one library system know this well.
You get your new jacket home, and hunt dutifully in the pockets for the RFID tag. Oddly enough, nothing is in either pocket that you can feel. The cashier dragged the whole thing several times over the desensitizing pad, so it's not quite clear what's going on. Maybe there wasn't one? Well, it's a puzzle. No sense worrying about it. Until the next day, when while running your errands, you set off every detector in every store you enter and often ones you walk past if you are too close to their scanner units. You get to enjoy cashiers who insist on checking your purchases, your receipts, and then watching while you walk in and out without your jacket which clearly has nothing in the pockets because you and they have checked. This is bad news.
So you get the damnable jacket home, and call the store you bought it from. Where the hell is this RFID tag? What does it look like? Seems they don't know, but it's in a seam. So you have a seat, and proceed to systematically pinch and squeeze every seam, and there it is at last. It's one of those older style RFID tags for merchandise, the ones that are roughly the size of a stick of gum. Oh, and guess what. It is sewn into the lining of your jacket. Which you need. So, whether brave or desperate, you pull apart the seam and remove the RFID tag. This means you can't return the jacket for a refund, but that would be even more embarrassing anyway. At least it's a seam inside the jacket. Maybe you call the store again to give them feedback about how much harassment you get as a person of colour in a store to begin with, and how much worse it is when dealing with a hidden RFID tag that can't be universally desensitized on the jacket you bought.
I hope the store manager's response is better than a shrug. (Top)
Pondering Identity and Identification (2017-02-04)
At the turn of the twenty-first century, a new-old idea began to take fierce hold of many people across a whole range of jobs, politics, and places. It has been taken up so widely that it is a bit dizzying, especially since this new-old idea is a sort of abstract noun that has an at best only loosely agreed on definition. The idea I mean is that of "identity." It is both new and old because of course, the word itself is far older than the twenty-first century. According to my OED its first recorded occurrence dates to the 16th century in english, and it originally meant basically what its latin original did: the quality of being the same. What it is meant to describe in a person is that quality of being the same despite the myriad changes we all experience as we grow up, age, and eventually die. It's a paradoxical thing in its own way, each of us literally embodying the real life prototype of the Ship of Theseus. It seems that what happened at the turn of the twenty-first century was a shift in how many people understood how identity is created and sustained.
UPDATE 2018-11-13 - I have stumbled upon a brilliant discussion of "identity" and its philosophical and practical meanings via GenderTrender by Jane Clare Jones. Currently available is Identity, Sovereignty, and Narcissism. Part One of her discussion was released this october, and fingers crossed the second part will be up before the end of the year. Her reiteration of the relationship aspects of "identity" and that identities are concepts that we use to talk about and describe reality, not reality itself, is a more than timely reminder. Jones' discussion also provides a great entry point to Plato's theory of forms.
I didn't get into this topic by design so much as by accident. Repeated references to "national identity" had left me completely baffled, because nobody seemed able to sort out what they meant by the concept. Perhaps this shouldn't be surprising, since nations in the modern sense are recent inventions indeed. Eventually I found one writer who suggested that a "national identity" is in fact a communally created thing which many national governments hope to effectively dictate by persuading every person living under their rule to subscribe to and embody the specific elements they define. Hence the concern about how or if children are taught in public or private, parochial or secular schools. In any case, the "sameness" aspect in the case of a "national identity" insofar as such a thing is even possible, is meant to be a shared image of what an ideal citizen of the nation would be like, what they would think and believe, what they would choose to do, and the desire to somehow become that suite of characteristics. All too often this has led to the false idea that if only the people were rendered more similar in certain ways to begin with, then the "national identity" would be accepted and enacted more easily.
Before governments of various types began trying to impose or shape "national" identities, it seems that people got their identities in a more localized sort of way. They grew up within families that were part of larger communities, and their sense of themselves grew out of their participation in a network of friends and relatives and the culture they all shared. This was no passive process, since depending on the time and place a given person might be encouraged or discouraged from all manner of action, from what to wear to what they did for a living. Although it was embedded in the script of the original series of Doctor Who, the Doctor's declaration that "a man[sic] is the sum of his memories" seems gently profound and quite reasonable. With different memories and different experiences, anyone would be a different person. With these things in mind, it seems clear that "identity" as we experience it is a state that we co-create with other people and the land we live on. The thing that seems to have changed at the turn of the century is the advent of a sort of "consumer view" of identity.
By "consumer view" I mean the idea that a person can not merely develop and live an identity over time without necessarily imposing a plan on the process, but that an identity can and maybe even should be selected as a predefined unit. It's a bit like the "national identity" idea, but individualized, and only so far. The individualized part is the idea that a single person may choose for themselves. The "identity" is a ready made thing, clearly defined and available for taking and embodying. This may sound absurd, but the key to making it work is to conflate a connected set of stereotypes with an identity. Then taking on that identity means taking on those stereotypes. This is not new, though the current iteration is. Note how it slips right over into identification, which historian Barbara Fields notes is something made by others, not the individual.
An early example of a person engaged in just such a practice was a man whose real name was Archibald Belaney, better known to the public as Grey Owl. He was an englishman who made an entire career out of pretending to be an "Indian," that is, he played to the stereotypes of "the white man's Indian" that people in canada who think they are white had at the turn of the twentieth century. He became a famous conservationist in his time, though it remains a puzzle why he never believed he could carry out that important work without creating a fictional origin story for himself and denying that he was an englishman who immigrated to canada in 1906. The exposure of his real origins did serious damage to the credibility of his work, even though a person can be a good conservationist whether or not they are Indigenous to the place they live.
A more recent and infamous example is that of Rachel Dolezal, whose efforts to pass as a black woman were uncovered in 2015. Barbara Fields noted in a recent interview on the Jacobin that, "Anybody can be black – black is defined as any known or visible ancestry – or 'one drop of blood.' So it's really not based on what you look like, even if you go to the trouble of tanning and wearing a wig and whatnot." This is an instructive point, because it reiterates that "identity" is socially defined, no one gets to define their personal identity in splendid isolation. Fields continues, "Most Afro Americans don't have any control over identification. Their identity, how they define themselves, how they perceive themselves, can be overruled by that identification." Here is where things get tricky. A person may have an identity imposed on them, regardless of their own sense of themselves, if they are part of or merely perceived to be part of a group, especially an oppressed one. Yet the two examples I have just given are of people who in origin would usually be called "white," who opted to try taking on an identity as a member of an oppressed group. This is something more and more people who might have confidently thought of themselves as white are trying out today.
Here is where the new-old idea of being able to select an identity that seems more desirable for whatever reason collides with other social facts. In this case, with colonialism and its ongoing relevance. Aileen Moreton-Robinson has developed the concept of "the white possessive," the sense of ownership colonial whites are taught to have from their earliest moments. It's the white possessive that leads to such ridiculous statements as, "Everyone knows that our First Nations have full civil rights now." This isn't "our" in the sense of, "Oh yes, those are our neighbours! Thank you for pointing them out, let's go say hello." Truth be told, even in that example "our" doesn't seem appropriate to me. No First Nation belongs to canada, let alone the number of people I have heard or read referring to them as "ours." If this sounds a bit silly to you, try saying, "canada's white people are famous for their love of hockey" and notice how incongruous it sounds. Then try this one, "canada's First Nations invented several sports still played today, including lacrosse and canoe racing." The white possessive also allows people who think they are white to think they have a right to take over what they understand to be a First Nation, Afro-american, or any other identity other than their own.
This is no game, as author Joseph Boyden has learned to his cost. The criticism he has received, and that Rachel Dolezal has received, has far less to do with what is usually emphasized in the media, the various awards and jobs they may have won or accessed via their false claims. Well, I should make this clearer: the criticism they have received from the oppressed communities they attempted to appropriate an identity from has less to do with that. The bigger issue in fact is attempting to take on an identity, especially one from an oppressed group, while neatly sidestepping such common experiences of that group as systemic racism, poverty, or being subjected to identification by others, all from the earliest days of their lives. To my knowledge, it is not considered acceptable to claim to be a war veteran without actually having gone to war, because to do so is deeply disrespectful to both the horrors and the joys of those who have. I think that there is a strong parallel to the case of claiming to be a member of one of the oppressed groups so often selected by "white" people who would rather be something else. It is not coincidental that those people are uninterested in claiming an identity with the group of whites who are oppressed because of their poverty.
Identity is a hard thing, because we have to live with it and through it. We are not able to be the sole arbiters of our identity, because it isn't only what we choose to make of it. It is inevitably the product of our own experiences, including the ones we may most wish we had never had, and the genuine desire to have experiences that may never be accessible to us. And for better or worse, we don't experience or develop alone, but within a larger family and community that inflects how we perceive ourselves and how we fit into our world. The "consumer view" of identity can't cope with such complexity. (Top)
Writing and Plotting (2017-01-28)
Like many writers, on occasion I find myself wrestling with plotting. There is a generally accepted three-act plot in "western culture" for which we may read "europe and anywhere it has colonized successfully." Most people of a certain age like myself whose first language is english will have gotten their first conscious introduction to this plot structure in middle school english class, where we learned this structure was used by Shakespeare and it was first described by Aristotle. Then we promptly went off and read a play that actually had five acts, like Romeo and Juliet or Hamlet, Prince of Denmark. Then we began to suspect that we had misunderstood something, because all of Shakespeare's plays seemed to have five acts. (If you'd like to check that, OpenSourceShakespeare has you covered.) Of course we had, understandably, because the key is not the number of acts necessarily, but the overall story, which falls into three parts: problem, conflict, resolution. This is a familiar, serviceable plot structure. It's clear how it works, though simple doesn't mean easy. After all, the trick is to find a way to manage this structure in a way that is interesting and strikes the reader as plausible within the story's universe.
Except there are times that I can't help but wonder if this structure can be deceiving, especially when it creeps into other genres of writing, as it must. It does so even in non-fiction, that both lauded and maligned category, where somehow the very same three parts turn up. Set up the problem or question being asked, lay out the evidence, then close with an interpretation or final argument. Even a short piece like this one will have much the same shape. Familiar and memorable, which reflects the form's deeper origins in oral performance, and in the case of essays a very specific type of oral performance, political speeches. Ancient ones, I should add. Think Demosthenes and Cicero. Day to day life doesn't really work like this. I have internalized the convention so deeply though that I can easily transform any event into the expected three part shape, and in this I am no more than ordinary.
But if real life doesn't actually have that shape, could this internalized way of representing events actually lead us to misunderstand them? Once the pattern is activated, it is a habit to seek the next part in the sequence until all the desired parts are in place, creating a coherent narrative. I suspect this may be why to this day hurricanes are still given human names, and referred to as if they are malevolent, mindful entities out to destroy as much and frighten as many as possible. On the other hand, perhaps for many of the people unwilling to accept the reality of climate change, that lack of narrative coherence contributes to their skepticism. If they aren't experiencing anything that leads them to suspect something is awry with the climate, then they can't see a problem. Never mind that "the climate" refers in effect to the whole Earth, and the connection between local, short-term weather and planet-wide, long-term climate is not necessarily an easy link. Without a problem they recognize to start the ball rolling, there is no unified narrative, and to their thinking, no convincing argument. That said, it is also true that anyone may refuse to accept an identified problem and resist the attendant narrative, no matter how sensible and important it may be.
For all the folks who love to watch anime and read manga, you already know at least one other way of structuring a plot very well: kishōtenketsu. I learned about this form via an especially fruitful fall down a search engine rabbit hole that took me to the website of the Still Eating Oranges art collective. They have two excellent articles describing kishōtenketsu, The Significance of Plot Without Conflict and its follow up, Plot Structure All the Way Down. Kishōtenketsu is a four part or act plot, summarized by Still Eating Oranges as introduction, development, twist, and reconciliation. This may seem reminiscent of a well-constructed Agatha Christie mystery, but this is no more than a family resemblance. Christie is teasing us by misdirection and sleight of hand, while the twist in kishōtenketsu is a true surprise or apparent non-sequitur. The reader or watcher is still given the pleasurable task of wondering how the story is going to end, but in the kishōtenketsu case the question is how the elements are going to be brought together into a sensible whole. The contrast to a mystery or any sort of action story is striking: resolution of the conflict dissolves the story. The cast of characters is broken up, the original circumstances erased. It doesn't have to work that way, but it makes sense that it does in a western-style plot. "Conflict" is not a word with positive connotations, it is something that is supposed to go away. If my understanding of kishōtenketsu is correct, the resolution could make something go away, or not, or maybe a bit of both.
I suspect there are more ways to plot a story than these two, many ways to persuade the reader to keep asking "What will happen next? How is the author going to get themselves – and us – out of this pickle?" (Top)
If a Non-White Person Experiences Something But No White Person Was There, Did It Happen? (2017-01-17)
Most readers will immediately recognize the question the title of this thoughtpiece riffs on, which the OUP Blog renders, "If a tree falls in the forest, and there's nobody around to hear, does it make a sound?" Almost as many people describe it as a philosophical brainteaser as describe it as a rather insulting joke. Admittedly, the tree version is an easy starting point for younger students learning about philosophy for the first time in grade school, which adults can forget too easily. When it comes to the title of this thoughtpiece however, I am in deadly earnest. I mean that literally. For too many racialized people – as opposed to self-racialized people who think they are white – this is a real life and death question. It would be too many if this was the case for one racialized person, but the numbers are far higher than that.
What brought this question into focus for me was the mainstream media response to Amy Goodman being threatened with arrest and various charges for providing media coverage of the ongoing protective action being led by the Standing Rock Dakota Nation against the Dakota access pipeline. DemocracyNow is not part of the mainstream media and did not join in the weird chorus I am about to describe. It has played a significant role in bringing the peaceful actions of the Water Protectors and the attacks on them by hired mercenaries and various levels of police force to international attention, and for that DemocracyNow should indeed be lauded. The mainstream outlets belatedly piled on and then, on learning that Goodman was facing charges, promptly began blaring, "Amy Goodman demonstrates the danger of opposing powerful oil and gas pipeline interests." This has since passed out of the news cycle because the charges have been dropped. But let's pause for a moment.
Let's have a quick look at the timeline of events roughly up to the chorus point, according to DemocracyNow.
So I ask you. Do you seriously think that the filmed and written evidence of a private company attacking Indigenous people with a private army, dogs, and pepper spray does not show how dangerous it is to oppose powerful oil and gas pipeline interests? That the danger doesn't count as real until an ostensibly white person who is also a member of the press is faced with charges? Since the events involving Goodman directly, the Water Protectors have been shot at with rubber bullets and blasted with fire hoses in sub-zero temperatures, all while undertaking peaceful opposition to the pipeline and holding their ground because they aren't sure what the next u.s. federal administration will do. Is this not real evidence already? Here's another example.
There are thousands of Missing and Murdered Indigenous Women and Girls in canada right now. (The likelihood this isn't also true for the u.s. is all too small.) When the first estimated numbers of affected women and girls were provided by the determined families and their allies furiously decrying the race and sex-based nature of the violence and how it was being ignored, they were told to stop exaggerating. The press told them. Provincial and federal police told them. Then the federal police began checking their own files. A police force made up predominantly of people who think they are white. And the numbers they found were even bigger. Not that this has made much difference as yet, because it has taken nearly eight years to get an inquiry. The inquiry is being carefully designed to block analysis of the combined race and sex-based factors that mean every Indigenous girl and woman in canada has a far greater chance of "going missing" which can mean being trafficked or murdered, suffering rape or general physical assault, or being dropped off on a winter "starlight tour" to freeze to death by the local police. The statistics have now been calculated and written up by non-Indigenous groups, so now it counts as "a problem." Not the same sort of problem it is for the Indigenous women and girls, by the way.
So, does an experience have to be of or witnessed by a white person, white both in their own mind and according to other whites, to be considered real? (Top)
As we enter a new year, I find myself reflecting on the latest experiment in papering over past and present colonialism, a thing called "reconciliation" with Indigenous peoples here in the northern part of what is now labelled as canada on the maps. The Truth and Reconciliation Commission of Canada (TRC), which began in 2009 and released its report and "calls to action" in 2015, heralded an outpouring in mainstream canadian media. "Reconciliation," "improving the relationship," et cetera, ad nauseam. I have been reading and listening to this stuff for over a year. Indigenous writers and thinkers whom I respect have struggled mightily to find a good path forward in this mess of media buzzwords. Ryan McMahon dedicated an entire season of his podcast Red Man Laughing to "reconciliation," for crying out loud. He seemed to come around to the idea a bit. A media blitz plus lots of federal feel good press releases set my reactions to high skeptical, so I have been watching, and waiting, and listening, and reading. Doing my best to reserve judgement. Well. It has been a long period of research, and here is the conclusion I have come to.
There is no reconciliation. That's right, folks who think you are white (many thanks to James Baldwin), there is no reconciliation. There will be no reconciliation. Period.
UPDATE 2017-09-03 - In First Person Plural (UBC Press, 2011), Sophie McCall describes reconciliation in a way that is especially germane here. "While reconciliation prioritizes the expiation of the colonizer's sense of guilt, it places the onus upon the colonized to end longstanding conflicts." (112)
UPDATE 2019-07-10 - I can hardly put it more clearly than the great Sto:Lo Orator Lee Maracle, who notes in My Conversations With Canadians (Book Thug, Toronto 2017): "Removal was the object of residential school, and it was not for purposes of assimilation, and it was a crime. It was done to destroy the language, culture, and sensibility of Indigenous people. This is genocide. No academic or English language or mathematics or science courses were taught in the first one hundred years of those schools. Those would be the sort of courses that would justify calling it an assimilation program. Instead only the destruction of Indigenous language and knowledge was offered. Children worked and recited scripture when they were not being beaten, starved, or raped. When are rape and hunger part of an assimilation program? Only when it applies to us. Elsewhere in the world, it is genocide."
How can this be? Well, let's start with what the word "reconciliation" actually means. According to my bog standard OED, it refers to restoration of friendly relations between parties, or making "one view or belief compatible with another." In colonial states, there have been and are no friendly relations. What there have been and are, are Indigenous people engaging in resistance and damage control while non-Indigenous people continue pursuing their displacement and dispossession by every means possible. This includes non-Indigenous people who may never have consciously intended to engage in such behaviour in their lives, because this is something that comes from a systemic, not an individual, basis. At no time has there been any widespread effort on the part of non-Indigenous people, most especially those who think they are white, to make their own views and beliefs compatible with the views and beliefs of Indigenous people, even the basic ones like the right of Indigenous people to exist and have a future. The definition of "reconciliation" in my dictionary also refers to "making financial accounts consistent." When the money stolen from any Indigenous nation just in canada is returned with interest and a full and complete acknowledgement that it is not possible for Indigenous peoples to "surrender" their lands as a prelude to putting the land situation right and upholding treaties by the people who think they are white – let alone the other non-Indigenous folks who may or may not have realized what a mess they were getting into when they came here – then maybe the word might at least hand wave at that definition.
I appreciate that many readers might find what I'm writing here inflammatory and unfair. How dare I suggest that there is an ongoing effort to destroy Indigenous people, after all, that's in the past. Non-Indigenous people living in canada today had nothing to do with those horrible actions, and people know those things are unacceptable and no one does that horrible stuff anymore.
You may not believe me, but I sincerely wish this was true. I wish that all the horrors and the systemic racism were truly in the past, with no connections to today, and that no one now is complicit with colonialism because canada isn't a colonial state anymore. Many people, especially those who think they are white, would fiercely disagree here. After all, they can point out that today there are no more residential schools, no pass system, it isn't illegal to practice an Indigenous religion, Indigenous people aren't being rounded up and forced onto isolated reserves and starved. And besides, according to more than one of my acquaintances who think they are white, look at all the good things the european invaders brought. They have a hard time coming up with specifics, referring mainly to "advanced technology." As if there was no advanced technology on Turtle Island before europeans came along. (Hint: There most certainly was.) I wish they were right, too. Except they are wrong. If they were right, none of the following things would be part of all of our lives today.
So no, there is no reconciliation, and there will be no reconciliation, because these facts, and many, many more, show that on a systemic level, there are no friendly relations. On a systemic level, there is no effort going on to admit and accept that Indigenous people are not going anywhere, they are staying Indigenous, and colonialism is unacceptable and must end. On a systemic level, there has been and is no effort to "balance accounts." This reflects the mainstream definition of the term. It also reflects the real life facts. The trouble is systemic, so even though I do indeed have good friends who are not Indigenous, even friends who think they are white, and yes, they are good people, those relationships aren't reconciliation. This is still true despite the growing resistance, which is wonderful to see, by non-Indigenous people to colonialism and the systemic practices that oppress Indigenous people and are tweaked slightly every day for use against others who have been racialized or feminized relative to the "mainstream." To be clear, this resistance is effective, it's just not big enough to bring down the system quite yet.
What there is, is the possibility that people who think they are white may at last establish friendly and respectful relations with Indigenous peoples and nations within canada. Which means actual decolonization and the people who think they are white taking up the difficult and critical task of dismantling the systems of oppression, not just of Indigenous people, but also of women, of other racialized groups, and gender defiant people living within canada. If those systems are allowed to persist in any form, they will continue to poison the well for everyone. The question then, is whether the people who think they are white will pursue that opportunity, which begins with rejecting the fiction of whiteness and acting on that. I know which answer I'm hoping for. (Top)
There's Surrealism, and Then There's Grammar (2016-11-24)
One description of surrealism is that it is an attempt to capture the strange state we experience while dreaming, when all manner of random juxtapositions make sense, or some strange combination of objects is, within the dreamscape, gut-bustingly funny or viscerally disgusting. Then again, dreams can be startlingly coherent or even lucid, although they may still include features that are absurd on consideration after we wake up. The latest research connects dreaming with how we process the memory of our day to day experience and its associated emotions, and that in our dreams we may be sorting out and practicing how to react to situations in our waking lives. Consider all of that, and the fact that we still don't understand very much about dreaming, though the evidence is that not dreaming is detrimental to our mental health. You may be wondering how I could possibly have gotten onto this topic, considering today's feature photo is an oldie but a goodie from an extended trip in eastern canada. But to me this little sign was more than a little surreal at the time, not because of its very sensible directions, but because of the grammar issue in the french text that someone simply could not abide.
Just to be clear, I am not mocking the mysterious editor of the sign. There's something about travelling, probably not getting quite enough sleep, and coping with places that are not home that can make us far more sensitive to all sorts of things we might have ignored or merely given staff a heads-up about at home. What gave me pause was how surreal grammar is, and how none of us are indifferent to it. It is easy to find books that aren't too technical that reflect a remarkable range of views on it, and nobody is immune to developing their own views. The strange career of the word "concerning" – it is generally a verbal form that seems to be developing a second adjectival use – in north american english has certainly made me aware that a change in usage can be quite surprising to the ear. The surprise comes not so much from whether the usage is wrong or right (language change is just change), but from the fact that I still understood what the other person meant. On the other hand, I have a colleague who could be reduced to paroxysms of outrage on hearing someone else using "concerning" in its new guise, while not realizing she had used the word in that way herself. One more data point for the evidence that we pick up changes in word usage and grammar unconsciously, and may use them unconsciously, even if we feel sure the usage is wrong when we turn our minds to it. The surrealness of grammar is a real thing though, not just something apparent related to jet lag and canada's historically fraught language politics.
Every language has a grammar, including the ones used to program computers and that rarely recognized member of the class, mathematics. If languages didn't have grammar, and the grammar of each language wasn't fairly consistent, then they wouldn't be helpful tools for communicating with. Notice I said "fairly consistent" and this is because no language is perfectly consistent. If that state ever happens in the real world of spoken languages, it probably only holds for a vanishingly small period, even for the groups of protestants who migrated to the present day united states, determined to stick to the vocabulary used in their chosen edition of the christian bible. How frustrating it must have been to find their language was inexorably changing anyway.
At least among its native speakers, english has a reputation for inconsistency, although that may actually be the wrong word for the situation. In many ways, english has become a patchwork language, chock-full of borrowed and constructed words at varying levels of naturalization. Consider the difference between a word like "pyjamas," which can be traced back to persian, versus "schadenfreude" from german. The former has become rather ordinary, we hardly notice it. Schadenfreude, even though english is a germanic language as well, not so much. Quite apart from borrowing, english has a range of irregular verbs and plurals that are holdouts from different dialects and earlier times. For example, the plural of "foot" is "feet" because changing the vowel used to be a common way to make plurals. If you want to find a verb that is irregular in just about any language provided it exists, check out "to be" or "to go." In english, "to be" especially is a sort of grab bag.
According to my OED, "be" itself is derived from old english bēon, and that verb was already "defective," that is, its conjugation was made up of forms from three other verbs (the other two are wesan and a verb with a root something like *isi-). "Am" is from another verb again from a time before germanic languages were even a thing, and apparently nobody knows where "are" came from. That said, it's tempting to see it as somehow made up by analogy to "were." Who knows, maybe it was. Pretty much the only thing that holds the verb "to be" together is that english speakers have all learned to use it and conjugate it in all its rag bag, pasted together glory. Compared to "to be," "to go" is ridiculously simple. Most of its forms come from old english gān, and "went" is from another old english word, "wendan" which means something like "turn around and go back." For the truly surreal that we actually use every day all the time if we speak english, I think "to be" wins hands down.
The verb "to be" is also mightily irregular in french, and it is no coincidence that it is a form of that very verb that started this whole thing, including the helpful sign edit illustrated above. I should also add that if you are familiar with french and read the sign again, you'll notice something else surreal about it: try translating literally what the french said before the correction was added, and compare it to the english. You'll recognize the idiomatic english expression behind the incorrect french right away, even though that isn't the wording of the english on the sign itself. (Top)
Existential Questions (2016-10-30)
As a general rule, riding the bus isn't too exciting, which I figure is a good thing, since most potential excitement affecting a bus and its passengers is liable to endanger life and limb. Over the course of my various travels I have gotten acquainted with a fair number of transit systems, from those including computer-driven trains to those creaking along as best they can with poor funding and cancerous urban sprawl to cope with. The state of the buses and train cars can provide an uncanny mirror of the city itself. In one city, on one of the busiest transit routes, the buses regularly used on the route seemed to have inspired special ire in some passengers. Nowhere else have I seen so many upholstered seats torn up from one end of the vehicle to the other, together with evidence of passengers attempting to burn holes in any accessible plexiglass. Either somehow nobody noticed that particular madness going on, or cigarette lighters today are on the edge of flame thrower status. That said, such damage is impressively uncommon, even if the whole bus isn't riddled with video and voice recorders as they are now in so many cities. Much more common is that remarkable trigger of civic fury, graffiti. The funny thing about bus graffiti though, is that so often it seems to reflect a desperate, stymied need to communicate.
To be clear, the presence of graffiti indicates complex social issues that often come down to a battle over the uses of public space. By public space I mean space where the members of a given community move around to carry out their errands, meet friends, get to work, and so on. The boundary between this sort of space and privately controlled spaces that may still be open to the public is not as tidy as the capitalist drive for enclosure demands. That said, I suspect much of the outrage about graffiti has to do with a sense of territoriality quite apart from what are supposedly neutral considerations like property values and crime rates.
The picture of the unusual bus graffiti example I bumped into a while ago certainly doesn't fall into the familiar categories we may be used to putting wall graffiti in. It isn't a tag, or any of a range of profane scrawls threatening assault or impugning the character of someone most of us will never know. Probably this bus was simply one frequented by many post-secondary students, which can lead to atypical existential reflections like this one, especially considering the discouraging future many of those students are facing. And by post-secondary students, I don't mean just university attendees either – people preparing to enter the trades are asking such non-trivial questions too. This seems a logical outcome since the folks like canada's current finance minister have decided that "job churn" is inevitable and so everyone else should "just get used to it." Quite the comment from someone who has guaranteed steady work and a lifelong pension in a government that seems likely to be as quick to reduce access to such things for everyone else besides its current members as its predecessor.
It is no wonder that people are asking such questions as this one, especially if they happen to be too young to have had a chance to take part in the decision making leading to the current situation. The minister's comments strike me as oddly fatalistic as well, a sort of shoulder shrug in the face of a situation that is a product of human choices, not weather or geology. That with a side of, "And anyway, it's not my problem." It's funny how the same people who insist that any problem they can see a way to throw machine technology at is solvable, no matter how farfetched the solution may seem, will throw up their hands, refuse to listen, and stomp off home as soon as the problem at hand is socially complex. As any of us can easily see by checking a newsfeed, people struggling to be heard are doing considerably more than somehow writing graffiti in buses, and they certainly aren't giving up on solving socially complex problems. (Top)
What if There is Nowhere Else to Go? (2016-10-02)
At least in the north american tech media, the topic of potential "manned" missions to Mars has been on a steady boil for the past several years, with not quite periodic sharp peaks. Sometimes the peaks are from the latest expensive sort of success or explosion for today's crop of private companies endeavouring to take over access to space from the military, which so far as I can tell is primarily an american phenomenon due to the cost. Or else they come from the latest project intended to mimic travelling to Mars, since this will take a non-trivial amount of time, although it seems that at least 1 out of 3 of these is usually a hoax. There are naysayers of course, who point out how the Moon is much closer, so if there is going to be "space colonization" that's the place to start. The trouble with the Moon is at least threefold for the folks hoping at the very least to send missions to Mars. The Moon is close by, relatively. Men have already been there, literally (this was in the days when women astronauts were considered a joke, not a possibility). No matter how hard anybody looks, the Moon has no water to write home about.
But look at Mars, which you literally can by glancing at my picture taken with my point and shoot camera some time ago. It is far away, though not too far, we're obviously not talking light years here. No man has made it there yet, which gets the mostly male billionaires more interested, especially if they watched the original Star Trek with its (in)famous split infinitive marked introductory voiceover. And there is just enough tantalizing evidence for the possibility of water ice preserved under dirt and rocks against the near vacuum of space to make the whole venture look plausible in some corners.
Gene Roddenberry, creator of the original Star Trek, passed away in 1991. Regardless of the plausibility or not of his vision of the possible future, as a television writer he got one detail spot on from the beginning for his original audience. The framing narrative, even if never overtly spoken, wasn't there just for the suits when he was pitching his idea: "wagon train to the stars, plucky colonists on the frontier." "Frontier" is a heavily loaded word in the united states legendarium especially, yet it also has deep resonances in most other colonizer cultures on Earth. Many people no doubt consider this a good thing, since supposedly "colonizing new lands" was a mostly noble endeavour with some unfortunate genocide spicing things up. By this logic, struggling to get from Earth to a place lacking the necessities of life and finding a way to live there is somehow heroic. The tie to colonizer rationalizations for invading and stealing the homes and very lives of other people should be making sharp calls on our attention, because again, the tie is not coincidental. The growing interest and excitement about "going to Mars" is not about technology, nobility, or heroism. It is about land, social inequality, and the capitalist hope of somehow finding a way to continue pretending they can always have more money just a little longer.
Social inequality based on a few people maintaining control over most of the land everyone needs to live on, with, and from by means of total violence is not new. The evidence for the length of time it has existed puts it at between 10 000 and 8 000 years old, and at no time, despite the best hopes of today's apologists for colonialism, was it ever universal or inevitable. It remains neither of those things, held together by the slender thread of total and unremitting violence. By nature, social inequality is unstable, precisely because eventually most people are being pushed into poverty so desperate they lose their fear of suffering or using violence themselves. It gets to the point they have nothing to lose, and there are a lot more of them than of the select group who claims to own everything. What is the select group to do, when it looks like their war against everybody else will stop going in their favour? Their favoured solution for a very long time has fundamentally been what is politely labelled colonialism: stealing somebody else's land. It doesn't take much time with your favourite search engine to find references to "the colonies" as an excellent means to take care of the "surplus population." Said surplus includes those who become "colonists," the soldiers expected to kill off the opposition, and those who manage to stay home but make a basic living via work supplying the soldiers and the colonists.
Right now, we are living through a period in which social inequality is so incredibly out of control that its major economic support system, land exploitation, is collapsing. There is basically no more land left on Earth to mine for trees, or metal, or people, to pick just a few salient examples. A non-trivial number of corporation heads are hoping to mine water and probably clean air next, but the likelihood this would be more than a short term "solution" is small. It doesn't look like a promising way to defuse frustration and desperation. Worse yet, current winners in the social inequality game are finding it nearly impossible nowadays to persuade everybody else of the positive benefits of colonialism. No one believes there are "empty lands" on Earth anymore, for the very good reason there aren't, and probably never were any, once life happened. So how to recreate the prospect of "free land" and start the pressure release cycle again, without tripping over scruples about genocide and warmongering? Enter Mars.
After all, there are no humans on Mars. The scientists keep looking for signs of life and can't seem to find any convincing evidence. The trip to Mars would probably take at least a year, and the recent "mission to Mars" proposals tend to assume the crews won't be back. Sure Mars is much smaller than Earth, but it has no oceans, its area is close to that of Earth's dry land, and the relief is generally low. Boosters brush aside as a technical difficulty that people would have to live inside some sort of completely controlled capsule environment, and how that is supposed to be built or supplied. So what if it will be difficult, here at last is perfect colonialism, and none of that bullshit Ray Bradbury came up with in the Martian Chronicles either. Whether or not you consider colonizing Mars (or even the Moon) possible, I think we can agree that this is not an immediate solution to the problems we face here on Earth. So effectively, there is nowhere else to go, and there are plenty of us with no desire to go anywhere, we're fine with Earth. It's the oppression here that we have, and will have, an unrelenting issue with until it's gone.
So here's the real question for the space exploration boosters out there: right now we have nowhere else to go. Are you willing to turn your minds and your money to putting an end to the fundamental problem instead of trying to paper it over by finding ways to purge "excess population" and doubling down on viciousness? That's the only guarantee of a future there is. (Top)
But, Is It Tech? (2016-09-06)
Here is a question not for the faint-hearted. What is a "tech company"? Seriously. For a question of this kind my usual dictionary is useless and quaint, which means a visit to wikipedia. According to wikipedia (absolutely no slang for wikipedians): "A technology company (often tech company) is a type of business entity that focuses primarily on the development and manufacturing of technology. IBM, Lenovo, Huawei, Microsoft, Apple, Oracle and others are considered prototypical technology companies. Information technology (IT) companies and high tech companies are subsets of technology companies." High-tech is of course the stuff that is so bleeding edge it will either mess up whatever you plug it into because its driver is missing or has no obvious utility at all.
The wikipedia community has left out a few companies whose spokespeople or owners insist they are tech companies, yet which could not be considered another example of the ones listed. Notice the list includes companies that design and sell hardware and software or just software. The currently otherwise omnipresent twitter, facebook, and even google,* are not on the list. This strikes me as anything but mysterious, because fundamentally what these three companies sell is not so much technology as advertising. Not that they will admit it, even as facebook declares war on adblockers, apparently in the belief that people will not walk away if they can't block annoying and intrusive ads, which nowadays is something like 95% of them. Nor does selling access to any amount of information about the users of the services maintained by these companies strike me as equivalent to a technology.
So it seems that the way the word technology is used by on-line advertising companies has many parallels to the software patents being laboriously and expensively overturned by litigation over the past few years, the ones based on the idea that implementing an obvious idea on a computer should be patentable. In other words, if a company does whatever it does on the internet by means of computers, it's a "tech company." Which if we took it absolutely seriously would probably force us to relabel most newspapers, entertainment companies, and distributors as technology companies. This seems absurd. On the other hand, it is easy to see why new age advertising companies would prefer to be considered tech companies instead. It deflects the general disgust with advertising while clothing them in the golden glow of something we have all been relentlessly trained to consider always good and wonderful and an improvement, no matter what it consists of or does. I think it is fair to concede that this is an outstanding bit of marketing, though it is no more true than the manic claims on many household products, especially soaps, that they are NEW, IMPROVED, and BETTER THAN EVER. They can't possibly be, and everyone knows it, from the marketing team to the eventual purchaser of the soap. And we all know the likelihood any company would agree to an advertising campaign that instead said, "We changed the packaging and how the soap smells. It's still soap. It'll clean stuff. Don't get it in your eyes," is pretty low.
Unlike the new-age ad companies though, the companies selling household products are not likely to see any grave issues with their sales from people treating their claims as mostly silly. If a part of the air in a new-age ad company's profit balloon is hype rather than something substantial, things could get ugly when that air leaks out, as indeed it has for a range of smaller fry, some of which were bought out, others of which quietly went belly up leaving most of us none the wiser. For the bigger fish in the "social media" and search engine parts of the pool, it will be interesting to see how things work out, and which of them manage to outmanoeuvre hype hits and how they do it.
*Due to the google re-organization and spin-off derby, I understand "google" is the search engine related part, not the part that sells chromebooks. Even if I'm mistaken, the chromebooks are definitely a sideline, not the main business, so I think the contrast holds. (Top)
Freedom From Versus Freedom To (2016-08-25)
A number of writers, among whom the most famous is probably Riane Eisler of The Chalice and the Blade, have explored the differences between "power over" versus "power to." The differences, as you can imagine, are not just semantic, but also ethical, and when or if "power over" should be valued more than "power to" is a critical question all of us are dealing with right now. For example, should we be seeking power to work with the environment and ease the difficulties the changing climate is bringing us, or should we bullheadedly try to take power over the climate à la bad science fiction with climate control machines? A much smaller scale example may be at play at your local place where people get educated: should people's power to learn actively be curtailed in favour of rote learning? This example is a bit better, because it draws out where we struggle most, when it seems like "power over" might be the right answer in one situation, and "power to" in another. Thinking this over made me think again about another culturally loaded word in english these days, "freedom," which is related to power.
"Freedom" like "power" isn't a word that can stand by itself when we want to be sure what we're discussing. One modification for discussion purposes is the one that I have made in the title, of "freedom from" versus "freedom to." Similar to "power to" and "power over," these are not absolutely separated concepts or even absolutely separated kinds of action. They are interwoven, in that how we enact them affects how other people around us can enact them from their side. I may be able to exert power over someone to force them to do what I want, and that obviously denies any power to say no they may have, and may do damage to us both in terms of the potential power to have positive relationships with others, let alone each other. There seems to be considerable overlap between the different versions of power and freedom at first glance, because if I have such power over, I am free to act in that way. But the key thing to take notice of is that power refers to an action you can take, an ability you have; freedom is a state that you can be in.
So I may be free from sickness or fear for example, or free from coercion by any of the all too numerous ways it may be applied. Because of such freedom, I may have attendant power to do all manner of things, from the undoubtedly positive such as help others, to the definitely negative like punch my neighbour in the nose. On the other hand, I think a case could be made that "freedom to" is just a semantic difference in this context rather than a truly meaningful one, unless the difference can be vested in the role of other people. I've actually done this fortuitously by highlighting "coercion" in the "freedom from" part. Coercion requires at least two conscious wills, one person forcing, the other person resisting more or less successfully. If we follow that, then we could argue "freedom to" obtains in situations where we could be prevented from exercising our power of whatever sort by factors outside of any willed action. The standby examples here tend to be weather and natural disaster related. There are people currently dealing with the effects of earthquakes on their lives in Burma and Italy who would love to have the freedom to go about their ordinary lives, but this is not going to be possible for a while. I would like to have the freedom to go for a long walk in just a t-shirt and shorts today, but it's pouring. Then there are instances such as, I'd like to go work on this paper now, but I need to eat first.
Perhaps this perspective could help sort out what people mean when they inveigh about their "freedom of speech." So often the people who make statements in public about "freedom of speech" are clearly having no issues speaking freely whatsoever, to the point of intruding on the peace of some and blocking the speech of others, that a quite different message comes across. They seem to be demanding freedom from the speech of others, whose speech they find coercive, that it forces them to do something they would rather not, or at least that it makes them feel they must oppose such force. Which is quite backwards if the person insisting has one or, more likely, more positions of privilege in society, as indeed the majority of people I have observed insisting on "freedom of speech" have. This is not generally a rhetorical ploy of people of colour, anyone in poverty, or women, for example.
Which leads me to think not a few "freedom of speech" folks are effectively demanding that everyone accept that any new information or speech that inspires them to question their privilege or to think about it in any way, is a form of coercion. This in the teeth of the fact that they are refusing to do that questioning and thinking, and their social privilege guarantees them the choice to refuse. It is undeniable that having your privileges challenged is uncomfortable, it's just that mere discomfort is not a surefire sign of coercion. Perhaps more often than not, it is our better nature not allowing us the freedom to ignore the actions we must take to put wrong things right, especially when those actions are our own. (Top)
"So They Stopped Just East of Something Awful..." (2016-08-18)
The backroads and byways of the universe are curious ones, and one of them included encountering a book by the late John Bellairs, a well-known american writer of numerous books for "young adults." This book, along with many others in Bellairs' oeuvre, included a number of wonderful, obsessively cross-hatched black and white illustrations. Despite my relative youth when I first saw them, they stuck firmly in memory, their distinctive style so strong in my mind's eye it was an astonishing surprise to see what could only be an animated example of the same artist's work in the opening and closing credits of the Mystery! series on my local public television station. Lamentably, I can't remember now by what means I finally determined who the artist was: it was, of course, Edward Gorey.
Like Gorey himself, I found at an early age that the illustrators whose work appealed to me included John Tenniel, and in effect also the brothers Dalziel, whose wood cutting style complemented his work so well. I haven't found any articles or books mentioning if Gorey had a fondness for Doré, whose work also has many points of resemblance to Tenniel's. The general theme of these and a number of other artists in the same visual genre, is the application of careful gradations and concentrations of lines to create tone and depth. Where they tended to differ most was their methods of stylization, with Doré being very much in a realist mode, Tenniel tending to run more to the absurd (he was primarily a political cartoonist after all), and then Gorey himself, with his mode of realistic unrealism.
I trust the language and logic purists have long ago been driven away from the Moonspeaker if they ever came, but I won't leave this oxymoron unexplained. The world of Gorey's own books, beginning with The Unstrung Harp, is consistent and fascinating even as it is always benignly off-kilter and rather unnerving. "Something Awful" is a town the protagonist of The Unstrung Harp pauses by on a long drive, and it is an excellent example of Gorey's charm as a writer. He had an abiding love of the strange nooks and crannies of the english language, and seems to have collected words whose sound especially tickled his ear heedless of sense, and then found ways to bring them together in stories and numerous alphabet books. The result is a stable, if still somewhat shadowy fictional world that nevertheless reflects many of the foibles and worries of the real world, always with some action or detail that shows the reader that this fictional world is unrealistic in unexpected ways. Many of these twists are impossible to discuss without tramping heedlessly into spoiler territory, but the place name "Something Awful" is an excellent example. Realistically, nobody would name a place that, right? Maybe? Then if you are also a person who due to travel and other circumstances has seen many small towns, companion names matching Something Awful's slight cockeyedness begin to come to mind, usually derived from a now unfamiliar surname, such as "Find Later" or "(S)nottingham."
Gorey spent much of his career creating commercial art alongside his books, theatre projects, and his famous attendance at every ballet choreographed by George Balanchine (The Gilded Bat is a story with a ballerina protagonist). Most of his commercial work consists of book cover designs, through which he matured his own drawing style and came to hand letter the text of his own books rather than have them typeset. No doubt most examples of Gorey's book jacket work have been collected out of the second hand book market already, though a selection of examples can be seen at the Edward Gorey House website, and also in Edward Gorey: His Book Cover Art and Design with text by Steven Heller, published by Pomegranate.
According to his biography at the Gorey House website, his work is also considered "a precursor of the graphic novel movement" which I think must mean among artists and writers not necessarily focussed on participating in the established comic book market. Gorey is certainly an early respondent to the dearth of books for adults that combined illustrations and text to tell a story, where the former added further information and repaid being "read" in their own right. Eventually this led him to create several picture books for adults, including The West Wing, a moody meditation on – actually, I'm not sure what on, which is just as Gorey intended I suspect. (Top)