Where some ideas are stranger than others...

FOUND SUBJECTS at the Moonspeaker

On What's Right With the Web, Part One (2015-12-22)

OPTE Project graphical representation of the internet as of 2003, reproduced here under Creative Commons Attribution-NonCommercial 4.0 International License, accessed 2015.

Having spent three Thoughtpieces detailing what is wrong with the web in broad strokes (1 | 2 | 3), it is just as important, if not more, to take on the tougher task of explaining what's right. The task isn't tougher because there isn't plenty right with the web, so much as that at least right now in north america, we are very much encouraged to articulate what we don't like within specific limits depending on our gender, race, class and so on. Yet there is nowhere near so much encouragement and even training to express the opposite, probably because the former is perceived as the discontent that is supposed to allow the ongoing survival of capitalism past the logical physical limits of the Earth itself, let alone human societies. This makes it all too easy for what we value most to be appropriated and remade into a profit centre for a rent seeker, because it is hard to defend what you don't consciously realize is precious and worth preserving. Your very valuation of whatever it is, whether "it" be an object, a system of social relations, a way of doing things, makes it prey for just those sorts of people who see those things as merely not yet exploited sources of money. It is no coincidence that many of those valued relations and objects are not participated in or used by means of money exchange at all. This is another feature that makes them harder to recognize in a capitalist fundamentalist society to be sure, yet also an important reminder of the limits of capitalism. But let's save that topic for another day.

The fundamentally right thing about the internet in general and the web in particular is that they constitute a communication medium. The reason one of the earliest tools created for *NIX boxes was an instant messaging program had nothing to do with the initial limitation to text, and everything to do with the fact that we humans love talking to each other. Here was an early means to chat together, and furthermore the people talking could be in different countries or otherwise far enough apart that telephone long distance charges would be prohibitive. It was also relatively low bandwidth, so even at the astoundingly slow speed of a 14.4 kbps modem it was possible to participate in the extraordinary proliferation of bulletin boards and email besides. Once limited file exchange was possible via ftp, it was probably inevitable that Tim Berners-Lee would propose and develop the fundamentals of the web. His conceptualization wasn't just about file exchange, but about information exchange, because people could build pages to provide information. At first a bunch of those pages were probably little better than phone books, but that didn't last long. The first text-only cat meme website likely wasn't far behind (and it probably had ASCII art too).

Considering how visually oriented the web is these days, it is understandable if this claim inspires skepticism. Yet, if the internet were anything else, we wouldn't be dealing with the issues caused by telecom companies trying to move all phone service purely onto the internet against all common sense (when the power goes out in an emergency, so does the internet, guys, and in some places that means cell service too). Nor would new generation on-line chat tools like Jabber or Skype be feasible. It doesn't matter that these are new and/or different realizations of ways of communicating we already had. That is pretty much how integrating a new technology works: first you do the things you already knew how to do with it. Then you get more adventurous, pursuing the concept of hypertext further, for example, although "hypertext" actually isn't new either. It's a non-obvious reconceptualization of the footnote and the bibliographic reference.

What makes this all so exciting and valuable is exactly what makes footnotes and bibliographies so useful: the interconnections between people and between the information they choose to share are greater than the sum of their parts. The graph from the OPTE Project illustrating this article suggests this in a metaphorically pleasing way. Suppose, for instance, that I wanted to write an article about Adam Smith's philosophy. Within literally minutes, I would learn that he wrote far more than The Wealth of Nations, that he was primarily a moral philosopher, not an economist, and that his most infamous book is actually a small (and yes important) part of his work. I could even download copies of older editions of all his books, research the social context within which he carried out his work, and go on to make unexpected and instructive connections. For example, see a book like Napoleon's Buttons or recent studies on James Mill and his son John Stuart Mill explicating how the former's history of the british colonial project in india influenced his son's eventual philosophy. And then blow your mind by learning about Harriet Taylor Mill. Such connections are much easier to research even just with email, let alone all the other options for gathering information and asking questions of experts available today.

Although access to the internet is by no means ubiquitous, and it is not going to end tyranny and other evils in the world just by its existence, it is true that it can be used in a way that successfully opposes such things. So long as everyone remembers that there has to be human action in firmspace by people using their bodies, not just their phones and computers, for that potential to be actualized, the internet could certainly become what some of its boosters claim in the future. Of course, some of the challenge to that lies in the issue of how information on the web is found if it is found at all, quite apart from the key questions of how internet access is made available and on what terms. (Top)

Compelling Attention (2015-12-18)

This attitude seems to be surprisingly prevalent on the web. Standard clip art tweaked by C. Osborne, december 2015.

Way back in september, Cory Doctorow sent a tweet that he also featured on the blog he helps edit, Boing Boing. Evidently he felt the comment had enough weight behind it to deserve being set out for the less- or un-twittered readers of the blog and whoever might reblog it from there. I often disagree with Doctorow, and have a real issue with his perpetual self-promotion, but when he settles down to impart real information, the combination of his undeniable writing skill and careful thinking leads to great things to read. This particular tweet, his response to the pseudo-hand-wringing about whether making a twitter account private was contributing to a dangerous impingement on free speech, is indeed worth reading and thinking carefully about: "If you think 'free speech' is the right to force others to listen, you actually believe in 'compelled attention.'" Since september, the hand-wringers have turned additional attention to blocking people on other social media like facebook, and the majority of the hand-wringers are men who feel angry that their access to certain people, especially women, is being curtailed. It is no surprise to see a concept widely viewed as progressive and liberatory being deployed in an attempt to strategically undermine efforts to counter and prevent sexual harassment and on-line abuse. After all, the technique was developed in firmspace, in at least two forms.

One is quite crude, and involves an attempt to relabel a given concept by making the new term a part of endlessly repeated talking points. It is a central feature of modern political campaigns, and depends for its effectiveness on drowning out the original term and managing to bully or otherwise manipulate other people into using it. A good example is the odious term "political correctness," specifically developed by socially conservative right wing spin doctors in the united states in an attempt to convince the world at large that treating people of colour and women with respect amounted to being unable to "speak truth" and "be honest." The other technique is in some ways more subtle, although it is thrown around no less messily, because it manipulates presuppositions.

"Presuppositions" are the silently assumed ideas a particular statement or question carries, and the most infamous exemplar is the leading question "when did you stop beating your wife?" In order to counter invidious statements of this kind, it is imperative to respond to the presuppositions, not the surface message. However, it can be very hard to do that, because by design a leading question or statement is intended to trigger an unthinking response by tapping into strong feelings. In the case of leading questions, the feeling tapped is guilt. In the case of strategic manipulation of statements considered progressive, the feelings are outrage and anger, and the point is to divert those feelings from their original target. Doctorow has put his finger on just such a use, which is again being applied in an attempt to neutralize successful resistance to harassment and oppression. The primary presupposition of the "free speech concerns" about blocking and privacy settings is that harassing and abusive behaviour can and should be equated to free speech. But the issue at hand in the item that drew the discussion and Doctorow's comment in september had nothing to do with free speech, and everything to do with a perceived right of access to a person's social media account.

"Compelled attention" and "compelled access" are closely tied things, on-line or in firmspace. If you can force your way into someone's home or office and impose your presence on them, you are certainly compelling their attention. One does not require the other, however. A burglar may force their way into a home, but the last thing they want is to compel anyone's attention. This is also true in cyberspace, since the metaphor THE INTERNET IS A WORLD has structured the web from surprisingly early. If ever it was conceived primarily as a great web of intersecting tubes outside of programming tasks, I doubt that conception could have remained primary for long. There is a long cross-cultural tradition of using places to help organize, understand, and remember complex bodies of information. The constellations, systems of place names and the "memory palace" mnemonic technique are just a few examples. In the beginning though, the metaphorical internet was supposed to be a very small and controlled sort of world, the virtual avatar of a work desktop located in a cubicle, the horrid mutant offspring of the monastic study carrel. This is no trivial starting point.

The model desktop of the internet, and indeed computer software more generally, is not a management desktop either. A management desktop is not widely accessible, and can have documents and the rest left openly on it, because a manager almost by definition has an office with a lockable door and an assistant to help curb access the manager does not want. Regular staff desks are quite different: they are accessible to managers by default, and what staff keep on their desks is defined by rules. The nature and size of their desks is often also closely hedged about by rules: they can only be a certain size, all or most personal items may be forbidden, all documents must be kept in one particular drawer. I have worked in an organization where the rules of cubicledom all but forbade personal items (even paper calendars were regulated by size), and managers felt free to rummage in an employee's desk for something to write a note on if they were not near their own office. Documents had to be kept in designated parts of large filing cabinets at each end of the office floor, and the only locks on the employee desks, and on the hutch only just wide enough to store an outdoor coat in, were the kind that can be easily unlocked using a ruler. (I'm not making this shit up.) In effect, staff desks are depersonalized, semi-public spaces with little or no security of any kind, and people who consider themselves more entitled can be a source of serious disruption, harassment, and even danger. Furthermore, they are places where the speech of non-management staff is sharply curtailed by the power differential between them and management.

Here's the thing. The internet is not the office, and the guiding metaphor for understanding it as a space is changing. People are demanding an internet like the world, in that they have their own place in it, and they can decide what to do with that place, who may visit, and who may stay. In fact, it is a cruel irony that for too many people, cyberspace may be the only "location" where they have a place of their own in this way. Social media boosters encourage people to view their product as similar to a virtual living room, apparently having in mind the way television and the Nielsen ratings helped compel attention to advertising and facilitated information gathering. Yet as soon as this encouragement is taken seriously, it is not just the social media companies who set about doing everything they can to cripple the attempts of their users to maintain privacy and manage their virtual living rooms. So do crackers and abusers, and any person with a belief that they are entitled to access certain types of people, regardless of the wishes of those people or of whether their actions in service to that sense of entitlement do harm.

If the web is going to survive as a vibrant medium for all the wonderful things it does and successfully curb and prevent the horrible things that it also may be used for, the sense of entitlement fostered by sexism, racism, and greed needs to be wrestled to the ground and put out of its misery. And until that is done, we all need dependable encryption and genuinely secure computers, and we need to respect and applaud the efforts people make on-line in general and on social media in particular to maintain their privacy and the civility of their social media spaces. It's a tough job, and their efforts make the web better for everyone. (Top)

Indigenous Futurism (2015-12-06)

A still from the trailer for Elizabeth LaPensée's 2011 short film "The Path Without End."

A time ago that seems far longer than it is (roughly a year), I finally decided to try figuring out what the term "Afrocentrism" was supposed to mean. There is an ongoing racist effort to redefine it into something akin to the absurdities of Erich von Däniken, alongside the white supremacist panics every time a black scholar or any other scholar of colour insists on engaging their studies in a way that does not make everything about white people. Even the "friendly" efforts to make non-white centred studies white-centred after all by presenting them as a response to white boredom seem to be petering out, and not a moment too soon. Then again, that may have all too much to do with the ongoing upsurge in deadly violence against people of colour in the world right now.

I didn't know if it would be possible to find calm and nuanced discussions of the meanings of Afrocentrism, with examples and a respectful acknowledgement that there is disagreement about it. So it was a pleasant surprise to find just that, and not just one, but many of positive, thought-provoking quality. (This contributed directly to my set of short pieces about what's right with the web, by the way; those will appear soon.) There is no single definition of Afrocentrism of course, because its guiding principle is decentring the white perspective and experience. So its application is going to vary depending on what material is being considered, which may in turn lead to additional terms to more specifically describe what people are doing. Being an Indigenous writer with a penchant for fiction that can be categorized as science fiction, fantasy, speculative... well, whatever the hell rigid tiny box marketing departments are forever trying to create for writing that works with truth via stories that are not literally real, the following definition stood out.

Afrofuturism denotes an approach to science fiction that recognizes that stories of technically advanced aliens who kidnap, rape, and enslave human beings are not fantasies but are, rather, retellings of the actual history of contact between Europe and Africa. As Greg Tate has reasonably put it, "Black people live the estrangement that science fiction writers imagine."

This definition gave me a whole new perspective on the expression of real fear that came over a white friend's face one day, when somehow the question came up of whether there were small, grey aliens kidnapping humans à la Whitley Strieber. I had puzzled over such stories myself for a long time, because on one hand yes they were creepy, yet they did not seem unfamiliar, and the majority of stories in this line came from white people, it seemed. ("Seemed" is a key caveat here, I am stating my impression, not making a statistical claim.) In this conversation with my friend, the whole thing suddenly came clear. It seemed then, and seems now, that that particular genre of stories reflects how non-Indigenous humans understand the experience of animals being caught and tagged for monitoring and experimentation by scientists. Think about it: a noisy flying machine swoops in from overhead, blazing with light. The animal is tranquilized, so that it falls paralyzed and losing consciousness as unfamiliar two-legged creatures descend on it. The two-legged creatures lack fur, have almost non-existent noses, tiny mouths, and large eyes that dominate their faces. Those creatures then proceed to take samples using more or less invasive instruments, leaving behind a tag punched through the ear or a radio collar designed to be impossible for the animal to remove, at least until its monitoring devices stop working. At this point I had to stop, because my friend was freaking out. Her response wasn't what you might think. "It isn't like that, it isn't like that!" I'm still not sure if "it" was a back reference to the alien abduction stories, or a reference to the capture of animals for human use.

UPDATE 2018-09-17 - In the course of working on a more focussed article on Indigenous futurism and historiography, I finally found Rebecca Roanhorse's sublime article in Uncanny Magazine, Postcards From the Apocalypse and the transcript of the Indigenous Futurisms roundtable at Strange Horizons, both emphatically worth the read.

A short while ago, I finally managed to stumble into Indigenous futurism, the approach being taken by Indigenous artists to the same type of material. It's not the same approach, as is to be expected. It is Elizabeth LaPensée and Grace L. Dillon's position that Indigenous people have been telling science fiction stories since long before the invasion of the americas (also see the post at Silver Goggles about LaPensée's film, and have a listen to her interview on the Red Man Laughing podcast). Okay, but then what does science fiction mean or tell about, if the foundational narrative of the tellers is not based on the incessant invasion, colonization, and ruthless exploitation of other places and people they don't know? It can still include tales of space travel, and it can certainly struggle with the question of colonization and its related problems – Basil Johnston, for example, has retold several such oral traditional stories in his published works. All too often, Indigenous science fiction is misunderstood because of careless, disrespectful or just plain bad translation, not least because alternate dimensions and realities are also other spiritual planes. Let alone the issue of the assumption that Indigenous philosophy, spirituality, and epistemology are somehow retarded and lesser than so-called "western" approaches.

The most powerful aspect of old Indigenous science fiction (or speculative fiction or... see above) and the stories more and more Indigenous authors are sharing, is one that is often missed by white people. In our stories, we have a future. A future, a near future, any number of alternate futures, a future. In the vast majority of such fiction from non-Indigenous writers until what seems like the day before yesterday, there were no Indigenous people, because after all, Indigenous people were not supposed to have a future. They were supposed to die out like all the other presumed lesser humans who were allowed to be considered human because they could learn to speak a colonist's language. Steampunk is among the earliest places this got broken down, albeit via the resurrection of racist stereotypes of Tonto and all the other wooden Indians. (Sorry Star Trek fans, and remember, I'm a fan too, important as Chakotay is, he's another damned wooden Indian stereotype, and an unlikeable git to boot – a tough spot for Robert Beltran to be in.) All unintentionally, this knocked a significant hole in the barrier to publication of Indigenous science fiction. Since the stories waiting to be shared are more like an ocean than paper airplanes, that means the whole barrier has been coming apart inexorably, though all too slowly.

A future imagined with Indigenous people actually in it, is a very different sort of future. That imagined future may be bright, or sort of murky, or just plain dark, depending on which issues the writer is dealing with. It all depends on which "what if..." question we pick. The biggest and most important starting point is no "what if" though – there is no question along the lines of "what if my Nation has a future?" We already know that our Nations have futures, and we are certain of that. Walidah Imarisha, co-editor of the anthology Octavia's Brood, put it most powerfully and eloquently:

Our ancestors dreamed us up, then bent reality to create us.

(Top)

On the Trouble With the Web, Part Three (2015-11-06)

Quote from Edvard Munch, Skrik, from the National Museum of Art, Architecture and Design in Oslo, Norway. This image is excerpted from a public domain image at wikimedia commons.

Yes, I know, you're right: the animated gif has never gone away, and some of them are excellent. However, like anything designed to move and distract attention for whatever reason, the key challenge is to use them with moderation – which is hard, because they're cool. I have observed a terrible proliferation of them on blogs I visit that have any interest in pop culture, which reminded me of something not so cool about animated gifs: they never stop unless they are set up to loop only once or a few times on loading. If there is a way to make the gifs stop after a few loops or a number of seconds, I have not found it. Worse yet, a misbehaved animated gif becomes a giant smear of frames across the webpage, ruthlessly gobbling room and generally making things illegible. The end result is that I have resorted to blocking these items by default on those sites, and it isn't often that I miss much by doing so.

UPDATE 2015-12-18 - Stopping Animated Gifs: I continued investigating the animated gif issue, and finally found some pleasing results. First, some Firefox forks, such as SeaMonkey, include a preference that you can set to allow animated gifs to play only once. Most other browsers allow animated gifs to be stilled by hitting the escape key. Unfortunately, both Safari and Firefox fail epically by lacking this feature. In the case of Firefox, an add-on called Better Stop reimplements what had been a feature of the browser before, a tip I learned from the Tech 4 Luddites blog. (Unfortunate title, useful blog.) There is a similar extension on Github for Safari called Deanimate, and the comments on it are generally positive.

UPDATE 2022-08-04 - I had forgotten to make note of the fact that someone at firefox got the memo about restoring the ability to alter the behaviour of animated gifs. It is hidden in about:config still, which is annoying, but at least it is there now. The setting is "image.animation-mode" and it can be set to once, none, or normal. "Normal" means the damnable animated gif never stops, "once" does what should be the real default, and "none" never lets the animation play.
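For anyone who wants that choice to survive profile resets and upgrades, here is a minimal sketch of the equivalent user.js entry, relying on the standard firefox mechanism whereby user_pref() lines in a user.js file in the profile directory are reapplied at every startup; the same value can of course just be set by hand in about:config.

// user.js in the firefox profile directory: reapplied at every startup.
// "once" plays each animated gif a single time, "none" never animates it,
// and "normal" restores the default endless looping.
user_pref("image.animation-mode", "once");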

The trouble here is an analogous situation to excessive and obnoxious ads. Ads are meant to force attention and somehow persuade you to buy something; on those sites, the animated gifs are meant to add interest to the item they illustrate. The trouble is, when everything is jiggling, gyrating, and visually shrieking, the result is visual mud. The sites I have in mind are blogs that bring together cool and/or interesting things. The things are already cool and/or interesting. Animated gifs rarely add anything, unless they are integrated in some way as part of the original item. More often they are added as a sort of editorial comment.

The issue here seems to be the still common insistence that there should be less text on webpages and more pictures. Perhaps this is in part an extreme reaction against the original web, which was text only. It didn't stay pictureless long, and video followed rapidly after. Text remained predominant, and at first nobody found fault with this. Then something funny happened around the early 2000s as more and more people began to see the web as a way to extract rent by placing ads on other people's web pages. The web advertising rant has already been provided in this space in The Trouble With the Web, Part One.

Isn't it curious that once the business model of those ad companies began to gain traction, suddenly it turned out that a wholly new way to write for the internet was required? This after twenty-odd years of successful writing for the internet, for different purposes and audiences, had already developed. Books on building websites began explaining that sentences should be no longer than could be comfortably read on a screen that was 640 pixels wide, with paragraphs verboten if they were longer than two or three sentences. They recommended many pictures and lots of pull quotes, although what the pull quotes could be from when the article also had to fit on only one or two screens remained a mystery. By definition, pull quotes come from the main text. These guidelines had little relationship to studies showing that computer screens were not equal to books for lengthy reading, since most of those studies hadn't even happened yet. No, they really came into their own with the proliferation of ads and the attempt to claim there is no other way to fund great stuff on the web.

A text written according to those guidelines is very easy to paste ads into. The ads can be placed wherever a pull quote could go, or where in some magazines an illustration would go (Harper's still has such illustrations, for example.) These guidelines created the flexibility for ad insertion on web pages that two column layouts on paper can provide, and indeed, you can see that some of them must have come straight from looking at such layouts on paper and trying to understand what made them more readable. Except, they missed the point, as the many great sites on typography both for paper and the web have explained. The issue is not necessarily the length or complexity of the text, which should vary with the purpose of whatever is being written. The issue is with the legibility of the words due to the combination of font, layout of the page, and layout of the words themselves.

This often eviscerated or pre-masticated form of writing for the web leads to web pages with not much reason for anybody to stick around looking at them, with the obvious exception of sites that are primarily about movies, sounds, or visual art of some other kind. So the folks writing in this way resort to flashier graphics to hold interest, just as advertisers do. Which is incredibly counterproductive, because the real problem is that the writing isn't dense enough. (Top)

If You Try to Discuss OSes, Then You Will Be in a Flamewar (2015-10-30)

Flamewar warning created by Jillian Leigh, circa 2012, whose site seems to be down, alas.

This seems to be some kind of cyberspace and firmspace pseudo-law, and it has always puzzled me. The recent microsoft update foul up led to another round of snipes against linux and macOS users who are "not special" for having left behind windows. Well, lots of them were never on windows to start with, but what's up with this "special" bullshit on either side anyway? Personally I am a big fan of windows alternatives, especially macOS until recently (not so fond of the current attempt to make apple computers into iPhones), because for my part I cannot stand windows. I got the worst possible introduction to it back in the day, which was MS-DOS on IBM boxes with migraine-causing orange type on black background CRT screens, and then windows NT with the worst memory leak on the worst machine possible in the office. (Summer students do not get the best machines, but honestly, they shouldn't get the worst – nobody should get the worst, the worst should just be taken out and shot.) Still, I have made peace with windows when I have to use it on office computers for contract work, and that's fine.

One of the strangest conversations I had was with a colleague who was looking into buying a new computer, who said to me, "But I know, you'll just tell me to get a mac." (My mother said the same thing when she was looking for a computer.) And I had to explain that actually, no, I wouldn't, and not because of the price she was considering. Apple hardware is more expensive initially, but you have to amortize that properly over the life of the computer. I have never heard of a windows box that survived in real use for more than three years, unless the owner managed to avoid hardware failures and switched to linux to overcome the always growing RAM requirements of windows. That is not universal anymore though, thank goodness, especially if you build your own windows gaming machine or find your way to a well-made IBM Thinkpad (swapping out its original hard drive to get rid of Lenovo crapware, a trick I learned from Cory Doctorow), but those are both more geeky options this person wasn't looking for.

No, what I said to her was: "Not at all. I recommend trying as many different computers and operating systems as you can, and the one that doesn't make you want to hurl the wretched machine through a window, that's your box. For some folks, that's a mac. For some folks, that's windows. You've got to live with it, not me. Life's too short to have an important tool piss you off all the time." This merely means I got to see enough of how much suffering my own relatives had with an unsuitable computer, plus the horror of flamewars, to realize that OS proselytization is an evil. It's hard to say whether I could have honestly said anything relatively sensible on the topic without that experience.

Now, there are benefits and drawbacks of a more technical kind to each system. Windows is certainly the bigger trouble magnet because there are so many more windows machines out there, and that can be a reason in itself not to buy one, or to replace it with a different OS. I think it's too easy to misread as self-righteousness the relief people express on replacing a system that has become a huge source of stress for them. The same misreading can happen to frustrated declarations that they are going to change OSes in response to a security breach, virus, or whatever. They may not be expressing themselves well, but I am mystified by how many people who like their windows machines seem to take this sort of thing quite personally. As if they are being rejected along with windows, which is evidently far from the case.

There are, of course, other reasons to argue against OSes of one flavour or another based on concerns about cost, privacy, proprietary software, accessibility for persons with disabilities, even whether hardware and software support are available. Yet when it comes to what makes a given OS and its GUI more usable for a given person, we don't really know why one is better for one person than another. It's not just familiarity or whether you can cope with the command line, or what you first saw. In my case, that should have made me a windows user for life, and my friend, who prefers windows machines even though he started on macintoshes, a macintosh user for life. On the other hand, we do seem to know what is so awful people will flee rather than use it if they have a choice. (Try to find somebody willingly using an Osborne portable. Those ugly things are damn near immortal, and were the backbone of the course catalogue look up system at a university I attended until a few years ago.)

UPDATE 2023-03-20 - The link I provided originally to Conway's paper on how committees invent still works, but it looks like he has undertaken a website reorganization and clean up that only by luck has not led to that link becoming one to a dreaded 404 error page. The newer link is How Do Committees Invent? I am quite surprised that the links on Conway's site are now all obfuscated and generated using javascript, so that if javascript is off there are no links. It looks like the point was to create the effect now rolled into standard CSS, the hover effect to change link colours on mouse over.

Thinking it over, it occurs to me that maybe a thing that makes OSes feel very different to different people has to do with Melvin E. Conway's observation that "Any organization that designs a system (defined more broadly here than just information systems) will inevitably produce a design whose structure is a copy of the organization's communication structure" (see his website, especially his on-line reprint of How Do Committees Invent?). So depending on the person's comfort with a particular type of organization, that in itself can inflect their experience of major products that organization produces, like a computer operating system. The apple "I'm a Mac" commercials, whether you loved them or hated them (some of them were too obnoxious even for me), were playing on a real contrast. In fact, this may also explain why so many people with a more detailed knowledge of computers get up in arms about apple's walled garden, while mysteriously giving google's a pass. It's as if they see one as bad because it has an opaque wall, but google is okay because the wall is made of glass. Trouble is, it's still a wall, guys. Of course, this is my interpretation, and I'd be interested to learn more from the folks who would not characterize the contrast they are making in this way.

To return to the main point, if Conway's observation explains the different feel of various OSes, which is itself a product of how they are designed, perhaps it also explains the visceral response people can have to finding out someone else dislikes their favoured OS. Perhaps they really do feel a bit identified with the sort of organization that OS reflects, so a critique of that OS ends up feeling more personal than it is. I doubt this is something anybody would seriously expect, and so it would be mostly unconscious in its effect. This seems a much more respectful way to interpret how it is possible for people who otherwise discuss computer related matters with quiet rationality to explode with vitriol as soon as the "OS question" comes up. In fact, this effect could even be an important driver of flamewars that aren't being egged on by trolls, and not only flamewars about information systems, since Conway himself pointed out that his observation applies more widely than them. (Top)

On the Trouble With the Web, Part Two (2015-10-01)

Biohazard sign graphic contributed to wikimedia commons by Offiikart under Creative Commons CC0 1.0 Universal Public Domain Dedication, 2015.

Okay, so the current iteration of advertising is one big problem, and it says a great deal that this is widely agreed on today. However, that certainly can't explain the awfulness of much of the web right now, because so much of it derives from places where advertising is beside the point or even non-existent. Shitty stuff went down at reddit long before the site's management began trying to monetize it. The determination to monetize anything that has a large number of unpaid participants is a huge problem, but that goes right back to the issue of advertising and the attempt to colonize such spaces with ads, because supposedly they'll generate so much money from all those people presumed dumb enough to work for free.

UPDATE 2016-01-01 - On "The Web Obesity Crisis": I am far from the only person talking about this. See the brilliant and hilarious talk by Maciej Cegłowski. He draws out the same points about how the drive to serve ads, among other issues, not only bloats sites beyond utility and helps spread malware, but also encourages people to believe they can't participate in building the web without extensive tools and "services." He also points out – and I had not fully realized this – that it has gotten to the point that it is hard for novice coders to learn how to build websites by using the "view page source" option in their browser. He is anything but kidding. Just recently I finally had to resort to searching the page source to get away from all the ad bloat code, and then figure out how to navigate to the mangled css file that was causing the issue I was trying to work around so I could view the page. The site in question, which shall not be named, is usually worth the effort. Not this time, and I no longer visit it.

UPDATE 2023-06-26 - Further to the idea of deliberately developing and maintaining sensible websites that don't waste bandwidth and processing power in the worst ways possible, it is well worth studying Kris de Decker's ongoing project Low Tech Magazine. De Decker is serious about showing how energy use reduction is possible as well as absolutely necessary to deal with the climate change issues humans are facing. He sometimes wobbles on the edge of individualistic solutions, but his broader and increasingly well developed point is that so-called high technology is at its best when deployed strategically, and that its typical use and deployment based on coercion and centralization are a suicide pact with the greediest elements of the human population. The series of articles detailing the process of moving Low Tech Magazine purely to a solar-powered set up are an excellent analysis of the challenges and the infrastructure demanded by constantly online "services" and surveillance. Great starting points are How to Build a Low-tech Website? and How to Build a Low-tech Internet. Those very keen on this approach may also wish to read up on that taken by the artist duo 100 Rabbits, starting with their explanation of how they manage internet access on their sailboat and their mission, which gives an introduction to their specific hardware and software choices.

In fact, the desire to exploit the energy people have put into popular websites associated with social networking and newer style bulletin boards is another significant problem for the web. The cynicism and lack of respect this desire reflects is bad enough, but it is not unique. It is contiguous with the discovery of the fan in the twenty-first century and the potential for abusing their love of a particular fictional universe for profit. Whenever the fans protest the way the fictional universe is developed, the storms of condescension pour down. After all, the universe in question is fictional, they should grow up, et cetera. Except, the fans are expressing what they understand is supposed to be the way things work. After all, aren't they customers now? Aren't they paying good money, and specifically not getting content for free? Then they should be getting satisfaction for their investment, right? If it's supposed to be good for public institutions like universities, why isn't it good for movies and television programs and comic books?

Here again, the way of approaching people surfing the web is a transfer of the model from television and the movies before the web became available and more accessible, in a time of low to no competition between entertainment companies. (Alas, the times of low to no competition are back.) What is the most common image of the habitual television watcher? Why, a couch potato! And as we all know, couch potatoes are passive consumers. They don't create, they don't have opinions or judgements. They just accept whatever is available among the few channels on the television. Regular movie watchers don't have quite as pejorative an image attached to them, but again there is an image of accepting whatever the nearest theatre may provide.

The thing is, fans and the busy participants on sites like reddit are anything but passive. Thanks to the now booming firmspace world of conventions, we are all learning just how much fans create, how they may bring their energy together in a worthy cause, and so on. This is also true of the new-style bulletin board communities, though in their case we have also seen examples of them banding together for pointedly unworthy causes. These are folks who will definitely resist having their efforts exploited for profit by somebody else, especially if the somebody else seriously expects them to have no share in the profit, whatever that profit may turn out to be.

On the other hand, it is possible that the majority of people now on-line are not fans or vigorous reddit participants or whatever. Perhaps they are all primarily on social media and not writing articles or building things. Except, I have a hard time seeing the effort the social media mavens I know put into their profiles and the content they contribute as equivalent to nothing. The trouble for them is the same as that for folks like me who tried using "free" webspace from various web companies in much earlier internet days. If we used that "free" webspace, the terms of use claimed we had agreed to give up all claim over our designs and work to the company for them to exploit as they saw fit. If the company went under, there went our websites. Or the company could take our sites away from us on all manner of pretexts, pointing back to the "terms of service." Well, nothing says contempt quite like such terms of abuse.

The contempt for the general web visiting population bears a strong relationship to the intense efforts to deskill and disempower them. Entire businesses as well as open source tools are built around the idea that building a website should be like using microsoft powerpoint. All visual, no code, lots of templates. But why should it be? Coding a website is actually not at all difficult. Here is the code for a simple, no frills, easy edit webpage:


<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<title>Title That Shows at the Top of the Web Browser</title>
</head>
<body>
<h1>Title on the Page Itself</h1>
<p>A paragraph of whatever you want to say.</p>
<p>Another paragraph.</p>
</body>
</html>

A raw beginner would need a bit more information than this of course, but this already allows them to ask sensible questions and sort out what is happening. Having taught people the basics of webcoding myself, I am always struck by how astonished they are that it is quite possible to build a website with little more than a text editor. (This is not the same thing as claiming that the web should be coded with only a text editor – I quite agree that such claims are nonsense.) They can hardly believe it. They may never want to actually code a site by hand or be able to tweak a pre-built template for their own site, but this makes at least a small change to their view of the web. And by the way, the people I'm referring to are not people sneeringly referred to as "digital immigrants" but those lauded as "digital natives." The real definition of a "digital native" seems to be someone who has had everything about computers and the internet presented to them as magical black boxes into which they are not allowed to look. Certainly, that would be just the sort of person who would remain passive and easily exploitable! (Top)

On the Trouble With the Web, Part One (2015-09-24)

Biohazard sign graphic contributed to wikimedia commons by Offiikart under Creative Commons CC0 1.0 Universal Public Domain Dedication, 2015.

The web used to be a very different place. It didn't use to be overflowing with ad pollution, privacy invasion by web trackers, and rampant abuse of social media and comment forums by gangs of males with the practical, if not literal, mental state of spoiled three-year-olds. This didn't make it a sort of virtual heaven, though there were shades of that idea and hope in some quarters. (See The Pearly Gates of Cyberspace by Margaret Wertheim for the proof.) It wasn't very open, for one thing. You had to have some kind of access to a university or perhaps a large business or computer firm to even have an email address. Graphical user interfaces were still relatively uncommon. I got on-line myself right at the changeover: when I started university, my philosophy professor got the whole class newfangled email accounts because "they were the future" and encouraged people to take part in MUDs. For my Fortran class, I had to have my programs run on a mainframe and then pick up printouts of the results for one half of the semester, then spend ungodly hours in the UNIX lab after that. By the time I graduated, I had compilers on my first mac and so could work ungodly hours programming at home – and my parents were contemplating getting a computer.

More than a few commenters have suggested that the trouble with the web today is that so many people are on it now. "The great unwashed" have spoiled the web. The snobbery and vicious condescension of this claim are bad enough, almost as bad as the fact that it's obviously wrong. Many of the best things on the web were simply impossible before it became accessible to people ranging from kids in grade school to grandparents. All those new people have brought additional contributions, ideas, and energy that have been transformed into projects like Wikipedia, the Internet Archive, OpenCulture, any fanfic or fandom site you can think of, preprint sites like arXiv.org and the Perimeter Institute Seminar Archive – I could go on, of course. Sometimes these projects started as one person's brainchild, sometimes they didn't. But they were impossible without the contributions and interest of all those other people who are on-line now who couldn't have been back in the 1990s.

As with anything really interesting, there is not a single problem that is affecting the web so negatively. Instead there are several interacting with each other in unfortunate ways, and I think it is possible to describe at least a couple of the bigger ones here.

UPDATE 2018-05-21 - The two ad-supported sites have since been replaced by alternative sources with better options for contributing to their ongoing upkeep. One was focussed on tech issues, the other more pop culture oriented. These are the primary areas of interest at risk of being overwhelmed by the temptation to serve ads, and then swamped by the subsequent parasitic effects of the ads.

UPDATE 2021-04-06 - Wikipedia is rapidly developing into another form of serious and dangerous trouble on the web that not many people have taken very seriously as yet. Fundamentally it is becoming, if not always designed as, a very nasty way to gather commons, enclose them, and manipulate them to pollute the information available to people who use the web to learn and make decisions about the world. Among the few writers taking this issue seriously is Michael Olenick, who has at least two excellent deep dives on the subject, Wikipedia: The Overlooked Monopoly and Wikipedia: Deep Ties to Big Tech. I encountered both of these via reposts at the excellent economics blog, naked capitalism. The source blog is that of the Institute for New Economic Thinking, which is also more than worth the time spent reading and thinking with it. Overall, wikipedia is blighted by the same features of capitalism in general and late capitalism in particular that have ruined the potential of the products and people from companies as diverse as apple and rubbermaid.

First, let's face it, is ad pollution. It is possible for ads to be okay. However, the way ads are deployed is emphatically not okay. They are deployed in a way that makes websites either like cable television, or like an overpriced glossy magazine. In both cases, you can hardly find the actual content of the webpage for all the crap. I have doubts about the utility or respectfulness of ads that creep over 10% of the view space on a page. Worse yet, ads have become the premier vector for malware, spying, and blowing through data caps. The serious editorial control problems that attend ads are also growing in visibility again. For example, see 'I cannot be that person': why the 'Queen of the Mommy Bloggers' had to quit on the guardian. The advertisers had begun to dictate the content of their host. In fact, an editorial on the guardian asks, Is this really the beginning of the end for web ads? Frankly, I don't think they can die fast enough. There's no excuse for taking the stupid ad-supported route that we already know destroys the quality of the content the ads supposedly support. Check out the local newspaper of today, and compare it to one in the local library's archives. You'll be shocked.

There is a whole lot of crying about the prospect of the loss of ads in some quarters. How to pay the bills, how unethical it is supposed to be to get content "for free." Well, how about not spying, spreading malware, overwhelming the content, and generally acting like drunk assholes in my virtual living room, advertisers? Then I would be far less inclined to block you on the few sites I visit that depend at least in part on ad revenue. For me at least, due to site quality collapse, I have only two ad-supported sites left, and I am actively seeking replacements for those two because their quality is also collapsing, with what are now the telltale signs: multiple advertisements masquerading as articles. Unfortunately, right now ads are for the most part not symbiotes but parasites. For other sites that are ad-free but need financial support, I am shifting to contributing via set-ups like Patreon or other means.

As for the claim that it is unethical to get something from an ad-supported website for free, there are problems with it besides the virtual drunks and house trashers that ads have become on-line. If a given author or company is determined that all of its products should be paid for, then it shouldn't be putting its products on-line at all. Instead, it should serve up a catalogue that people have to order from, with only teasers to catch attention. Unauthorized copies will likely find their way on-line, but as the profits of both the music and movie industries show, this is not in fact the problem some would like us to think it is. Worse, since ads are such a problem now, an ad-free subscriber's edition begins to feel less like a pleasant and appropriate option balancing the needs of the publisher and the reader, and more akin to extortion. "Want to see our stuff? Don't want our drunken fratboy ads? Then you'll pay, won't you?" Actually, um, no. I'll do without, when it comes down to it. Disrespecting a potential customer is a pretty good way not to have a customer. And frankly, the web is not just a place for customers, which is something advertisers and their hosts seem unable or unwilling to understand.

In many ways, the web is still more like a public library than a wretched and unnavigable mall, which in my opinion is a very good thing. Maybe an even better model and description is that the web is like a combination of public libraries, victorian era social clubs, yes stores, and yes for better or worse, the parts of town where the organized crime happens. It would be preferable not to have the organized crime part, although realistically we can't expect that to be completely prevented. The victorian era social club part is especially interesting, because in their day they were as much about providing social interaction and curated complementary products for their subscription fees as reading. They could also be hotbeds of terrible sexism, racism, and general awfulness depending on the make up of the membership and the policies of the group. The web is not a marketplace, even if on-line marketplaces happen to be the flashy thing the get rich quick schemers have managed to keep at the top of the tech and finance pages. (Top)

A Cri de Coeur: WTF, Firefox? (2015-09-23)

Circa 2013 mozilla firefox browser logo by Sean Martell for mozilla via wikimedia commons, under the licenses invoked by mozilla, Creative Commons Multimedia 3.0 Unported or later and mozilla public license 2.0, 2015.

UPDATE 2019-09-22 - I haven't updated this piece in quite awhile, and thought it was worth noting some more recent behaviour in firefox as of the 60 series. The "we will annoy you until you update firefox" pop ups are consistent with mozilla's continuing descent into ignorant paternalism, not least because they completely removed the checkbox where you could tell firefox to shut up and leave it to you to decide when to update. That few people used the option is not an excuse to remove it. The irony is that having a little badge pop up quietly and visibly at the top of the window is actually a really good idea; it's the modal pop up plus the removed setting that stinks. Mozilla's logic seems to be to remove what "few people" use, which de facto means taking out valued options and features for power users. With features the case is more complex, but with options it is just stupid, especially when they aren't even left available in about:config.

Another new feature is that firefox now throws up a flashy window shrieking about security issues that may make the site you intend to go to dangerous to visit. There are indeed malware-vending sites and such out there. But this is a broad brush message that may simply be a response to somebody marking a weblink as an "https" site when the site in question is actually just an "http" site, and that can genuinely just be a typo. The message in the window doesn't mention any of this, it doesn't even point out that this is a common spoofing technique that may cause false positives because the number of sites generally not on https is dropping all the time. Anyway, in the end if you weren't planning to sign in or try to buy something from the site in question, and you have script blocking, chances are pretty good that you'll be okay if you change the start of the link to "http" and head off to visit whatever the page is.

UPDATE 2022-08-07 - Further to the growing disaster that is firefox, which has been completely suborned by the corporate money mozilla should never have accepted, those who like to use the "no DRM" setting should have a read of some news from Techrights via baronhk's rants. Firefox Has DRM Even if You Turn it Off provides some important details about how mozilla has taken to deliberately undermining its own about:config and general user preference systems. The rationalization they are using for this specific example is that supposedly drm is only drm if a proprietary system uses it. Otherwise it isn't drm, it is "content protection." Do read the bugzilla discussion about the clearkey preference setting, it is eye-opening.

I have been watching mozilla with a jaundiced eye for some time now, admittedly. It wasn't always like that. No indeed. It used to be that release notes for new versions of firefox went unread, and I happily allowed auto-updating and didn't mind the times firefox went memory-daft or the period of releases when it crashed at random intervals. Not being tied into lots of web-based tools or on-line gaming, many of the issues firefox could raise for others didn't bother me. The things that did bother me had to do with current systemic problems with the web, not the web browser. (That's an essay/rant for another day – in fact several of them, see On the Trouble With the Web 1, 2, and 3.) Firefox updates didn't break things, mess with my settings, or add third party crap that I didn't want in the first place. It was awesome and decently trustworthy. Then things began to go wrong.

The gutless decision to put DRM into firefox was a deep disappointment, but at least I could block the plug-ins and prevent them from ever running. I didn't like having the new code installed at all, and was angered to discover there was a windows version without the code, but no such alternative for any other platform. Then mozilla announced the pending change to the add-on ecosystem, the very thing that is helping make the web usable at all at this point. That is not a change I am interested in participating in. Then another update introduced pocket, a tool I do not use and do not want, and which, worse yet, was third party code. It took way too much time sorting out how to disable it and purge it from my bookmarks menu – a menu rendered nearly useless by the firefox coders' insertion of useless and undeletable "default folders", because apparently they can't believe that we can sort our bookmarks for ourselves.

Besides my regular work machines, I also run a linux box in order to build more in-depth knowledge of that system and how to administer it. So far I have not tried to alter the default updater settings, generally not finding them to be problematic. After today I will be learning how to adjust those settings, because today among the updates was one to firefox bringing it to version 41. This did not perturb me, because I had read the release notes and was satisfied that this update would not be too problematic. The notes listed security updates, HTML5 related improvements, some image rendering stuff, and various other "improvements." Well guess what. Among those "improvements" is hijacking the creation of new tabs to force-feed you ads via "recommended websites" based on your browsing data. The message that explains this says it's all okay, they'll only hang onto your data for a little while. Meanwhile, they have blown out my about:config settings – they are still there, but apparently overridden by hardcoding elsewhere. It's bad enough how this interferes with my workflow, but to add arrogant insult to injury, a decision has been taken to make it so that folks can't turn this bullshit off. Pissing off power users is not a way to gather support via labour or money for your project, mozilla. It certainly won't improve your browser share.
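For what it is worth, the standard advice is to flip the new tab preferences yourself, either in about:config or in a user.js file in your profile folder, which gets reapplied every time firefox starts. The preference names below are my best recollection of the relevant ones from around the firefox 40/41 releases, so treat them as an educated guess rather than gospel – and as just described, mozilla seems increasingly willing to override such settings anyway.

    browser.newtabpage.enhanced            false    (no "suggested"/sponsored tiles)
    browser.newtabpage.enabled             false    (blank page in new tabs instead)
    browser.newtabpage.directory.source    ""       (don't fetch the tile directory at all)
    browser.newtabpage.directory.ping      ""       (don't report tile interactions back)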

Please note particularly the point about the release notes. They did not include any information about this new tab behaviour.

End result? Besides not updating firefox on my other machines, I will be working out how to roll back the update on my linux box. But this is crazy, because of course I will lose the security updates too, not the best end result. However, here's the thing. Security updates should never be contingent on accepting other, non-critical updates. Period. Security updates are too important to be used to blackmail people into accepting changes to the programs they are using that they do not want, for whatever reason they may not want them. It doesn't matter if all the programmers think that the people using the program should accept the changes because supposedly the changes are "better." Programmers, you can't know that. "Better" is relative except for security and critical updates.

When I switched to firefox, two huge factors played in its favour. First, it was, and is, far faster than safari, which at least on my macs runs like an unattractive pig despite all the tweaking, plist regenerating and all the rest applied. (If safari ran like Miss Piggy, I could reconcile myself to the fact.) Second, it was highly configurable, if that was what you wanted. You could get into the about:config page up to your elbows if you wanted, or never ever look at it. You could use add-ons to change the GUI, and add critical features the sickened condition of the web now requires a browser to have even if they aren't built in. Wonderful stuff.

Now however, the obsession of mozilla seems to be with currying favour with the quixotic but moneyed forces behind DRM, and finding ways to get around ad blocking and privacy protection steps taken by frustrated web surfers. The new tab behaviour is an awful illustration, not just because of the "tiles" aka "web sites inflicted on you based on your browsing habits in new tabs" but especially because it blatantly blocks user settings in order to enforce either tiles or nothing. I will certainly not be making any donations to support this sort of cravenness and destruction of a decent and useful web browser. At this point many writers would recommend a pleasing fork from the mozilla project, but so far I have not been able to find one that has not rolled in the new tab behaviour that causes the worst trouble for me in the first place – with the key caveat that I have not yet had a chance to check SeaMonkey.

Anybody could read this so far, shrug and say, "This doesn't bother me." Which is fair, and of course if it doesn't, it doesn't. (After all, I mentioned above the things that don't bother me!) But if nothing else, there is a bigger issue here that had never come clear to me until today, and I hope readers give it some extra thought. Security updates should never be contingent on accepting other, non-critical updates. They are too important for that. People will refuse to apply updates that break features they prefer for whatever reasons, and that will include security updates if those cannot be applied separately. That contributes to the cesspoolization of the web, and the issue is not the people refusing but the coercive approach to non-critical updates. (Top)

Programmers, Have You Considered What Your Error Messages Say About You? (2015-06-30)

Quoted from the metrostar systems blogpost The UX of Error Messages, august 2016. The bomb logo is of course a classic Susan Kare design. Contrary to the blogger's opinion, this shows Apple used to be better at error messages.

This is a serious question. Since I have some involvement in programming myself, I appreciate that writing error messages is a tough job. It's hard work to shift the ones you create for yourself as a programmer into something of any use to a non-programmer who will be operating the eventual program. This is not news at all; in fact, there have been excellent resources to help with creating better error messages for years, many of them in the guise of teaching how to create effective program interfaces. In a previous thoughtpiece, I reviewed an oldie but a goodie, The Humane Interface by Jef Raskin. Another great example is About Face by Alan Cooper, now in its fourth or fifth edition (I still find the second edition to be the most readable). Sharp-eyed readers will note I have left out its subtitle, and I will explain my reasons for that. So there are books, there is tremendous experience among game programmers who deal with tough and vocal audiences who want their games to work yesterday, plus what are now decades of industry programming experience.

All of that, and I still have to deal with programs that insist on interrupting my work to cry that something has gone wrong. The most recent example I have had to deal with is a skin on top of the local file manager that integrates it with an organization-wide file manager. That means I am never free of this program even though it runs in the background, and its malfunctions can completely destroy my data. The situation is so bad that I save a local copy outside of the main file system and log a second copy into the organization-wide system. When the program crashes, which is at every save, even when I am saving the local copy, it throws up an error message. The message blocks everything else; nothing can be done until it is dismissed. It says "A problem occurred. This program has quit unexpectedly. Restart or find a solution on-line?"

Excuse me? Where the hell did that bullshit euphemism for "crash" come from? Oh, and nice suggestion to search for a solution on-line. Care to give me some freaking information on what went wrong so that there's a point to that exercise? It could be the program is running out of memory, or that it is clashing for some reason with the local file manager or some other program, or it is stuck in an infinite loop somewhere. By the way, those all have sensible, terse, non-technical descriptions: program out of memory, resource already in use, program not responding. Those all tell me something that I can potentially act on, or at least provide half decent information for tech support to start from while I try to persuade the error log to actually dump to email like it is supposed to so I can send it to them. This pile of offal was produced by clever people who presumably knew better, and probably wanted to do better. But evidently the usual reasons for doing better aren't working. So here's a different one.

The error messages in the programs you write say things about you – not about the computer, not about the person operating the computer, you. They can say nice things or mean things. Right now, most error messages say this about programmers: "Programmers are assholes who don't respect your time and think you're stupid because you didn't write your own goddamn program." It doesn't matter if every single programmer in the world is the most wonderful person around who spends days breaking their heads writing code with the exact opposite feelings and beliefs even when they're exhausted and just want the horrible neverending project to go away. This is what it says.

And it makes the people trying to work around the error messages hate that program with a passion. Sure, it's just a program. Until it's the one you have to work with for the majority of your work day, like the file manager that inspired this article and tempted me to beat the office computer to death with the keyboard.

There are simple and practical things that can mitigate this right now. (None of this is original to me, Cooper and Raskin have this covered.) Corny as it may sound when you read it in the illustrated message, "Sorry, an error occurred / this program has crashed." is an excellent start to an error message. Provide some basic information on what went wrong to the extent you can for the person operating the program, dump an error log as a text file on the desktop, and allow the user to retrieve their data in some way. In other words, it should be possible to tuck the error message out of the way and perform triage, or the program itself should include its own data recovery. These things have already been implemented multiple times, so there should be some reusable code out there.
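To make the triage idea concrete, here is a minimal sketch in python of the pattern just described – a plain-language message, a log dumped where the operator can actually find it, and a recovery copy of their work. The function names and file locations are made up for illustration, not taken from any particular program.

    # A minimal sketch of crash triage, assuming the operator's work can be
    # represented as text; names and paths here are illustrative only.
    import os
    import traceback

    def pick_dump_folder():
        # prefer the desktop so the operator can find the files easily,
        # fall back to the home folder if there is no desktop directory
        desktop = os.path.join(os.path.expanduser("~"), "Desktop")
        return desktop if os.path.isdir(desktop) else os.path.expanduser("~")

    def run_with_triage(do_work, working_data):
        try:
            return do_work(working_data)
        except Exception:
            folder = pick_dump_folder()
            log_path = os.path.join(folder, "error-log.txt")
            recovery_path = os.path.join(folder, "recovered-work.txt")
            # dump the technical details for support, out of the operator's way
            with open(log_path, "w", encoding="utf-8") as log_file:
                log_file.write(traceback.format_exc())
            # save whatever the operator was working on so nothing is lost outright
            with open(recovery_path, "w", encoding="utf-8") as recovery_file:
                recovery_file.write(working_data)
            # plain language, no euphemisms, and something the operator can act on
            print("Sorry, an error occurred: this program has crashed.")
            print("Your work was saved to " + recovery_path)
            print("A technical log was saved to " + log_path + "; please send it to support.")

    # example use: run_with_triage(lambda text: text.upper(), "the work in progress")

Obviously a real program would use its own dialogue box rather than printing to a console, and would save work in its native format, but the shape of the thing is the point.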

Another thing I would suggest is a change of language. Instead of calling people who operate computers "users," call them "operators." "User" is a poisoned word that has gathered connotations of addiction, stupidity, and lack of agency. It's even harder to write good error messages if the undercurrent in your mind, regardless of whether you believe it, says "the people I am writing this for are stupid, incapable, and unable to choose some other option anyway." "Operator" gets around this courtesy of its connotations of agency and skill. Then the undercurrent is rather better: "Operators are busy, clever people who have delegated the nitty-gritty engineering of their tools to you, the expert programmer, so they can focus on their own jobs. They need to be able to work around the effects of errors and crashes until the underlying problem can be fixed." Such a reframing won't make error message writing an easier job or one programmers are eager to do, but I can vouch for it easing the drudgery of the task.

But even if that last idea would never, ever appeal to you, seriously, ask yourself this question programmers, please. What do your error messages say about you, and is that what you would want them to say? (Top)

The Millennial Blues? (2015-05-14)

According to my entirely non-scientific survey of internet sources, millennials don't care much for flannel. The appalling marketing image is quoted from Levi-Strauss of a remarkably awful plaid shirt, circa 2015.

Practically speaking, I realize that the "generation" labels that we have all heard about ad nauseam (i.e. baby boomers, generation x, the millennials of the title) are in fact stupid stereotypes developed for the sake of marketing. The money windfall of the Beatles generation is something that the greedy and the gullible have wanted to recapture ever since, despite the truth of the cliché that you can't catch lightning in a bottle. The terms get used in many places as a "snappier" shorthand for the actual group of people that so much effort is being spent on stereotyping. Admittedly, "children born between the years 1945 and 1955" doesn't sound quite as personable as "baby boomers." Unfortunately, these labels are a bit vague about how many years they encompass. So generation x for instance, apparently may include people born between the years 1975 and 1980 – I think – and they would have grown up with things like vcrs and the first microwaves. Meanwhile, the recent people of the moment, the millennials, grew up with computers and maybe saw a dead vcr in the basement or something. The persistent tie between specific products and the group in question is the giveaway that the term was developed for marketing, and also explains the year vagueness. This point leapt out at me finally upon my discovery that apparently generation x has an affinity for flannel that the millennials do not. (I won't shame the source by naming them, it was too funny to read a description of "gen x'ers" that seemed to be based solely on Kurt Cobain in his happier days.)

What led me to look up these terms were news articles referencing the troubles of millennials, who have been having a bad time on entry into the workforce, if they can get into the workforce at all. It seems everybody is finding fault with them because supposedly they all have tattoos, want time out for naps, and all the rest of the unsourced anecdotes implying they are immature babies. Apparently I count as a generation x person, and supposedly should agree with the calumnies cast on millennials. I don't, having been in the spot a lot of them now face: trying to get into the workforce, get paid enough to live in my own place, and be treated as a human being. I had student loan debt up to my eyeballs, the job market was atrocious, and the level of hostility to anybody without enough age to have ten years' experience in the job already was pretty stunning. I literally walked into interviews, all gussied up just like you're supposed to be, best foot forward, shake hands, answer the questions, be professional – only to be told my appearance didn't fit the corporate culture, or that they were just finishing up the day by interviewing one more person and they hadn't even read my resumé. (Yes, an interviewer told me that to my face.)

This is the regular experience of a person of colour, let alone a woman of colour, let alone a gay woman of colour who cannot pass as straight. Oh, did I mention above that all those generation terms are by definition for white people? Funny that. So, millennial folks, welcome to a sliver of my world. There could be a silver lining to this, though. This could be the best social consciousness training a generation of young people has ever had.

It gives a whole new perspective on the current drive to criminalize youth, doesn't it? (Top)

We Have "Smart" and Even "Self-Driving" Vehicles Already (2015-03-22)

November 2012 image of a purported self-driving car quoted from Steve Jurvetson's flickr under Creative Commons Attribution 2.0 Generic License, 2015. It may also be viewed at wikimedia commons.

No doubt there are many people who could read this title and say, "Er, no, that's not true. If it were, would google and various car companies be spending money on this? And we need self-driving vehicles, look at the horrible state of the traffic, the accidents. It's carnage out there." I always look twice at any abstract claim that "we need" something, as I do at any claim that "somebody should do something." The latter often masks "I should do something, but I won't"; the former, "I want to make you buy something you don't want or need." Note the modifier "often" though; it isn't impossible for these two introductory statements to be perfectly honest, though ill-chosen. Nevertheless, the question of self-driving cars left me bemused. I simply could not see the point. What problem(s) are self-driving cars supposed to solve? I could not get over the stubborn feeling there was a bad bill of goods being sold. Yet the efforts to build such cars did seem sincere enough.

After all, the problem of writing a computer program and coordinating enough sensors on a car to make it able to be driven automatically is not easy. To do that people have to write enough computer code and compile enough basic data for the computer to check in order to perform the same tasks people do using their eyes, ears, and proprioceptive senses. The "problem" of computer vision is nowhere near solved, and as soon as matters get to questions of mimicking bodily senses beyond those of the eyes and ears in software, things get even more difficult. The fact that it turned out google's famous self-driving car couldn't cope with changes to the smaller scale landmarks along its customary route reiterates the point. But this surprises no one; that a problem is hard is not a reason not to attempt it.

Except, this problem actually isn't hard. And it's solved. By means of those vehicles better known as streetcars and trains, most of which are driven by humans, though some trains are already driven by computers. The problem is vastly simplified in the case of both of these types of vehicles. They have established and permanent tracks. People can be taught how to interact with them to minimize safety issues, and the safety problems of getting on and off of them have been repeatedly solved in sensible and effective ways. (Where they haven't been, inappropriate pennypinching is usually the cause.) The routes for streetcars and similar can be adjusted relatively easily, though not trivially. These types of vehicles work best when they are given traffic light priority, and this is not difficult. So actually, the "problem" above is not the "problem" as usually presented.

The "problem" the self-driving car proponents are trying to solve, is how to fix the problems created by the insistence that every person should have a car. And note what they're actually trying to solve has nothing to do with the other corrosive effects of a car oriented culture which people would like solved (we already know how to solve those too). So these folks want to make it so everyone can have a car no matter what, and make the best of the concomitant commuter time penalty by having the car drive itself while the person who once would have had to drive works, or sleeps, or reads. Who am I kidding? Of course that person will be expected to work.

In effect, the self-driving car project is all about further supporting a car culture that we already know is unsustainable on multiple levels, in order to allow for more "productivity" that is itself unsustainable, applying technology in a way that is antithetical to the libertarian principles most boosters of the project hold. This is the other place where I go, wait, hang on. Just who is expected to switch to self-driving cars, should they become possible in this absurd individualized form instead of the mass transit version that we already know does the job and does it better? Are Larry Page or Sergey Brin going to sit in one of these cars while they get driven around? (Okay, you're right, Sergey Brin is an unfair example, he already thinks his cell phone is emasculating him.) Maybe a lot of these boosters are rich and have chauffeurs so the stretch isn't too far. Except people are already talking about how human-driven cars and self-driving cars can't co-exist on the road, because the self-driving cars will be so much better than stupid and slow people. Well, who actually drives most of the cars? And who has been especially sold the idea that driving and having your own car is a marker of freedom and adulthood?

To be clear, I don't think the self-driving car boosters are out to end human driving by everyone but them or anything like that. They are following the logical requirements of their apparent belief that every person must be allowed to have a car or two to drive around in, but that people are stupid and computers are smart. The latter point is often prettied up with "computers are more accurate and they don't get tired" which is of no help when they crash or otherwise fry your data. It's logical, and it's well-meaning if we take the safety rationalizations that get trotted out seriously. However, self-driving cars are only logical within a closed system of thought that refuses to acknowledge the existence and effectiveness of trains and streetcars and what we have already learned about the consequences of prioritizing the use of single family cars over all other forms of vehicle. And that's just to start, just to pretend that a problem hasn't been solved when actually, it already has. (Top)

The Bittersweet Joys of Fanfic (2015-03-06)

Number 13 from the Battle On! comic series by Jett (Jeanette Atwood), quoted from Tom's Xena Page, which remarkably is still online. So far it looks like Jeanette Atwood's portfolio site is down but can still be viewed on the wayback machine.

In a recent bout of procrastinating, I spent some time hunting up X:WP fan fiction – which of course means not much time hunting, because there is still quite a lot out there. On the sites where people can post comments, it has been interesting to see people complaining that hardly anyone writes fan fiction anymore. On one hand, I share their sense of frustration. On the other hand, I can't crab about it, because I retired firmly from the X:WP fan fiction game despite people asking me for "loose ends" to be tied up. (The trouble with "loose ends" is that nobody identifies the same ones.) Fan fiction writing can be a lot of fun, so it might seem a bit strange to walk away from it, especially the X:WP-tied version. For a while there, the show had a lot of inspirational power for fans, and it is the only show I ever felt moved to write any fan fiction for. So what changed?

A significant part of what changed was of course, the show itself. The combination of "creative" decisions that amounted to rank cowardice basically gutted the experience of watching it. No folks, a pseudo-christianity arc plus cultural appropriation, and the usual homophobic storylines applied to lesbian characters do not a pleasant watching experience make. By the end, it really felt as if a decision had been taken to break the toy because Rob Tappert and Sam Raimi didn't want to share anymore. This was such a shame, and don't get me wrong, as a writer myself I realize sometimes sharing is hard, although it helps tremendously to remember that stories aren't things that run out, but wonderful webs of ideas that keep growing.

Once the show's magic was snuffed, quite a few people tried to write "corrective fan fiction." Not an easy remit, but personally I never had the heart for it. And by then, my experience as a fanfic writer had developed in unexpected directions. For instance, the level of vitriol I experienced for opting to write in a parallel xenaverse still boggles my mind, especially the denunciations of any "new character creation." The same people who reacted with disdain if I created a character that didn't correspond to graeco-roman mythology somewhere seemed unaware that Celesta is a modern invention; there's nothing graeco-roman about her except her name. Since I was writing fanfic to have fun with the characters and try out different stories without worrying about creating a new fictional world, I was taken aback. Probably I was naïve, not least because I got to the party late – the xenaverse had already been in existence for three years or so. It's never easy to join a party that has already started.

UPDATE 2016-01-01 - I'm happy to add some new information here to reflect that fan fiction communities in general certainly do not all equate with my experience. In fact, it was my growing knowledge of how three dimensional and wonderful such a community can be once you find a solid footing in one that made my disappointment so acute when this particular iteration failed. A thoughtful overview of the history of fans on-line is provided at Fanlore.org, and if you want to read (or listen to) the perspective of an outsider who learned to respect fans, have a read of The Fan is a Tool-Using Animal by Maciej Cegłowski.

In the end, even that didn't chase me out of writing fanfic. What did probably affects a lot of writers, especially if the Royal Academy of Bards bard feedback month is anything to go by. What finally knocked me down was the indifference. Writing is something writers do because for them, writing is like breathing. So I continue to write. But fanfic is different from other forms of writing because it is a communal experience, it is about interacting with readers who are often also writers of fanfic themselves. (If you don't believe me, search for "Harry Potter" and "fanfic", you'll be gobsmacked. Then check out star trek fanfic. You'll be more gobsmacked.) When the community is no longer present, or is unwelcoming, the fanfic cycle coughs and dies.

The "communal experience" aspect is both a feature and a bug. It's a great feature, hands down. The trouble is, communities both firmspace and virtual tend to struggle with diversity issues as soon as they begin to come together around a significant pop culture phenomenon. As soon as the participants begin to include more than the usual mainstream audience, things get harder. The more mainstream folks respond with bewilderment and outrage to different takes on "their" stuff that non-mainstream folks notice or prefer to feature. If this sounds a lot like my view of Tappert and Raimi's response above, that's no coincidence. It's not surprising, and it is tough for people to parse these issues out successfully in firmspace, let alone on-line. The end result for me at least, is that I found myself without a fanfic community to participate in, and so like many others, I hung up my fanfic quill and moved on to other things. Sometimes you have to let these things go. (Top)

The Trouble With The Hobbit (2015-02-22)

Cover snap of the special edition sound track for Hobbit III, quoted from Watertower Music, february 2015.

It is distantly possible that those of you who read things here on occasion may have had a look at my essay drawing together my thoughts about J.R.R. Tolkien's Lord of the Rings (LOTR) and its ancillary works, "Myth, Not Allegory." If you have, you may have noticed that despite all the things that irritate and frustrate me terribly about Tolkien's secondary world, evidently I gobble up most of what I can get my hands on related to it nonetheless and enjoy much of it. In fact, that's a significant reason I find the frustrating bits so frustrating, precisely because I find Tolkien's imaginary so wonderful. All that said, I expect movie adaptations to fall far short, not because movies can't be as good as books (a claim I think is nonsense) but because movie adaptations must inevitably run painfully aground on the difference between what the filmmakers imagine, and what readers imagined in their heads – unless the filmmakers have worked as hard on their visual realization of the secondary world as the original author did on the written version. Then, no matter how differently they imagined it, the filmmakers will have something quite interesting going on.

So I was utterly delighted with Peter Jackson et al.'s take on LOTR, for the most part. There were troubling bits, and a hysterically funny misplaced cut in the original theatrical release of The Two Towers that threw my friends and me out of Jackson's Middle Earth for the rest of the movie, but those were minor issues really. And we still enjoyed The Two Towers, although I fear we irritated the people around us mightily with our giggling through the rest of the show. It has never bothered me that the movies were long, or that the extended editions were even longer (I have them all and love the documentaries). And when I heard that The Hobbit (TH), my least favourite book by Tolkien, was being movie-fied and extended into a trilogy, that didn't bother me either. People seem to have forgotten that mega-movies were actually the rule, not the exception, not so long ago, and I have no sympathy for the people who whine that "two hours is too long." These are the same people on average who extolled Avatar, a movie I don't have enough "no" for in my entire body.

But, I knew there was a big risk factor with TH, and this was its origin as a bedtime story. If Jackson, Boyens, and Walsh weren't going to do something akin to filling the story with muppets, they were going to have a rough time wrestling the story fully into the LOTR universe they had already created. They were going to have to resist Jackson's weird obsession with gory slapstick and just plain slapstick in the wrong places, and that would be hard because of how the dwarves and hobbits were originally handled. They were going to have to root out the childish "and Gandalf saves the day!" structure of the major episodes. That's definitely hard, and demands a genuine rewrite away from TH to The Red Book of Westmarch. And they would need to remember what worked ten years before: easing the audience in through the silliness of the initially childish hobbit worldview, which is modified as said hobbit grows up. Tough stuff. The Rankin/Bass cartoon came quite close, even though Gollum was such an epic fail, and Bilbo looked – strange. So I expected to watch beautifully photographed movies, with probably too much slapstick and uneven storytelling due to the source material and the constraints of being post-LOTR Jackson et al. versus pre. And my expectations were met, in the saddest way. (Another person worth reading about this is the Disgruntled Haradrim, Phenderson Djèlí Clark in A is for Armies: An End to Hobbit Quests and Prequels.)

It wasn't until the third instalment of TH that I finally put my finger on what was really wrong. Inappropriate humour wasn't the worst of it. Failure to treat the dwarves and the elves as serious characters, especially the dwarves, did terrible violence to the story. These dwarves are on an insane, officially noble quest, but most of the time behave like ill-grown frat boys, and when they aren't overeating they're busy arguing like children with the suddenly childish elves. That Thranduil rules "lesser elves" fixes nothing – they aren't child-elves, they are lesser in terms of their contact with, and obedience or lack thereof to, the deities of Tolkien's secondary world. Those were clangers, so I sat back to enjoy the filming even if I found it weird to be able to see the pores in Martin Freeman's nose. Still, clangers, whatever. However, in the third movie, it hit me during the big boss battle between Thorin and whatever the bad guy orc's name was – oh, Azog the Defiler, right.

Video games do not make good movies. A good movie can start from a video game based premise, but the timing and filming requirements don't match. By nature, action video games especially demand a series of excitement highs, one for each level completed. And they demand a player's direct participation. This is what the filmmakers were doing with all three movies in TH, which forced them to artificially isolate characters to create the big clash, the big chase, the final boss fight. It looked stupid outside of its proper place, and I literally felt like I was watching somebody else play through a video game in front of me. Now, this is terrible news. It means that the style of effects and filmmaking appropriate to video games is leaking into where it doesn't belong, where it made the weaknesses of the movies it affected even worse. This is not a new effect either; it was the bane of the two follow-up movies to the original Matrix, gorgeous looking but meaningless and all about looking like a video game, since they were literally developed to tie in with video games.

This is a real shame, and it is hard to say how long this pseudo video game style of film making will persist, or how much more it will creep accidentally into other films. In the meantime, it will probably do harm to many more movies before the fad has run its course. (Top)

Greatly Exaggerated (2015-02-07)

Image of 'Dr. J. Devman's Lesson in Anatomy' from page 112 of Rembrandt: His Life, His Work, His Time, published in 1894, courtesy of the Internet Archive, 2015.

All clichés can be repurposed depending on context and time period, and so it is that today we can talk about how the internet giveth and the internet taketh away. But I must confess it took until a few days ago for me to realize just how hard specific governments and companies are trying to convince us of a particular piece of bullshit by repeating it ad nauseam in hopes of getting everyone to parrot it without thinking – especially young people presumed to be too naïve or too dumb to understand what is happening. The piece of bullshit in question is the declaration that "privacy is dead." All such declarations should be dismissed on their face, by the way. Anything somebody in the media declares dead is most emphatically alive, and the claim for its demise has an ulterior motive. In the case of privacy, the motive is to allow it to be killed in the first place.

An interesting summary of the history of claims about the death of privacy, and of the parties who most want the population at large to go along with them, appears in an article at No Moods, Ads or Cutesy Fucking Icons. Leaving aside the problematic "slippery slope" argument (it doesn't do the job everybody who uses it wishes it would), Peter Watts lays out an excellent way of bullshit detecting for statements of this kind, if you don't find my claim about their general falsehood convincing. In paraphrase, he tests the argument for increased government surveillance and the supposedly moribund state of privacy by moving it to firmspace – the real world where we human beings do the sort of stuff computers will probably never be designed to do (by which I mean digestion and its correlates, deliberately bathing in water, etc.).

The rumours of privacy's death are exaggerated, and an attempt to force people to give something up because "the cool kids" say so, or because of unreasoning fear triggered by governments and paramilitary organizations. I think Peter Watts is bang on when he points out privacy must be far from dead, or this ongoing propaganda blitz would hardly be necessary. The motivations for the blitz are also quite clear. There are few governments right now not engaged in active efforts to reinstate the totalitarianism characteristic of much of recent human history. Totalitarian-leaning folks hate privacy, and anything else allowing people other than themselves to make decisions. The paramilitary outfits expect to be hired on by the totalitarian folks. And the various "social media" companies are interested in continuing their colonization of individuals in one last grand push to create what they hope will finally be the advertisement that never stops working and profit without end.

Mind you, these folks don't mean privacy is dead, dead. Or at least, not dead, dead for them. No indeed. Privacy is supposed to be dead except for whoever can afford to do things like buy all the houses around their chosen mansion and evict all the occupants in order to ensure people can't peek in their windows or see them pulling dandelions in their yard. To them, privacy is a luxury item and it, too, should be monetized. The only people who actually "deserve" privacy then, are the usual suspects by this logic: the rich, the infamous one percent. Funny that.

With all these things said, it is heartening indeed to see how few people are actually falling for the line of bullshit, and how fiercely people are fighting to keep their privacy. It will be interesting to see whether the whole discussion of on-line and firmspace privacy leads to a reconsideration of the actual utility of "the cloud" and even the current implementation of social media itself. (Top)

Poetic Musings (2015-01-29)

Woodcut illustration of Christine de Pisan, reproduced in Illustrated History of Furniture From the Earliest to the Present Time (1890) by Frederick Litchfield to show the chair and desk. Public domain image courtesy of wikimedia commons.

In September of 2010, Kristen Hoggett wrote an intriguing essay on The Smart Set, called "Music to a Poet's Ear," in defence of the right of lyricists to call themselves poets. The essay is well written and definitely worth the time it takes to read it. Yet it is also a bit strange to read, especially if you're both a classicist and someone who has a musical way of thinking, which Hoggett herself acknowledges early on that she does not. Her conceptualization of what a poet's career consists of struck me as particularly surreal, but perhaps this is because my own vocation is not to be a poet specifically, although writing poetry is certainly something I work on. But let's leave those sorts of things aside, and consider the actual origins of poetry. The truth is, poetry is music, it was born from music, and for all of us, budding poets or not, never to have been taught this in the awful mandatory english classes we suffered through in school is a real disservice.

UPDATE 2019-11-02 - I have finally stumbled upon Maria Popova's annotated reading of Muriel Rukeyser's The Life of Poetry, which is as worth the time spent reading together with Rukeyser's book as any of the vast corpus of Brainpickings posts. This one is especially striking on several levels, because Popova notes that to begin with she did not find poetry engaging, this as someone who did not grow up in any north american education system. She draws on Rukeyser's observations about the mistrust of emotion in united states cultures, which I think we could plausibly link with the heritage of protestant sects, including the puritans who decried "excessive emotion" and the various revivalist-connected ones that encouraged emotional extremes during "conversions." I am also reminded of Ursula K. Le Guin's essay, "Why are Americans Afraid of Dragons?" and her consideration of how mistrust of "escapism" and "fantasy" reflects mistrust of free thinking. To respond emotionally to poetry also means thinking; emotion and thought are not separate, and freed emotion can well go along with freed thought. Hmmm.

If we look into a standard english dictionary for a definition of "poetry" in the literal sense of what sort of special words it is, what we find is interesting indeed. According to my electronic OED, it is "literary work in which special intensity is given to the expression of feelings and ideas by the use of distinctive style and rhythm." I am inclined to disagree with the "literary" part because that inappropriately denies the poetry status of works not written down, and that simply doesn't make sense. "Style" isn't very helpful either; it isn't specific enough to mean anything much. Does style mean fast or slow? With an efflorescence of adjectives? But those things are not exclusive to poetry at all. Let's modify the definition a little to be clear about what I at least – your own mileage may vary of course – consider to be the required features of "poetry."

Poetry is a work, whether written or spoken, in which special intensity is given to the expression of feelings and ideas by the use of distinctive rhythm. This definition has some nice properties. It doesn't insist all poetry must rhyme, for instance. It can have a distinctive beat like blank verse, or perhaps be structured by alliteration as in old english poetry. It allows for sound to be used to create poetry by pitch variation, as in the case of the homeric epics before ancient greek shifted from a pitch to a stress accent. It even allows for "special intensity" to be created by means we may not realise are poetry at first, as long as there is a distinctive rhythm. So free verse is included: even if its metrical scheme is not regular, it makes strategic use of repetition of some kind for rhythm. And notice, like the electronic OED we aren't bogging down in quality assessments. It is quite true some poetry may be considered good or bad by everyone, but this in itself doesn't tell us whether it is poetry or not.

Turning back to Hoggett's essay, one of the great puzzlers for me is this snippet: "After the advent of writing and books, songs were increasingly brought into the public sphere as they were sung aloud, embracing end rhyme and repetition to aid memorization." Except, rhyme and repetition did not come into poetry when it began to be written down at all. The oldest poetry available to people today to read, itself derived from a lengthy preceding oral tradition, includes plenty of rhyme and repetition. The rhymes were often internal rather than final, in part because many of the languages this poetry was composed in are highly inflected, so their endings are sharply limited by grammatical rules. There is clear evidence, from these ancient works and from anthropology, that poetry began as a special mode of speech compatible with dancing and drumming, and therefore that its beat and repetition reflect its origins in music.

Hoggett points out that the insistent rhymes of the song lyrics she discusses could be marked as a fault, as could the "sing-song" tone of the Joni Mitchell song she discusses. I know of what she speaks, because in my own upper level english classes these things were fiercely derided as "childish" and Hoggett comments that these make the song sound somewhat like nursery rhymes. However, this is not because these techniques are childish, it is because they have been relegated to poetry directed at children, no more and no less. Frankly, I defy anyone to find the distinctive beat of Shakespeare's sonnets or his plays "childish." Yet this is exactly the sort of thing where thinking of poetry as a form of speech much closer to singing and music than ordinary speech, of applying a musical mind, is most important. Taking a musical perspective can help us overcome prejudices we have been trained to have, and I dare say, enjoy poetry more, and for those of us who write poetry, to write better.

A repeated rhyme, even a short refrain, is more than just insistent. Each of these serves as the verbal equivalent of a musical tonic. The "tonic" is the home note of a given piece of music, the sound a musical piece systematically returns to. Part of the joy of listening to music, even if we have no idea about such things, is the anticipation as the music moves away from the tonic, and then makes its way back in ways that may completely astonish us. This can go wrong in music, and indeed it can in poetry. The rhyme can be repeated just a bit too often, the refrain recurs one or two too many times. In this, poetry is like comedy: you need good timing, and you need to know when to stop.

But what happened to the perceptions of so many english teachers, that rhyme and repetition have been so maligned and their actual origins in poetry so misunderstood? Well, probably the difficulty of following the letter of the law, but not the spirit. If a student is feeling tired or unmotivated, it is very easy to fall back on mechanically putting together rhymes or following a metrical scheme, with mechanical results. These ways of writing need to be worked with and internalized for a writer to be able to deploy them smoothly with the words, as opposed to using them as disciplinary devices on unruly syllables. It is understandable that english teachers would want to curb the amount of disengaged poetry they might otherwise get by pressing students who may not be writing much poetry anyway to avoid these devices. The habit passes easily into advising students who intend to keep writing poetry against using them, and avoiding them can become an internalized rule.

Yet think how different and accessible poetry seems, when we take repetition and rhyme as the tonic words of a poem, and its beats as rhythm. Then it becomes unsurprising that poetry may not rhyme, or that it really is best experienced by reading it aloud as well as silently. We can get to know it by getting a feel for its sound and shape just as we do with an unfamiliar song, and after we have savoured it a few times, begin to take in its words more deeply. Its imagery can begin to bloom in both our mind's ear and our mind's eye. Or at least, it will have a chance to. Just like music on the radio, some of it will never appeal to us, and this won't lead us to denouncing poetry utterly. Instead, we will have analogous expectations to those we have of music.

I suspect one of the most difficult challenges for poets today is performing. The earliest poets we can gather information about from written records were performers, just as magicians are. They expected and were expected to go out and sing or chant their works to an audience, and this could be a primary means for them to make a living, or it could be something that served as a sort of part-time occupation. Lyricists who perform their own songs have to overcome a fear of public speaking and performance. I have the impression, and it may be incorrect, that "real" poets don't expect to have to "perform" except insofar as they have to teach to pay the bills. This is sad indeed, because this is the ultimate reason that "literary" poetry has become less accessible to a broader audience: "literary" poets are unwilling to perform their poetry, to not merely read it off the flat page, but live the words to a general audience. The difference is once again between the letter of the law and the spirit of the law. Or, to take a more homely metaphor, the difference between watching the game on television, and being in the stands experiencing it live. (Top)

A Euro Sign (2015-01-20)

One more for the unusual sign collection, and didn't the colour of the foliage come out beautifully too? C. Osborne, may 2012.

No doubt someone out there knows exactly what this little sign is about, although I am certainly not that someone. Part of what struck me about it, besides the strange but consistent font and the fact that 47 is a prime number, is how it was attached to the tree with no less than a round-topped Robertson screw. Somebody was seriously determined that this little sign wasn't going anywhere. This doesn't seem terribly interesting, until you try throwing the number 47 into your search engine of choice. (If you try looking up the euro currency you'll get a combination of "histories of" and "current crises of" sort of results.)

According to wikipedia, J.J. Abrams has a strange fondness for the number 47 and likes to put references to it in his various movie and television series projects. It seems he is following a precedent set during Star Trek: The Next Generation. Given the science fiction connection, I can't help but wonder if there is an additional layer of reference to The Hitchhiker's Guide to the Galaxy and its sequels by Douglas Adams. The Answer to Life the Universe and Everything is 42, so 47 could be an appropriately geeky sort of homage.

Developing a preference for a particular number or small subset of numbers may be a human universal. Both archaeological data and the historical record reveal a plethora of examples of a small set of digits and their multiples, usually up to no more than 4 times. The top hits seem to be 3, 4, 7, and 9. Tolkien fans will recognize all four digits marking out the numbers of important characters and objects, especially 3 and its multiples. They also turn up quite frequently in the judaeo-christian corpus of writings. Another place they are common is telephone numbers. If you seek deliberate frustration, try to get a phone number nowadays with more than two of those digits in it – it's just about impossible, due to the ever-increasing number of cell phones. (The things people who activate cell phones could tell about number preferences would probably be really interesting.)

Phone numbers aside, certainly one of the most popular numbers in recent works has been the ever-intriguing number π. Vi Hart has strong opinions about the number π, hilariously and instructively expressed in her many math videos on it (and once you've watched those watch the rest, especially her video about logarithms if you've ever struggled with them). I think she has a point: π isn't that special in the general sense, especially now that all manner of aberrant and strange numbers are part of what scholars know. Yet, π is sort of special, in at least a historical sort of way.

The famous Syracusan engineer Archimedes, besides contributing to his city's resistance to roman colonization and working on various other technical problems, also performed mathematical research. This was no trivial achievement, since he was working without the place value system or the standard symbols commonly used today, and had to describe his work in words. (Never mind he had none of the electronic helpers we have now to help us with tedious calculations.) In the course of this research, he used infinitesimals to calculate two numbers bracketing the value of π, using the pythagorean theorem (which wasn't discovered by Pythagoras at all) and two 96-sided polygons. The technique Archimedes used is appropriately called the method of exhaustion, although the name isn't meant to describe the calculator's weariness. In taking all this effort to estimate π, Archimedes also began working out the techniques of integral calculus.
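For the curious, the polygon doubling can be restated compactly in modern terms. The sketch below, in python purely for convenience, starts from regular hexagons inside and outside a circle of radius one and doubles the number of sides four times to reach the 96-sided polygons; the recurrences are the standard modern restatement of the method, not a transcription of anything Archimedes actually wrote.

    # A sketch of Archimedes-style bounds on pi in modern floating point;
    # upper and lower are the half-perimeters of the circumscribed and
    # inscribed regular polygons around and inside a circle of radius 1.
    import math

    def polygon_bounds(doublings=4):
        upper = 2 * math.sqrt(3)   # circumscribed hexagon half-perimeter
        lower = 3.0                # inscribed hexagon half-perimeter
        sides = 6
        for _ in range(doublings):
            sides *= 2
            upper = 2 * upper * lower / (upper + lower)   # harmonic mean gives the new upper bound
            lower = math.sqrt(upper * lower)              # geometric mean gives the new lower bound
        return sides, lower, upper

    sides, lower, upper = polygon_bounds()
    print(sides, lower, upper)   # 96-gons give roughly 3.1410 < pi < 3.1427

Run as-is, this lands a whisker inside Archimedes' own 223/71 < π < 22/7, since floating point doesn't have to round its square roots as conservatively as he did by hand. (Top)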

Tea is the Best Medicine (2015-01-12)

A happy find at a now tragically closed tea shop, a sign reading 'Keep calm and drink tea.' C. Osborne, february 2013.

I know, I know, there is a whole, gigantic posse of you out there who will swear by coffee until there's no such substance left in the world. However, for those of us who find coffee smells far better than it tastes and have never been able to successfully retrain our palates to believe otherwise (even if we have successfully done this for beer), tea must stand for all. I have never had a person who loves coffee tell me the same thing about tea, oddly enough. In fact, confirmed coffee drinkers have never told me they find tea unpalatable, just that they don't like it nearly as much, which is quite interesting. Then of course, there are the folks who like both tea and coffee, each in their own place. One of my colleagues swears by his morning coffee and his afternoon green tea. Still, given my druthers, except for a very specific type of sour stomach caused by too much sugar at one sitting, tea is definitely the best medicine.

UPDATE 2018-05-22 - Not that I am unwilling to learn more about coffee, if only in self defence when I can't get any decent tea. It has been interesting to learn about which coffees taste worst at least according to me, and which ones should be drunk as quickly as possible because they are disgusting cold.

It's true that many "medicinal" uses are really about taking advantage of memories of comfort and happy moments associated with it. Hence the post-crisis or other awful experience tea, or the warming tea after a bout of coping with cold weather outside. But tea actually has some real medicinal uses, by which I mean real tea, not what are usually labelled "herbal teas." Herbal teas often do what they promise by the way, for example the ginger turmeric tea I have been trying out because it is supposed to be helpful with certain types of intractable joint pain. Since I like ginger just about anything, it would be no loss to have this nice, non-caffeinated tea to drink even if it did nothing for the pain. Mirabile dictu, the stuff works as claimed. However, that still makes it a steeped drink, not a real tea.

"Real tea" can always be identified, if by nothing else, by its caffeine content. Said content derives from the Camellia sinensis bush native to the most eastern regions of asia, most famously china. To begin with, it was specifically a medicinal drink for improving energy and warming a person up. This follows logically enough from a drink brewed with hot water that contains a stimulant chemical. As all of us know, there is no such thing as a stimulant chemical that remained solely under the control of doctors, and so tea escaped to become first a drink for the rich, who surrounded the drinking with layers of elaborate ceremony, and later a drink potentially though not always accessible to anyone.

Yet there are a few other medically related things tea can be used for, though of course, this is not medical advice, and supposing you tried any of these and your symptoms do not subside, get thee to a doctor post-haste.

Among the most popular tea remedies are those that make use of used tea bags while they are still warm, but not so hot that they are uncomfortable on the skin or raise a red flush. They are excellent compresses for cuts or scrapes because they bring down the swelling and draw out dirt. They also staunch bleeding, and so are a real relief should you have the ill fortune of needing a tooth pulled. But suppose you have a wider area issue, such as a sunburn? In this case some lukewarm black or even green tea sponged on the burn will help via the same properties, though take it very easy if you have any blisters to cope with – in that case you may want to err on the side of caution and let the tea get basically cold. Suppose instead you have a plaguey headache, the kind usually related to tension or stress, and you'd like to try something other than an aspirin or other pain reliever? Well, first try drinking a big glass of water and waiting 15 minutes – you'd be surprised how often a headache actually means you're thirsty. Then, if that doesn't work, try a small cup of black tea flavoured with cinnamon. If you want less caffeine, make more tea and share it with a friend, giving them the first cup, as that will have the most caffeine in it. Cinnamon is a mild pain reliever, and you'll be using the caffeine to get it around your body faster, as indeed pretty much every pain reliever except possibly aspirin does. (Top)

On Obstreperous Signage (2015-01-07)

Calgary Stampede isn't for everyone as this non-conforming notice board indicates – and probably not for any of the animals at all. C. Osborne, march 2012.

Current residents and former denizens of calgary, alberta are perhaps less inclined to have a good long chuckle at this sign, but it struck me as a breath of fresh air. Alas, the restaurateurs ultimately decided to pack up and move to the british columbia interior, which, contrary to what you might think, is not necessarily a way to get away from rodeos. (There is quite a ranching community out there, actually.) However, I think the weather and the rent may be a bit easier to cope with where they went. In any case, there is a strong general sense in any place that hosts a major tourist-drawing event that said event is great for the place and especially for the businesses. All those outsiders coming in, spending lots of money, and subjecting themselves to mass-produced white cowboy hats – sounds like a mostly positive experience, doesn't it? But my point here is not actually to comment on the calgary stampede, especially since if you're interested the event's own website and a good search engine will serve you better. Instead, my purpose is to praise the obstreperous sign of the diverse but still not obnoxious type. This is not such an easy line to keep to, the one where the writer manages to appropriately gore an ox or two without slipping into lazy and unacceptable "isms."

Photograph of a fascinatingly stranded construction sign. C. Osborne, march 2012.

Mind you, a sign can be obstreperous* by other means. One of my continuing favourites is this little fellow to the right, featured in a previous thoughtpiece some time ago. Yes, what probably happened was some goofy teenagers chucked it in the river and it washed up on a sandbar. But I can't help but prefer imagining the sign deciding that standing around beside the latest bit of framework for concrete that couldn't be poured before the snow fell was lousy and boring. So one evening in the quiet, when the traffic wasn't too silly, it hopped off to jump in the river and fetch up on a sandbar where its message would actually say something accurate about what was happening. Yes, this imagined sign has some difficulty standing upright, yet it manages to stay legible anyway. I figure the idea is no sillier than R2D2 supposedly travelling across the deserts of Tatooine on three little rolling wheels.

These two examples are, of course, not the ultimate exemplars of obstreperous signs, whether real or imagined. I think a better imaginary example, or rather pair of examples, comes from Through the Looking Glass and What Alice Found There, in the form of the twin signs pointing to the house of Tweedledum and Tweedledee.

* In case you were wondering, "obstreperous" is defined as "noisy and difficult to control" in my electronic OED. I am of course, taking liberties with this meaning by taking "noisy" in its extended sense of garishly coloured. This is nothing compared to another sign I have seen that may actually be an art installation, made up of four huge panels of lcd lights. These lights create a perpetual appearance of people walking endlessly and obsessively around the four panels, or at least always walking by and never getting anywhere. (Top)

Who Needs a Deck? (2015-01-04)

There's more than one way to have a deck in the late fall when one needs inventing. C. Osborne, january 2015.

Well, that was a longer hiatus than intended! Anyway, to return to other interesting and peculiar sights that I have picked up with my camera. This one here is a great favourite, because really, with a set-up like this, who needs a deck? Now, admittedly there are some open questions about how much this alternate means of arranging a nice spot to watch the sunset and have a drink in the evenings actually gets used. The roof around the base of the two chairs doesn't look much worn by feet, for example, though the picture was taken from the ground and the actual level of wear may not be visible from this angle. Placing two chairs on the roof over the front room is an unusual thing to do in the parts of canada I have lived in or visited so far, but that isn't all that made this gem of idiosyncrasy stand out.

The house itself is located in one of a number of highly unusual micro-neighbourhoods in calgary, alberta. This particular micro-neighbourhood is a narrow wedge stuck between several blocks of the most expensive houses you will ever see, a number of houses of remarkable age for that city, and a few fenced and boarded up houses on lots evidently being consolidated to build one more set of cheap and soulless condominiums. Within this narrow wedge is a tiny community of artists, university students, and young families. As a result, this little area is a pocket of wonderful offbeat in a city that seems caught between pressures to be nowhere in particular and somewhere human and interesting. Being fond of long walks around wherever I am living or visiting, I find it's always great fun to bump into interesting micro-neighbourhoods, and not just because they are such a pleasant change from yet one more outlet of whichever multinational coffee chain is currently expanding. Since they are chance discoveries though, there are examples that I have managed to lose, in the sense of not being able to figure out how to get to them again, because they tend to be in older areas with erratic street names and irregular road layouts.

In the event I can find a micro-neighbourhood again though, there is some etiquette to be followed so as not to become a sort of unpleasant tourist. In other words, I'm conscious of not wanting to be the person whose bad behaviour spoils the neighbourhood for the folks who actually live there. First is not to walk through the place on a regular basis, unless of course it happens to genuinely be along my commute. Second is not to be a jerk with my camera – I make myself stick to taking just one picture of an item which strikes me as a neat exemplar of what is cool about the area, specifically of something the folks in the neighbourhood set up to be noticed. As a result I often take no pictures in any of these wonderful little places at all, because unless there are a whole lot of artists and students around, those sorts of things simply aren't there. (Top)

Tearable Puns (2014-07-01)

These are tearable puns. No, really, these puns are tearable. Photo by C. Osborne, september 2014.

Having survived my thesis defence and the requisite revisions – anyone who tells you they didn't have to revise is either delusional or bending the truth just a little – and apparently having learned nothing, since I'm off to PhD school next, it is past time to catch up on posts and the like. The photo featured for this post comes from a lighter moment in which another grad student (I think – it may have been one of the department administrators, both of them known for their wit) stuck this on a professor's office door. The choice of professor was more than appropriate, though perhaps he didn't think so. He continues to deny any suggestion that he has punned at any time.

Now, this of course is a relatively low key sort of visual pun. The selection is nearly endless at icanhascheezburger, though on average they are extremely american, so your mileage may vary. Being among the english-as-a-first-language folks of the world, as a child I couldn't imagine puns in other languages, despite living in a country with a fair amount of french exposure. My parochialism was firmly shut down by a young man who may have been belgian rather than french, though to be fair my judgement there is based purely on the extraordinary pronunciation of his last name (as he said it himself) and so could easily be wrong. Anyway, he spent a good five minutes spinning out a long story in which the only answer his brother would give to a question was "moo." Overall, the story was going very well. Unfortunately, he embroidered the story by two questions too many, so that by the time he got to the punchline, "moule" (a mussel), it fell flat. So within five minutes I was disabused of an unconscious notion that puns were english-specific and learned that the key to comedy is knowing when to stop.

In any case, puns are far from a new invention. A highly unscientific survey (thanks Wikipedia!) suggests that they may have begun seriously proliferating with the development of writing systems. If the language being represented uses tones, then a whole world of potential puns exists by sound, quite apart from the visual puns a logographic system could provide. On the other hand, if the writing system doesn't represent vowels, still other possibilities arise. An infamous example comes straight from the bible, where skillful manipulation of vowels helped render a Goddess' name into something much ruder in connotation. It could be argued, however, that this last example is actually one of "satirical misspelling," spelling altered for rhetorical purposes. (Top)

 

Copyright © C. Osborne 2024
Last Modified: Monday, January 01, 2024 01:26:21