Title graphic of the Moonspeaker website.

Where some ideas are stranger than others...

FOUND SUBJECTS at the Moonspeaker

"Ethical Advertising" is an Oxymoron Under Capitalism (2024-01-08)

Example illustration used to help define the term 'oxymoron,' quoted from the *beyond the word 'english'* blog, 3 february 2013.

I admit that originally the title of this thoughtpiece stopped at the word "oxymoron," but on further reflection and some perusal of my trusty dictionary, it became clear to me that this was too simplistic. After all, the word's basic meaning refers to drawing attention, and that is necessary in any economy and under many conditions, not just economic ones. If we are going to run a ceremony, set out to teach others, gather a group of people to work on a big project, or trade, we need to somehow catch the attention of the other people we need to do these things with. The current form of advertising is not this utilitarian however. As we all know too well, it is actively designed to support mass government surveillance with a market cover and to try to manipulate people into buying things they would not otherwise have chosen to buy. Capitalism demands constant expansion on a finite planet, so now the marketing drive is shifting to experiences, because as David Harvey has observed, these are ephemeral. The experiences entail "service work" as well, which in turn has recently been vaunted up into the hyper-exploitation scheme of the so-called "gig economy." But then again, this has revealed another verb, "marketing," which my dictionary and thesaurus alike treat as a synonym to "advertising." This is probably true for earlier periods, before the professionalization of propaganda by the likes of Edward Bernays in the early twentieth century. It is notable that in contrasting the dictionary definitions of "advertise" versus "market," it is the latter that mentions "to promote" at the front of the main definition.

Still, if advertising is undertaken while striving not to gather information about the people who see the advertisements apart from trying to ensure it reaches somebody in sensible numbers, it seems rather benign. After that, whether a person chooses to buy the product depends on the product itself and what they think it will do for them. If the product in question is genuinely useful or attractive, then it doesn't need to be "sold" with a sprinkling of soft pornography or absurd claims of any other kind about what buying the product will say about the buyer, if anything. Of course, the trouble under current conditions is that not everyone or every business striving to sell a product has equal opportunity to present their attention-grabbing efforts to the world of people they hope to reach. We are past the point where merely screaming louder will work; it is now far more a matter of corporate cartels striving to block competing products out of the market altogether, first by preventing competitors from advertising successfully, then at all, and then sinking their products if they possibly can. But the corporations are warring with each other at even greater levels than before as they try to find ways to create pseudo-products from enclosing all forms of entertainment, digital technology, communication, and so on. They are actually in trouble, so they are resorting to the last port of call for such organizations, military pork barrel contracts, and possibly the most mendacious and vicious advertising modes devised in recent human history after extremist religions.

The end result for the moment is that we have a peculiar sort of standoff. There are companies and other sorts of organizations that need to let people know they exist and what they hope to sell or do in return for the money or labour they would like to get in exchange. Those genuinely seeking to advertise ethically are fighting their own leaning towards believing claims that the primary way to get the word out today depends upon "social media." It's easy to drop making ridiculous claims and paying for softcore pornography shoots. But the myth of "social media" has considerable traction, although it is slipping. Still, such businesses and organizations are fighting the constant messaging that if they don't use poisoned "social media" then they really won't have any way to reach potential customers or contributors in enough numbers to survive. It's a sort of "fomo for organizations" situation. There is an interesting counter argument to this idea though, quite apart from the evidence of flailing and survival issues under present conditions of late capitalism.

UPDATE 2024-04-19 - Yes, apple did have a very clever marketing team in its more precarious days. Due to the distinct appearance and overall design of apple products, even with the logo carefully hidden they were recognizable. From time to time they would show up in television and movie scenes, dressed up as part of what made the illusion of an office or whatever, and so generally not "to be noticed." But people did notice, and so began a new age of paid product placements. An early sign of apple's changing fortunes was the "anonymous" featuring of its products in transit and poster advertising for a western canadian telecom company from 2000 to 2005. By 2006, apple had switched to an overt, highly produced "Get a Mac" campaign of often funny commercials. The end of the campaign's run indicated another big change in direction for the company.

An important inspiration for this thoughtpiece is a pair of blog posts by the president of the purism computer company, which has been building a solid reputation for its cell phones and computers. Their phones and laptops are of particular note for the privacy-conscious worried about surreptitious listening and photographing, because they are designed and built with hardware switches to turn microphones and cameras off, as well as wifi and bluetooth. Obviously if purism flogged its wares in the manner of today's apple computer (not the clever and low budget marketing from its more precarious days) or microsoft, then that would be so inconsistent as to eviscerate the whole point of their operation. Purism markets itself as a "social purpose corporation" after all, and puts considerable emphasis on its commitment to free/libre software, privacy, and security. So it makes sense they would be worried about how to deal with the conundrum of advertising in this day and age in Is Ethical Advertising Possible? and Purism's Ethical Marketing Principles. They are being transparent, which is definitely to their credit. I am not sure if anyone has pointed out to the president that if you have one blog post asking about ethical advertising, it is pretty obvious you are setting up to say "Of course there is, we know how!" Perhaps as in the case of many on and offline publications, somebody else writes the headlines and titles.

Now, what is the counter argument then? Why, that people who are interested in the type of product a given business sells or in working for a particular organization don't just sit around waiting to have information shoved in front of their faces. They talk about what they have in mind with friends and family. They read relevant publications, including blogs that cover their interests either directly or obliquely via a reasonably sane group of regular contributors and commenters. Since in real life we don't actually live in a world of total warfare of all against all, people share tips and suggestions for possible sources and opportunities. Those are the places where the genuinely relevant and constructive policies win out regardless of overspend on advertising. There is a real lesson in the fact that people don't share tips about which product to buy or place to work or volunteer with based on crazed slogans and grotesque imagery. They share tips on how to filter out the organizations that use crazed slogans and grotesque imagery. Experience supports just such wise curation. The reasons late capitalist advertising focuses on the "youth market," with its lack of experience and its proneness to fads and to peer pressure into doing things many would otherwise choose not to do, are very clear. (Top)

Respect Readers, Take Yourself Seriously (2024-01-01)

Workboot image by pen_ash via pixabay, from pen_ash's free for use collection, july 2022.

With the four hundred fiftieth thoughtpiece rapidly approaching and other long term projects finally getting finished and posted, plus the recent twentieth birthday of The Moonspeaker, I have been thinking about writing and the principles I apply to it as a practice. Practically speaking there are just three of them, two of which are noted in the title of this thoughtpiece. The first one is simply that as a writer, you have to start somewhere. It has always fascinated me how often a preliminary beginning has been enough to get things going, with no need to even keep it. Indeed, it is usually worth just deleting that preliminary bit. A physical analogy is something like being a bit resistant to heading out to take a walk, even though you enjoy it, so you decide to do at least some walk-like things, like taking out the trash and vacuuming, and by the end of that you're actually ready to get outside and stretch your legs for an hour or so. Which all suggests that writing is like any exercise or art: a person needs to warm up a bit before really getting down to business. For myself at least, writing and walking do go together, in that a great deal of planning and structural revising gets done when I am off on a long walk primarily to exercise. When I head out for an afternoon of errand-running, there are too many distractions for me to do writing work as well, although it is true that neat ideas or solutions for annoying issues may pop to conscious mind out of wherever my subconscious has been working at them. My current walking shoes are not quite as worn as the workboots in the illustration, being relatively new, but they'll get there.

The second principle is to respect readers, and when it comes to this website, human site visitors whether they spend much time reading or not. The non-human visitors, from bots to crawlers to indexers, I basically ignore for the most part. Search engine optimization is a game that can still be played to great effect, algorithms aside. I am aware of a blogger who is incredibly proud of how his blog is in the top twenty five percent for reach, that it costs him no money, and part of that is he writes content for the search engines to pick up. The flip side of achieving this though, besides free hosting with all the risk that type of "free" entails, is his observation that this means he must write roughly eighty percent of the time on topics he is not personally interested in. To my ear, that does not sound even remotely free. This blogger does not write the awful marketing-text style of post, the kind of post where one paragraph is divided by sentence into pseudo-paragraphs to make it easy to insert advertisements. There is no reason for him to, since he is running the site using services that don't cost money, avoiding the perverse incentive to accept advertising. He is thoughtful, up front about his approach and how others could mimic it, and generally I have found even his technical articles avoid the unhappy tone that highly skilled computer administrators can slip into when they need to explain something that has become second nature to them. That is an important skill, and one it can be easy to miss is there at all. I appreciate learning about this approach, in order to think through consciously whether I would like to follow it in future. I really don't think I would, though.

Instead, part of my way of respecting the reader over time is seeking to write actual paragraphs, and writing in several different forms. Hence there are essays and shorter pieces for the non-fiction side of things, including my effort to do some webmaster resource provision in the form of the essay on writing an rss feed. On the fiction side there are works by others that I've annotated, plus my own stories, some short stories, even a couple of novels (one still in progress). There's a very old draft of a creative non-fiction book as well, which I hope to finally post the updates to this year after an astonishingly long delay. My own take on the old-fashioned links page is there in the form of the Previous Random Sites of the Week page, which collects up the websites featured in the "Random Site of the Week" corner on the home page. At some point soon I need to rework the page into some loose categories, but in the meantime rather than just throwing a linked title up for each one, I try to give two or three useful sentences of description. When I am really lucky, there is an especially good place to start to get into the site besides the random site's own main page. By this I don't mean any criticism of the main pages at all; they are generally excellent. It's just that a wonderful random site is often very rich, with many pages to explore and get to know. I usually find my way to them via a search result or recommendation for a specific page, so it strikes me as worth sharing that fruitful starting point along with the main website address. A person may then select whichever of the two they prefer. One thing I may finally do is add an optional version of the link that will open in a new window or tab. I have debated making that the default behaviour, but it is a behaviour I don't like myself when browsing websites if it is not something I have triggered. On the other hand, not everyone has memorized the key command that achieves the same result whether or not the link is set to behave that way, and the mouse gestures to do it may be difficult for physical reasons affecting the person or the pointing device. So it seems to me the most respectful approach will be adding that optional version of the link.

Now, the third principle I use is to take myself seriously as a writer, but don't worry, not too seriously. Otherwise the first principle of starting somewhere would be rendered ineffective. But that raises the question of what I mean by seriously. For me it means not chasing the latest media sensation to write about, instead sticking to writing about what genuinely interests me. Otherwise I would be spending precious time producing text that I couldn't care enough about to edit, let alone update and make additions to improve source citations and rectify cases where I feel a title is not accurate and needs correction. Journalists and bloggers who are able to write across a wild range of topics regardless of personal interest impress me with their ability to do so. I am not that disciplined however, and the odd time I've tried, usually for a specific work or academic assignment, the results definitely show I was struggling to do more than dial it in. Just dialling it in on purpose without rectifying the situation goes against the respecting the reader principle, too. It's interesting how just starting somewhere doesn't feel at all like dialling it in – for a technical simile, I'd say it is more akin to a modem, where the modem facilitates making the connection, then supports the data flow. The other part of taking being a writer seriously, but not too seriously, is making sure to keep reading diverse things and to keep learning more about writing. If I have no prompts to start somewhere from, then things get stuck pretty fast. As many pieces on The Moonspeaker show, varied reading has blessed me with many prompts! (Top)

Wait, How Does That Work? (2023-12-25)

Japanese ministry of interior poster concerning the 1918 flu, now in the public domain, via wikimedia commons.

A regular contributor to the economics blog naked capitalism, Lambert Strether, has observed since the advent of the COVID-19 pandemic that breathing is a social relation. Meanwhile, a major public health official in the province of british columbia insists that individual, private actions have nothing whatsoever to do with public health. This public official has said this repeatedly in one form or another for almost the entire pandemic so far, even though the claim is ridiculous on its face. For one thing, respiratory diseases impinge upon the social relation of breathing, and let's not try to fool ourselves that breathing isn't social. Of course it is. We don't all have individual, immiscible air bubbles, and we are hard pressed to avoid respiratory diseases for that very reason. We can reduce our vulnerability via some individual actions like taking steps to help our immune system along, and by using masks to avoid at minimum spreading our own germ particles if we can't stay home when we are sick. Better quality masks like N95s can provide protection for both ourselves and others when worn properly. The only way none of this would be true is if we never saw or interacted with anyone else at all. Otherwise we can't help but share air. The same principle applies to other things that we all use and may both take into our bodies and release material from our bodies into. Water is another obvious example, as is food we may prepare and share with others. In those cases nobody claims for a hot second that we can't keep drinking water clean and so we should give up on building and running sanitation plants, or that we should ignore all we have learned about safe food handling for both preparation and distribution. That's obviously ridiculous. So how did air and breathing somehow become an exception all of a sudden, when the effective handling of the original SARS outbreak was so successful? The poster illustrating this thoughtpiece is one I selected precisely because it depicts what we could reasonably assume would be four strangers in a train or bus. They may be strangers, but for the duration of their ride, they are in a social relationship defined by their shared air.

There is a trouble with social relationships though, from the perspective of people like that provincial health official. That trouble is that they cannot be overseen, counted, and interfered with by technocrats in an absolute way. Technocrats are highly unfortunate people whose jobs have gone to their heads, such that they have developed a messianic belief that they always know best and should never be questioned, and therefore what they say should go, no matter what. It doesn't matter what the evidence before their eyes shows; it doesn't matter if their orders whiplash about until the result is utterly incoherent and counterproductive. They know better, they said it, it must be true. The cognitive dissonance they must suffer the moment anything gets through their bubble of underlings must be excruciating. But if they can just get us all behaving as nearly as possible like completely atomized beings with no relationships apart from the ones the technocrats agree exist, then everything should turn out exactly as they say. However, real life couldn't care less about technocrats' beliefs about their superiority. In real life a top down order does not negate the social relationship entailed in any environment where we interact with others, whether those others be strangers we only loosely interact with by being in the same place, or close relatives we live with. I appreciate that the independence and unpredictability of unique human beings who don't slavishly obey what some technocrat demands must be a serious frustration for the technocrat. But they are also convenient scapegoats for when the real world refuses to go along with what the technocrat insists must be true.

There is a big difference between cooperation and coerced obedience. Cooperation is a product of trust, itself won by a demonstrated track record of competence and constructive response to criticism, together with a willingness to let go of what is not working guided by social priorities. That sort of cooperation can hold through extraordinary and tough times. Coerced obedience achieved by such actions as threatening people's livelihoods, threatening to take away their children if they have any, or making arbitrary rule changes at irregular intervals that keep people confused and off balance, is tenuous. Tenuous, and a great way to experience severe blowback and rebellion with no recourse. Among the blowback elements are things like irretrievable loss of public trust, or a period of completely uncontrolled pandemic conditions. There is no way in hell or on Earth such results can be defined as "working," even for the most selfish, short-sighted, and greedy of technocrats. The difficulty for them is that they spend a lot of effort avoiding the knowledge and evidence of how much coerced obedience is a failure and a recipe for serious problems that will hurt them. (Top)

Documentation Woes (2023-12-18)

Sample two page spread from the Voynich manuscript, a still controversial and not wholly understood book dating to circa 1639. Image courtesy of the internet archive. For more information on the manuscript, René Zandbergen's site is a great place to start.

With documentation being such a bugbear when working with computers, it is worth giving some more thought to what makes it such a difficult problem in general. There are few manuals and guidebooks as immune to interpretation as the Voynich manuscript, although in moments of greatest frustration it often feels that way. Until machine translation became more common, some of the most infamous instruction documents around were those provided with ikea furniture. Since ikea is a multinational corporation striving constantly to minimize its costs, one cost it has maintained a special eye on is translation, which it evidently seeks to avoid whenever it can. Hence the often enigmatic diagrams, which are at their worst when it comes to similar small hardware items used to bolt the pieces of whatever piece of furniture together. That could be overcome by stamping the packets the hardware comes in with the number of the part, but apparently that is not a solution that ikea is willing to countenance. Teaching by diagram is difficult, so when it is possible to do it using text, it seems like it should be easier. This being the real world, of course it isn't, but at least there are more options when applying text to the challenge. In the end there is a double-headed issue driving the people who try to make effective documentation and the people trying to use what they produce half crazy. One head of the issue is the generally low view taken of people who focus on writing documentation; the other head is the implicit knowledge of those who made whatever the documentation is supposed to be about. The implicit knowledge issue is in many ways the crux of things, because for those who have it, being prodded to express it is an annoyance at best, a highly resented chore at worst. For those who don't have it, striving to get it tends to be an exercise in getting insulted, talked down to, and treated as if they don't understand the basics of a graphical user interface, a mouse, or how to use a power button to turn any given machine on. Encounters between these two parties all too often end with everyone walking away pissed off and nobody's needs satisfied. I totally underestimated how nasty the implicit knowledge barrier was until recently when an obliquely related example made the nature of it clear to me in a specific way.

Due to my own educational background, I have spent concentrated time learning and applying advanced mathematics related to signal processing and energy transmission. Accordingly, I have spent plenty of time in the company of mathematics textbooks, including the standard calculus doorstopper that covers both single and multivariable calculus. One day, in hunting up a proof I hadn't used in some time and needed a refresher on, I noticed again how for class we had worked through only about a third of the relevant chapter, and this was true for a surprising number of chapters in the book. Still, it's a huge book, and even after taking formal classes that use it for three semesters, it makes sense that plenty of material would still be left over. But it stuck with me that officially so much of the book was not used. Even accounting for differences between undergraduate programs, there is still quite a lot that students would not touch in that book, even if they actually worked with quite a bit of that material later on. By the time "later on" came up, they would be meeting that material in the context of a different, more focussed textbook. Hence the utility of the doorstopper as a reference, at least for a while. That's a happy accident though, and it is more likely that much of that "other" material is the product of accretion based on what is in earlier textbooks that authors wouldn't dream of leaving out, plus whatever other instructors requested more problem sets and a bit more explanation of. But in the end, I couldn't quite shake my sense that this didn't cut it as an explanation, even though it is fine as far as it goes. It seemed to me that this did not adequately explain the numerous expansion formulae and similar items printed on the end papers. Sure, I use Fourier transforms and expansions now, but their presence in this doorstopper textbook seemed superfluous.

Don't get me wrong, I realized then, as I do now, that these formulae were never superfluous. It's just that until I read a more detailed account of Charles Babbage striving to calculate tables of numbers mechanically, the penny was stuck in mid-air, so to speak. Ordinarily as an undergraduate when I resorted to the end papers of the textbook, it was to check my memory of the composition rules for derivatives, or the trigonometric identities. These had to be thoroughly memorized by the final examination, of course. At no time were we tasked with calculating out something like the first ten values in a ballistics table for given velocity and mass over a range of firing angles. If we had, then we likely would have paid some of those expansion formulae more attention. A different approach to learning logarithms might also have led us to them. So there is a great example of implicit knowledge at work. The authors of the textbook I have in mind, both senior scholars now retired, grew up dealing with such calculations without the benefit of scientific calculators or computers. They were indeed among those who learned many of these techniques and how to use a slide rule as part of the job. They had no reason to think that instructors might have no time to give a quick lesson in the anatomy of the textbook or a brief overview of the history of mathematics as hinted at in the arrangement and contents of it.
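To make concrete why those endpaper expansions were once working tools rather than decoration, here is the sort of hand calculation they made possible; the three-term example is my own illustration, not drawn from the textbook in question:

    \sin x \;\approx\; x - \frac{x^3}{3!} + \frac{x^5}{5!}

For x = 0.5 radians, those three terms already give 0.5 - 0.0208333 + 0.0002604 = 0.4794271, against a true value of 0.4794255..., and producing a whole table of such values by hand is exactly the sort of drudgery Babbage hoped to mechanize.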

Many students are able to just accept what they are told in a math class and focus on regurgitating what they learned in the right places when taking the eventual exam. For students who are going to keep studying mathematics for whatever reason though, this sort of blind faith calculating is insufficient. Usually the gaps get filled in as they continue their classes, barring such bits of historical trivia like the one I just detailed. Without properly understanding the mathematical concepts they need to use, students working with more complex problems soon land in trouble because they can't generalize. Rules work by rote, but for trickier stuff we need to be sure to understand the underlying principles, and so we are subjected to the requirement to learn and work through proofs. The wild part is that once fully internalized, a principle or otherwise overarching concept doesn't feel so different from a memorized rule. In both cases, a person can apply them properly without spending much thought on it. To get from "not at all sure what is happening now or will happen if I try x" to "understand what is happening and can confidently test what may happen if I try x" requires a combination of explanation, worked examples, and exercises. This is true of many subjects of course, not just mathematics.

For computers and operating systems, I have found that the documentation challenge is properly identifying what it is actually meant to support. My doorstopper calculus textbook is intended to support learning fundamentals necessary for going on to use and apply differential equations, for example. Most introductions to GNU/linux seem fixated on one of two sorts of people. One sort is a starting computer science student who will be guided by an instructor or decide on their own to try to set up a GNU/linux system from scratch or using a command line only installer. The other sort is a person presumed to be more or less completely at sea if they are faced with a computer that isn't a work station already set up for them at the office. There is no single manual that can meet both of their needs. Nor are there manuals out there just yet that properly handle such matters as differences between, say, any two of GNU/linux, freeBSD, windows, and macosx. People who have long experience with one system and comfort with customizing it have quite different needs from those with perhaps similar experience in terms of time but who have little experience with customization. But maybe in that case the way computer documentation has developed has got things the wrong way around.

Historically, the first computers most people might have interacted with on a regular basis were mainframes. They would never have worked directly with these machine behemoths, instead they would have used a terminal of some kind, what now is colloquially referred to as a "dumb" terminal. Not only were they not expected to learn how the underlying operating system worked, they were actively discouraged and prevented from doing so. Accordingly, the documentation avoided presenting any material that would encourage or support creativity or exploration. Correcting data entry errors or debugging programs took a fearsome amount of time, and the technicians keeping the mainframe running understandably resented having to fix issues accidentally triggered by the people who were supposed to be working quietly at the terminals. This old style of documentation is still the kind most habitually produced. To better resist the current pressure to make everyone's home computer into a dumb terminal and help people better help themselves before resorting to calling the helpdesk, we need a style of computer documentation that is more like the calculus textbook. We don't need that textbook to spend two to seven chapters telling us how to open it, look up things, and use our pencils. We need it to do just what it does, jump into the thick of things with an introduction to the uses of calculus and a first look at its fundamental theorem. Some manuals come ever so close to this approach – except they still have the two to seven chapters of stuff we already had to know to turn the computer on and log in, an accretion from the mainframe days. (Top)

Including Surprises, Part Two (2023-12-11)

Snapshot of the ur-code used to test C and C-variant compilers where the coder's first language is english.

The saga of working out a portable means to provide similar file processing capabilities for html files as provided by bbedit continues. *nix systems generally have a wide range of built in utilities that may be run from the command line in a terminal window, so these are the primary candidates to take over the job. I mentioned a couple of these already, the C preprocessor, and make, besides the scripting language perl. Of course, that is hardly scratching the surface of what is available, as anyone who has watched an application installer running in a terminal window has seen. The title bar of the terminal emulator window flashes with the names of the different utilities running at different parts of the installation process. There are a fascinating number of unusual looking names that are actually banal in origin, like "clang" which is just short for "c language family [compiler] front end." The standard selection available from the shell is easy to look up, for instance the commands invoking the utilities in the macosx version of bash are neatly summarized at ss64.com. Some of what I have found myself using is available from outside of the shell, including my cross-platform scripting favourite perl. Still others are not necessarily available by default on a macosx system unless the xcode command line tools are installed, but on other *nix systems they are usually all present, for example make, the C preprocessor, gcc, and so on.

So far the way things are coming together I have found myself working up a combination of perl, awk, sed, m4, and make with the standard file management tools available from bash. Inevitably early versions are less efficient when working on scripts of this kind, although in this case my strong suspicion is that due to not being very experienced yet with several of these utilities, I may have an approach more reminiscent of a Heath Robinson or Rube Goldberg machine than anything else. Nevertheless, the prototyping is instructive and working well enough to show that things are on the right track. The most difficult element so far has turned out to be automating collection of the names of the files to be processed and generating the names for the new files. For more common computer programming and scripting, this is not so difficult. The names of the main program or script files are often small in number, and since the programmer starts work aware that in due time they will use a makefile, they will often have a template ready to fill out. Certainly if I had started out with using make as a preprocessor helper in mind, then I could have updated the file lists over time by hand. That sounds like a way to miss things though, and I likely would have lost patience and worked out a way to automate the task. In other words, at the moment I am having fun and learning in terms of code options and approaches, and will ultimately settle down and do what most people do in this situation: write some perl script or adapt some free/libre perl scripts that are already available.
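For what it is worth, the file name wrangling at the heart of the problem can be sketched in a few lines of shell; the directory name and the .src.html extension below are stand-ins of my own invention, not a description of the actual setup:

    #!/bin/sh
    # Collect every source page and derive the name of the file to generate from it.
    find ./pages -name '*.src.html' | while read -r src; do
        out="${src%.src.html}.html"   # pages/foo.src.html becomes pages/foo.html
        m4 "$src" > "$out"            # any of the preprocessing tools could stand in for m4 here
    done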

One of the trickier elements of the task at hand is that I am working between a couple of systems, and one of them has older versions of tools that cannot do things like edit a file "in place" without generating a backup file of the original. I should add, a visible backup (not a dot file). This is counter to the usual approach to updating html files, where all the edits happen in the pre-production version, and once finalized that is pushed out to production. I have backups covered by other means. Generating backups is of course not a bad thing at all, and for a multi-step process having a handful of snapshots is a great setup if something goes direly wrong, or simply for debugging. The end result though is that on the older system I need to write some code to clean up after the preprocessing to remove those extra files. The newer system will generate some of the same files, but not all of them, so that difference needs to be accounted for. People who work on installers know all about this sort of challenge, and deal with far more demanding versions of it than this html example. Overall though, this has been the least of the tricky aspects in the coding process. Finding decent documentation and properly working out the capabilities of each of the utilities is the most challenging part, on top of managing such quirks as comments and character escaping differing between them. I am still not getting into the horrors of quoting and double quoting, which I agree with wiser heads than mine should not be nested if it can be avoided, because it renders the code difficult to read and often unpredictable in behaviour.
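A concrete illustration of the in-place editing difference, with made-up file names: GNU sed takes an optional suffix attached to -i, while the older BSD-flavoured sed on macosx insists on a suffix argument, so a portable script either supplies one everywhere or cleans up afterwards.

    # GNU sed: in-place edit, no backup file generated
    sed -i 's/OLD/NEW/g' page.html
    # BSD/macosx sed: the suffix argument is mandatory; an empty string means no backup,
    # while a non-empty one names the backup file (page.html.bak here)
    sed -i '' 's/OLD/NEW/g' page.html
    sed -i.bak 's/OLD/NEW/g' page.html
    # cleanup pass for the system where backups had to be generated
    find . -name '*.bak' -delete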

The new to me specific tools I have been learning and working with are: m4, a macro processor; make, a program recompilation manager; sed, a text stream editor; and awk, a text processing language. These all have manuals and other introductory materials written and provided as part of the GNU Project, and except for the first two I know they have associated books published by familiar firms like O'Reilly and peachpit press. The GNU materials are a wonderful resource, but suffer from a difficult problem common to programming and scripting books, manuals, tutorials, and web pages. They regularly start with material so basic as to often not be worth actually running yourself, then skip completely over intermediate examples to rapidly escalate into examples and techniques that are undeniably relevant while also being impossible to understand as yet. I am a bit surprised how often the writers forget to briefly note character escaping and commenting practices, because these are not standardized. They often seem standardized because so many utilities and languages have adopted C-style comments and PCRE-style character escaping (which may be C-style, actually). A few have not, maybe because of their archaic origins, although I suspect more often because of the processing level they work at and the data they have to pass to other tools. Writing up teaching materials for scripting and programming is a real challenge, so much so that Kernighan and Ritchie's The C Programming Language is still highly revered as a classic of exposition. Unfortunately a great many people seem to have thrown up their hands and left learners to struggle along with a search engine and close encounters with fora like stackoverflow or askubuntu. It is no easy task to answer queries in a text-only format in the first place, and inevitably there are times when a question triggers a flurry of argument about programming philosophy (seriously, I am not making an arch reference to flame wars). Interesting, but not necessarily an answer to the question in and of itself.
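To give one small illustration of the sort of note I wish more of these materials led with, here are the comment conventions side by side, runnable from bash; input.txt is just a placeholder file name:

    awk 'BEGIN { print "awk scripts use # for comments" }'   # this trailing comment belongs to the shell
    sed -n '1p' input.txt       # sed scripts also use #, but what needs escaping depends on
                                # the delimiter chosen for the s/// command
    echo 'dnl an m4 comment, discarded along with its newline' | m4
    echo '# an m4 "comment" that is passed through to the output untouched' | m4
    # in a makefile, # likewise starts a comment, but a $ meant for the shell has to be doubled as $$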

All that said, I also appreciate from personal experience with other computer programming and scripting languages that sometimes getting anything done is sheer torture until the shape of the language becomes clearer. For instance, regular expressions are utterly maddening for use on serious tasks until the learner has figured out variable substitution properly. It isn't difficult to use them once the syntax is worked out in the sense of representing it properly in typed code. This is actually where the drop begins in many manuals between beginner coverage and the rapid escalation towards expert coverage. At times the authors have so deeply internalized how a thing like variable definition and use for substitution works in older text processing languages like sed and awk that they forget to provide a couple of examples to illustrate how it is done. Older languages often do things a bit differently than their descendants and cousins, and may also have a non-obvious necessary tweak or two for command line versus script usage. Looping is another tricky and highly instructive area that can be quite difficult to work out adequately in the early stages of learning a language. Once internalized, these are the very things that give a feeling for the language so that it is easier to compose in it. Perhaps official computer science majors develop an ability to go straight from reading a description to using the language without any examples to start from, although I find that hard to believe. (Top)
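The kind of two-line example I have in mind for the variable substitution case, with the variable and file name invented for the purpose:

    name="The Moonspeaker"
    awk -v site="$name" '{ print site ": " $0 }' input.txt   # awk takes assignments through -v
    sed "s/SITE_NAME/$name/g" input.txt                      # sed relies on the shell instead: the double
                                                             # quotes let $name expand before sed sees the script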

Forced Teaming (2023-12-04)

Stock image from the television series *Walking Dead*, posted by ahmadreza at pixabay, september 2016. 'But it's all good, we're all in this together!'

The moment you hear or read someone declaiming that "We're all in this together!" be advised that you are encountering one or more scoundrels and the beginning of a serious effort at forced teaming. That is, an attempt to pressure the hearer or reader into compliance by implying a connection between them and the speaker or writer that isn't there. The point is forcing the hearer or reader into compliance against their own interests, and especially with the intention of preventing them from gathering appropriate information to make a properly informed decision. To my knowledge, the term "forced teaming" was coined by Gavin de Becker in his book The Gift of Fear, and it is a very nasty technique often predicated on taking advantage of a potential victim's desire not to offend or look bad in front of others. De Becker refers to forced teaming as a way to create premature trust between the person doing the forcing and their mark. Probably many people could read this far and click away in annoyance, because one of the most common contexts for the use of the obnoxious tag line "we're all in this together" is the COVID-19 pandemic. I agree, the tag line sounds plausible in that context, but that is the nature of forced teaming tactics, the use of a plausible-seeming hook to try to catch the unwary. It was in fact a terrible tag line to keep trying out regarding this pandemic or any other mass casualty event. Genuinely effective and helpful recommendations and tactics don't need forced teaming to make them happen. Forced teaming may work for a while in the middle of a huge emergency like a pandemic, but the sad fact is that it backfires horribly if deployed for long periods. The attackers and abusers De Becker discussed in his book were using forced teaming as a softening tactic to enable them to win access to their victims. The victims know they're being taken for a ride, which makes it all the worse. How anybody ever thought such a tactic starting from such negative foundations was a good idea for use in a public health emergency, I don't know.

Well, actually, to be honest, I think I do know. A great many people who are in positions of power these days are big on forced teaming, because they are sure of a few potent and dangerous things. They are sure, first of all, that they are right and know better than the majority of the world outside of their own hallowed circles. They are so sure of this, that they are sure that they are justified in preventing other people from asking questions and refusing to respond constructively to critique. They are sure that they have complete access to the presumed absolute trust of "the masses" in "the authorities" of which they are sure they are part. They are quite sure that whatever they say goes, and will work. Even if events show they are wrong, they are also sure nobody will notice when they change what they say and claim they never said what they did before. It's a gross mindset of contempt for anyone deemed "other," and a deadly characteristic trained into far too many people. At one time most people with these ideas would have become missionaries, and indeed missionaries remain some of the most infamous users of forced teaming on the planet.

The sad fact is that forced teaming really is a prelude to disaster. On the scale of one on one interactions, maybe this seems easy to ignore as long as we aren't on the receiving end of the disaster. So long as no one is attacking or stalking us or trying to drag us into a cult or pyramid scheme of some kind, surely it's hyperbole to suggest it is possible to achieve at scale, we might think. That would certainly be a comforting belief. The trouble is, forced teaming is possible at larger scale, with the same awful ancillaries of forced conformity even when it results in acute danger or even death. It can be a highly dangerous invitation to group think and the paradox of quietism in the face of emergency joined with scapegoating of anyone who resists. People who are behaving ethically and know that they are advocating for actions that are reasonable don't need forced teaming because they are able to not just respond to constructive questioning and critique, they invite it. They don't need to wheedle and bully, they can lead by example and accept that they don't have all the answers. Such a frank attitude and approach demonstrates trustworthiness, so forced teaming is unnecessary.

So I stand by my starting statement, which derives from many recent observations, and not just of the debacle of how "the authorities" are pretending to manage the COVID-19 pandemic. In fact, I first began to notice the unsavoury whiff around the "we're all in this together" tagline back in the era of the 2008 financial crash, when "everyone" was going to have to tighten their belts because to "save the economy" many people would "have" to be left to lose their jobs and homes so that banks and other corporations could be bailed out with public money. (Top)

Saint-Simon's Here-topia (2023-11-27)

Map of lyon from the 1898 new illustrated larousse universal encyclopedic dictionary, courtesy of the robarts library of the university of toronto, via wikimedia commons.

Every now and again a search engine or site will pop up a sort of lateral search result or connection reminiscent of the early days of the web, before the porn companies made a successful beachhead for corporate enclosure and exploitation. In this case, I happened upon a book apparently well known in its day that is not necessarily so well known today, at least in english translation: political cartoonist Rius' 1966 book Marx para principiantes, in english Marx for Beginners. There is a brief biography of Rius posted at the People's World website. Not having seen the book before, it was intriguing to see for the first time work by an artist whose influence is immediately recognizable in many graphic introductory books. I thought immediately of Larry Gonick, for example. In the case of Marx for Beginners though, what really caught my eye was his summary of the supposedly "utopian socialist" ideas of Claude Saint-Simon, on page 87 of the 1976 pantheon books edition. Here is a quote of the summary, which was indeed in point form:

  • ...planned economy under the direction of a central bank
  • ...end the rule of the leisure class (nobles, clergy and military)
  • ...organize a new society directed by industrialists to promote the welfare of the larger and poorer classes
  • ...found a new religion which recognizes work as man's only merit

This is certainly not socialist, since there is no sign whatsoever of any acknowledgement that there are workers whose drive for freedom and a fair and just living is their own, as is the ability to achieve those things. It's not even a "utopia," it seems, because it sounds suspiciously like the current mean-spirited age of fundamentalist capitalism characterizing the "west" right now. Admittedly the point about the military not ruling has not made it to the present after all, but the other bits are all there. Oh, except of course that contrary to Saint-Simon's naïve expectation, the technocrats and industrialists aren't all that interested in "promoting the welfare of the larger and poorer classes." Not knowing much about Saint-Simon as a person or philosopher more broadly, I am going to give him a low level benefit of the doubt to the effect that, being from a decayed "noble" family, he was more oblivious to the "larger and poorer classes" than actively inclined to view them as analogous to sheep or cattle.

But then I thought, maybe there is more than a little something lost in the translation here. After all, it may be that things have passed through Saint-Simon's french, to Marx's german, eventually to a spanish translation, Rius' reading, and then the translation of Rius. Then there is the whole effort to condense a great deal of material into a relatively small book, and the fact that a great deal of material generally in the air in the late 1960s is now, in the present, for the most part buried in books. Yet I have also read David Harvey and a part of his reflections on nineteenth century paris and the escapades of the Pereire brothers in the context of his lectures on Marx's three volumes of Das Kapital. Saint-Simon wrote a great deal and his books and articles were collected and republished fairly frequently through to the early twentieth century, so it seemed reasonable to try to hunt up an introduction to his ideas. A query at the internet archive yields an approachable collection of scans of some of the later editions. As it turns out – admittedly not really to my surprise because as Lily Tomlin has said, it's impossible to be cynical enough to keep up – Rius did an excellent job drawing together that summary.

Not that I am the first to notice the issues with Saint-Simon's ideas even on a cursory glance here, because Karl Marx and Friedrich Engels did. More recent scholars have considered the question of whether Saint-Simon's ideas have ended up in de facto use today, and indeed I found a paper by Riccardo Soliani, who observes up front that "...Saint-Simon never claims for the real emancipation of the working class: in his theory, economic and political hierarchy would be dominated by the top management, as a big private enterprise. His utopian society, which would become the real society one hundred years later, is managerial capitalism." This is from his 2009 article, "Claude Henri de Saint-Simon: Hierarchical Socialism?" in the journal history of economic ideas. In Saint-Simon's view then, "advancement" of the larger and poorer classes must be beyond their small ken, so the supposedly superior people would take care of that. As we see too clearly today, depending on the ethics of technocrats and industrialists is little more than a fool's game, but in the end Saint-Simon could not let go of his faith in an "elect" of some kind imposing solutions from the top down.

Saint-Simon's ideas were notably popular among those whom he considered the "savants," including many nouveau riche industrialists and stock market speculators. Among those playing money games were indeed the afore-mentioned Pereire brothers, and still others tried to found a religion along the lines that Saint-Simon envisioned. That only went so well, although apparently there were enough "Saint-Simonians" to make the catholic church nervous for a time, especially the ones who liked to emphasize "early Saint-Simon" over "late Saint-Simon." In his younger days Saint-Simon briefly flirted with the idea of supporting women's suffrage, which led to him getting far too much credit for being some sort of proto-feminist, which he quite evidently was not. (Top)

Including Surprises, Part One (2023-11-20)

Snapshot of the ur-code used to test C and C-variant compilers where the coder's first language is english.

Hardware updates continue where The Moonspeaker interfaces with firmspace, and among other things that means setting up file management and editing software on the new machines. Since The Moonspeaker is not a database with a web application pasted on top of it, this is a relatively easy task that mostly entails copying over repositories and setting up mirroring between different servers. Having done core editing primarily with bbedit for so many years, I have also sought to find and learn to do the same jobs it covers on alternate software. Partly this is a matter of practicality, because bbedit is a macosx-only program. Checking its size after install indirectly reveals that the developers' decision to write it in C has paid off in a small footprint. The long development within the macosx world has enabled them to take full advantage of the specific quirks of that operating system, including tight integration with services that they then did not have to recreate and bind into their codebase. Alas, this reinforces that the likelihood of barebones software ever producing a version that runs under linux is low to zero. Between code optimization and their successful maintenance of a solid user base, they have no practical reason to. Their software in general – not just bbedit – is excellent, and they have no truck with subscription models or constantly striving to collect personal data. Their free version of bbedit does not nag for an upgrade to the paid version, and if you do upgrade, you don't receive a subsequent stream of marketing emails from barebones software for anything else. If you need software support beyond what the regularly updated user manual provides, they follow up quickly and professionally. I can't think of another proprietary software company that behaves in this manner, and it is all these details together that make bbedit the one non-free/libre software program I will miss when eventually my mac hardware can't run a macosx anymore. And one more detail besides, which until recently I had no idea was the software rabbit hole it is: html preprocessing.

UPDATE 2024-03-02 - Yes, since this thoughtpiece was originally written I have finally happened upon the well-developed perl preprocessor gtml. Unlike the others, gtml explicitly integrates makefiles and appears to use a similar approach to BBEdit with html comment-based placeholders. (Gihan Perera's original copyright year for gtml is 1996, while BBEdit went live in 1992.) Similar, but not the same, as gtml is designed to start from source files, and then depending on processing variables set at the start, generating new files. BBEdit can be set up to behave in that way if desired (via some modest scripting, including if you want, calling gtml itself), but it starts from an approach in which all the html files are updated in place, then the new versions synched to the server.

Not that I originally could have referred to the use of includes in html pages in this way, because it had seemed just obvious to me that a serious program used for coding would support them. By "includes" I mean what is perhaps most familiar to anyone who has worked with at least the rudiments of the C programming language, even if only to run a "hello world" mini-program. Since I had general programming experience before beginning to seriously code websites, I was familiar with checking that my program headers had the right collection of references so that the compiler would duly copy in the code for things like standard input/output functions and the like. On moving into website work, I learned about server-side includes, that is, specially formatted placeholders in a web page that are filled in automatically once the page is published to the production server. That's how most professional sites originally managed imposing a common header and footer on all pages, as well as adding a publicly visible "last updated" stamp and other such things. It didn't take long before webmasters began seeking to do that sort of processing locally, publishing the completed files to the server and thereby reserving the server's processing time for serving the pages and carrying out any additional required local tasks like site searches and handling web forms or email. Bbedit handles this via simple includes that insert pre-defined chunks of code, supports predefined and automatically updated placeholders, and can even run external scripts to perform certain tasks in place. I have now learned that bbedit has some of the absolute best support for these features of any code editor out there. So much so that there is a company that has developed a suite of free to download and use tools to build and maintain websites with, all based on fully leveraging bbedit's include capabilities.
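For readers who have not met them, an apache-style server-side include is just a specially formatted html comment, and the little sed run after it mimics what the server, or a local preprocessing pass, does with it. The file names here are examples of my own, not anything from an actual setup:

    # page.shtml contains a line reading:
    #   <!--#include virtual="header.html" -->
    # replace that line with the contents of header.html, writing the result to page.html
    sed '/<!--#include virtual="header.html" -->/{
        r header.html
        d
    }' page.shtml > page.html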

On the free/libre software front, so far it does not seem that the front runners for doing bbedit's duties on the new GNU/linux-running hardware, bluefish and kate, have the same level of include support. At first glance this seems very strange, especially for kate, which is part of the kde project and so has a larger pool of contributors. Then again, maybe it isn't so strange, since it is possible to write html preprocessing scripts in perl (at minimum), and that can be done quite quickly. In fact, I have learned that the C preprocessor and make can be used to carry out html preprocessing as well (here is a reproduction of Jukka Korpela's web page explaining the basics of how). I have already found several people denouncing these applications of the C preprocessor as "abuses," so evidently the technique is already in use. This is consistent with the general hacker principle of not rewriting existing code and maximizing use of established and well-tested tools unless there is an overriding reason to do otherwise. Based on my tests so far, it is certainly feasible to add html preprocessing capability to bluefish by taking advantage of its capacity for adding user-defined commands and accessing output parsers. It also looks like this should be possible to arrange in kate via judicious setup of a bespoke build profile. So not the end of the world that they don't literally already have this to begin with, but bbedit's support for includes "out of the box" is a real boon.
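For comparison, the generate-from-source model, which is the one an external script wired into bluefish's user-defined commands or a kate build profile would most naturally follow, might look roughly like the sketch below. The "#include" directive and the ".src.html" naming convention are assumptions made up for this example, loosely in the spirit of gtml and the C preprocessor approach rather than the syntax of any particular tool.

```python
#!/usr/bin/env python3
"""Sketch of a generate-from-source html preprocessor: read page.src.html,
expand include directives, and write page.html alongside it."""
import re
import sys
from pathlib import Path

# Hypothetical directive for illustration; gtml, htmlpp, and cpp each use their own.
INCLUDE = re.compile(r'^#include\s+"(?P<name>[^"]+)"\s*$', re.MULTILINE)

def generate(source: Path) -> Path:
    """Expand directives in one source file and write the finished html page."""
    if not source.name.endswith(".src.html"):
        raise ValueError(f"expected a .src.html source file, got {source}")
    text = source.read_text(encoding="utf-8")
    expanded = INCLUDE.sub(
        lambda match: (source.parent / match.group("name")).read_text(encoding="utf-8"),
        text,
    )
    output = source.with_name(source.name[: -len(".src.html")] + ".html")
    output.write_text(expanded, encoding="utf-8")
    return output

if __name__ == "__main__":
    # An editor's external command or build profile can simply pass the current file path.
    for name in sys.argv[1:]:
        print("wrote", generate(Path(name)))
```

The same kind of script could just as easily be driven from a makefile, which is essentially what gtml and the C preprocessor technique formalize.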

I should probably explain a bit about how these could all be relatively recent discoveries. Well, on one hand, it is because The Moonspeaker started out very small, and did not originally have a totally consistent underlying template for its pages. At one time it had barely thirty pages all piled into one folder and could fit on the now vanished format of the 3.5" floppy disk. It took some time before I finally had enough pages to realize that it was ridiculous to try to continue without using templates and includes. Since The Moonspeaker is a static site in the sense that the pages are not generated at access time, tipping the existing pages into one of the by then readily available blogging software programs made no sense at all. So I got to work and spent some quality time first learning regular expressions, which can be saved and reused to do a great deal of html preprocessing and updating in themselves. Furthermore, bbedit's regular expression support is based on the pcre library, and that means in turn that I can easily translate those saved patterns into perl scripts instead. Then I got thoroughly acquainted with how to script bbedit, which finally led me to learn how its include system worked. After a couple of weekends' worth of work, not only was the original challenge of keeping templated code consistent and updating the parts that change regularly but only at intervals solved, I had also learned how to take full advantage of a common feature among coding editors: code snippets.
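As a rough illustration of that kind of saved-pattern batch editing, the sketch below applies a single regular expression across every page of a static site to refresh a visible "last updated" stamp. The span markup is a made-up example of a consistently templated chunk; the point is only that one saved pattern, whether run through bbedit's pcre engine, a perl one-liner, or a small script like this, can keep such chunks consistent site-wide.

```python
#!/usr/bin/env python3
"""Sketch of applying one saved regular expression across a whole static site."""
import re
from datetime import date
from pathlib import Path

# Hypothetical markup for the stamp; any consistently templated chunk works the same way.
STAMP = re.compile(r'(<span class="updated">)[^<]*(</span>)')

def refresh_stamps(site_root: str = ".") -> int:
    """Rewrite the date inside every stamp found; return how many pages changed."""
    today = date.today().isoformat()
    changed = 0
    for page in Path(site_root).rglob("*.html"):
        text = page.read_text(encoding="utf-8")
        new_text, hits = STAMP.subn(rf"\g<1>{today}\g<2>", text)
        if hits and new_text != text:
            page.write_text(new_text, encoding="utf-8")
            changed += 1
    return changed

if __name__ == "__main__":
    print(f"updated {refresh_stamps()} pages")
```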

Many of the people who wrote about and shared their solutions to the challenge of html includes started from quite a different use case. What they had was a whole collection of documents that they had been tasked with transforming into websites. They needed to somehow quickly generate tables of contents, site maps, links between pages in due order, and impose a common look and feel. The big era for this type of work looks to be from approximately 1995 to 2005, and a great number of "html preprocessors" were produced. A selection of links to the best known examples can still be viewed at the web design group (WDG) site. It's actually remarkable that the WDG site is still online, as it appears to be otherwise a fossil. In any case, a perusal of the surviving links as well as those viewable in the wayback machine soon reveals that many of the html preprocessors were written in perl. There are two still available that I may experiment with later, arthro software's HTML PLAIN and Pieter Hintjens' implementation of htmlpp (the link to his article providing an overview of it is broken, but it can be read on the wayback machine). In both cases, their approach tends to use codes that can't be left in place in the final version of the html file. By contrast, in the bbedit approach, includes are wrapped in html comments. The comments stay put, and the updated or inserted text goes between the comments. So there is no need in that case to generate a completely separate copy of the file with at minimum a new extension or updated name. When writing computer programs we can't get away with this, but with html (and css) we can.

There is at least one more technique for generating web pages out of pre-existing documentation that was widely used in that turn of the twenty-first century decade, and that is the LaTeX-to-HTML generator. The web pages produced by that means have a distinctive look and are most commonly found within the websites of people engaged in scientific research or computer programming in academic contexts. There are not many of these still in existence on the web these days, the most prominent example I am aware of being Eric Raymond's copy of the hacker's dictionary. The results of a LaTeX-to-HTML run can be frumiously ugly, but probably can be much improved by tweaking the flags and settings before the run. That said, the pages are structured so regularly that they could also be reprocessed with a judicious application of grep via a code editor or some perl scripting, and probably that is what was done to many of them. (Top)

Consuming Information (2023-11-13)

Retinted and despeckled image now in the public domain from Thomas Shaw's 1914 book on why and how to keep and exploit sheep, via wikimedia commons and the internet archive.

A phrase that has been coming up frequently in articles dealing with the web, computers, and "social media" lately is "consuming information." The idea behind this is clearly not that we take in "information" from whatever we access by means of a digital device, thereby eating it. This is a nasty and clever conflation of the capitalist fundamentalist definition of "consume," by which they always mean buying something, and an act we need to do every day, eating and drinking. It is nasty on two levels. The obvious one is the attempt to make us feel like buying something every day and/or somehow being paid for something every day is necessary for our survival. The less obvious one is the subtle giveaway of the most hubristic members of the faith, the dream of keeping most people in thrall to them by standing between them and access to the necessities of life. This would merely seem like paranoid rambling if it weren't for the practical evidence of this very ambition that we can see just by going through the history of the so-called bretton woods institutions and their contributions to the longterm anglo-american imperial policy of imposing cash crops to the exclusion of subsistence crops so that the countries that follow their dictates must import food from "western" countries. In any case, those are the particular connotations that go with "consuming information" at first read. On second read with an eye to the context of "digital access" it isn't difficult to figure out that the parallel ambition is to achieve an arcade model for using any digital device: for any use at all, the person must pay first. The payment must be either the equivalent of a few 1980s era quarters or some personal information, the latter demand backed up by a claim that if the information provided isn't true you can be charged for lying. At the moment such demands are completely unenforceable, except insofar as private companies insist they can cut your access for any reason.

I don't deny that the ways in which digital media have been developed and rolled out have presented probably the greatest temptation to try to take total control of people's ability to share information the societies we currently live in have faced. The execrable Bill Gates has never given up the dream of selling everyone a dumb terminal and then charging everyone a subscription to use his, in my opinion, deliberately bad software. Deliberately bad quite apart from the issue of deliberate back doors and gaping security holes that are the effect of treating staff poorly and trying to make a mess of spaghetti code work long after it should have been abandoned. Even the notoriously persnickety Steve Jobs recognized that apple wouldn't be able to keep the original system 1.0-9.0 lineage going. The thing about people who strive after such socially destructive goals as total information control and surveillance, which go inexorably together, is that they are happy to keep working for that goal over years and years, especially if they were already wealthy when they started out. The majority of the now prominent and influential so-called "tech pioneers" were and are from families that were already wealthy. There's a reason they are so often associated with expensive private schools and universities like stanford and harvard in the u.s. Whether or not we all agree about the character and motivations of particular corporate executives in light of their actions and the longterm policies and trajectories of their companies, we ought to be checking how they speak and write about the general population with a gimlet eye.

As a matter of practical day to day life, I generally avoid working with or purchasing from businesses or people who demonstrate contempt for me. Bearing in mind, of course, that not everyone agrees on what constitutes an expression of contempt from specific persons or businesses, and that it is not always an easy call. In light of the computer-oriented articles going on about "information consumption," the way in which graphical user interfaces were once commonly sneered at as "slobberware" or being "slobber-proof" comes to mind. Merely developing and providing a graphical user interface does not in itself indicate contempt or anything like that at all. The metaphors a graphical user interface uses and how it is implemented may. For example, Maciej Ceglowski's hilarious and apt declaration that he shouldn't need sled dogs and pemmican to navigate a website's pages is one way to describe an at minimum disrespectful implementation. Or Alan Cooper's observation that a graphical user interface that provides no documentation and no safe means to experiment and get work done with it is a game, not a practical tool. I've always liked Cooper's description as a means to get through to people inebriated by the cool thing they just coded, a thing that only makes sense to them and is impossible for others to make sense of or use. Of course it is transparent to them; they made it. That other people who don't share their minds can't breeze along using the software does not indicate those other people are fools. It indicates they are other people.

I have a serious dislike for terminology and habits of speech and writing observable in much of what purports to be "tech journalism" that insistently talks about the majority of us as passive and incapable of making real choices. Somehow it always makes me think of a nest with young hatchlings in it, the sort where the chicks are still growing their first feathers, and the main thing they can do is peep and call incessantly with their beaks wide open to encourage their parents to feed them. That's at least rather better than the image that comes up most often with the term "user," which right now still tends to have more connotations of addict and social failure than its technically neutral meaning. The general corporate model of "the consumer" seems to be of a passive person whose every desire can be shaped and guided by propaganda because they are gullible and incapable of self-motivation. They don't have their own interests, they scan the propaganda feed for the latest offerings and pick from that. They don't understand how the money actually flows in say the music or movie industries, let alone food and clothing industries. Never mind of course that in real life, people in general, not just the "tech savvy," cheerfully opt to abandon cable television every day by choice, not due to economic pressure forcing them to give up relative luxuries. Or how many people may under the current legal regime be considered media "pirates" who supposedly "steal money from artists" while readily paying to attend their shows, buying their merchandise, and nowadays buying various types of direct-to-artist subscriptions.

If the self-proclaimed "tech journalists" as well as various corporate shills and captured government officials are unable or unwilling to accept that other people exist and have independent capacities to think and choose, even if those capacities may be constrained from time to time, such phraseology as "consuming information" will remain in use. The people who initiate and continue the use because of their low view of the majority of the world seem oblivious to the fact that what they have to say is reaching fewer and fewer people, unless they see it as justification for demanding more surveillance. (Top)

Browser Troubles (2023-11-06)

The logo and credits for the webkit-based, apple-hardware oriented browser icab by Alexander Clauss.

Web browsers have become something of a conundrum again, not least because the main browser projects are run by advertising companies striving to destroy our ability to block advertisements and to fend off all the other ways they try to steal our information and generally spy on us. These days I don't think many people harbour any illusions that these corporations are not de facto spying for governments for the right price, never mind the proper and ethical requirement for reasonable suspicion verified by a duly certified and registered warrant. Those of us with a willingness to read and learn from history also know that mass spying is ineffective for enforcement of just laws, but an absolute necessity for tyranny because it is so good for producing social atomization. The implications for what will happen if the mainline browsers are rendered even more porous and unable to block advertisements are already clear: people will abandon the current web in droves. That is why so many people have gone to "social media" when they think it has been implemented in a way that is ad-lite and privacy respecting. That was wishful thinking, although many people who still use such pseudo-services look to be treating them as insecure party lines, which seems the least bad trade-off if they insist on using them. But what about web browsers?

I am in no way surprised that apple is destroying safari. The process is basically the same as what apple did to the once rightly praised and widely respected ClarisWorks suite of software. To my nowadays definitely not fully up to date knowledge, the last holdout from those days is Filemaker, and it is in poor shape. In fact, it looks like the dubious "Claris Connect" may be a cloud computing offering that bodes the worst for what is left of Filemaker as locally run software. The sequence tends to run roughly as: buy the company making the successful software; add "apple" to the name; rebrand the software under some dreadful version of "iWhatever"; begin loading the software with cruft so that it runs slower and slower; drop backwards compatibility; cough up a "cloud" version that is supposed to solve all problems caused by the previous two steps. Web browsers are not going to suffer the pretence at "cloudization" except for what I expect will come next, a demand that the person using the program have an account with the corporation to use the browser at all outside of a couple of corporate domains. That is, they will be rendered into yet another version of AOL, which people rightly pilloried as "AOHell" and "thanks for the coaster" back in the day. Still, there is a nice alternative that I learned is an active going concern, Alexander Clauss' iCab. I used a version of it back in the waning days of System 8 on an iBook, and while iCab couldn't do the most up to date stuff then, it did an excellent job. Today it has a feature set that equals if not surpasses the similarly webkit-based GNOMEweb.

Truth be told, I am not sure that it is wise to celebrate the apparent and far too delayed demise of internet explorer as a tool for browsing the external web (it sounds like it is still the file browser program in windows itself). At my day job, which imposes microsoft workstations on staff, the switch to "microsoft edge" has produced the weird phenomenon of using an obviously rebranded version of google chrome made shitty with cruft and bad interface choices. Bad as in, it isn't possible to save a set of tabs from a browsing session bad (UPDATE 2024-02-17: This issue seems to be fixed, but edge is so crash-prone that it is hard to tell.). I guess it serves microsoft and head office well enough in their competition over who can spy on staff hardest to be worth the contract for the systems on the boxes. The result is an ungainly program that refuses to display pdfs consistently, hides urls no matter what, and regularly crashes tabs and complains it is running low on resources. Web browsers can be RAM and CPU hogs, but by the nature of the job all I get to do with microsoft edge is look at web pages and open pdfs. At least I was able to install an ad blocker, which sped it up wonderfully, but overall since it is not my computer and I don't manage the security policy, I keep away from both audio and video files. Hilariously, for required online training, staff are directed to use a different browser, usually actual google chrome (already installed by head office), or are not so subtly pressured to use our personal computers.

Mainline firefox is being wrecked from the inside by google's money, which is truly a shame. They have clearly opted not to provide an alternate version of firefox that does not include DRM and such execrable junk as pocket and the like, leaving it to the determined and the very technical to figure out how to find and install the beautifully pared down version Abrowser that is the default browser in the GNU/linux variant Trisquel. I have run GNOMEWeb aka epiphany, and while it is fast, has excellent built-ins and a clean layout, it handles tabs poorly and has no option to open the bookmarks library in a sidebar or kill the "most frequently visited" page. The trouble with such default pages as "most frequently visited" is that they are an absolute love or hate sort of feature. I am among those who hate it. Always have, because it is pointless when there is also a bookmarking facility. I already bookmark the sites I regularly visit, and would be amenable to a setting that said in effect, "Listen sunshine, you can have a bookmarks library with sidebar or an autogenerated page of frequently visited sites and a bookmarks library, not both!" My list of "see if there is a way to change this setting from the command line" includes an entry for turning off this page and having it show my actual chosen homepage instead. That would not make up for the lack of a bookmarks library sidebar, but it would make it an even better html preview tool, a use that makes the best of its poor tab display management.

After the firefox variants and a very few webkit-based browsers (I know of only two for sure: safari of course, and the solid but not well enough known iCab), there is a wasteland of chromium variants. Truth be told, I don't really believe google hasn't fiddled inappropriately with the chromium codebase, even though that codebase is specifically supposed to be stripped of the various spy measures and integrations that are endemic to chrome-proper. It seems to me to be rather pointless to keep rebranding chromium and pretend it is a wholly different browser, although I appreciate that browser rendering engines are non-trivial to write and debug, which is why there are only a few of them. At this point I find myself seriously wanting a web browser with a rendering engine that sticks to rendering web pages, banishing audio and video play in-browser back to plug ins if a person wants to use those at all. Reading pdfs in-browser is a nice to have rather than a necessity, and for serious reading it is necessary to switch to a proper pdf reader anyway. It is a shame that the pdf standard has never included a solid preview method so that a person could view, if nothing else, either the abstract or first page, similar to how jstor presents pdfs that are not open access yet. On the other hand, thoughtful webmasters can provide an equivalent that best suits their sites, so perhaps that is better left to them to figure out rather than hoping a committee does the right thing when so many committees have been ruined by corporate entryism.

All this said, perhaps the at least momentarily parlous state of web browsers is a symptom, not the fundamental illness of the web. With so many former participants in providing the fundamental infrastructure of the web having abandoned it by shutting down website hosting, and so many people who think they are "building a website" when they aren't, the web is only just barely a hyperlinked medium anymore. I have never agreed with the claim that "the web" needed to stop privileging text, because it was precisely the text hyperlinks that gave it its power and interest. Too many people never learn how to actually code websites by hand, or learn the necessity for actually linking to other relevant sites on their own as well as the original sources of materials they may cite or discuss. Clever as making wikipedia into a source that people could copy and reuse directly may be, the result has not been creative reuse but a great many rebranded clones that effectively abuse the fact that most corporate search engine algorithms favour wikipedia above all other sources except corporate website hits. In my own work, I have a default section added to each search that specifically says "dump all results from wikipedia or wikipedia clones, amazon, google, and yahoo." For some queries, I have to add some more exclusions for social media sites as well. The extremity of centralization and web pollution contributes to the poor performance of mainstream browsers and the lower visibility of alternatives. Neither centralization nor web pollution are inevitable, and indeed, alternatives to the current web are definitely growing. (Top)

Adventures in Trisquel and MATE (2023-10-30)

A cruelly downsampled snippet of one of the excellent desktop images provided in a stock installation of trisquel as of version 10.0 LTS Nabia. A fuller sized and detailed copy of it is available at trisquel's website along with a selection of other screenshots.

As time has gone on, I have been fortunate enough to be able to start the process of major hardware upgrades for my computer, and am looking at a possible shift to self-hosting via the wider capabilities of the FreedomBox, probably starting with a gemini capsule. Due to the most recent iteration of windows, I have many friends and colleagues who are looking into switching to a GNU/linux distribution so that they can keep using their present computers. Many of them will select a windows look-alike, which is fair enough. Others might go for a more macosx-looking distribution, although most of those and quite a few of the GNOME-based distributions have serious RAM requirements. Being fed up beyond the back teeth with windows, which I have to deal with for work, including the staff abuse now facilitated by repeated forced resets of preference settings from central office and a complete lockdown so that nobody can run a script to restore them to what they prefer (within the limits of security of course), I will certainly not be opting for anything intended to be windows-like. In the end though, the key thing is finding a distribution that runs well on the hardware in question, and I have been shifting to a free/libre software and hardware stack over the past several years as well.

One machine I administer is an older macbook air, just old enough that it is sturdy and has survived a fall onto its back corner on a tiled bathroom floor that bent its case but did not affect the screen. Aluminum being soft, a bit of careful pressure imposed on the machine with a wrapped towel between it and a solid wooden table sorted things out again. On the other hand, this type of laptop is not physically upgradeable without more effort than is really worth taking so long as the machine works, but the system software is too out of date to be dependable. I tested ubuntu on it, but ubuntu's default trackpad driver and settings utterly defeated all my efforts to adjust them into something workable. Since ubuntu 21.04 uses a version of GNOME, excellent keyboard remapping was available so that I could do neat things like assign the command keys to two different things, although in the end I just inverted the mappings of the left control and left command keys, as that got rid of the majority of keybinding annoyances. But the trackpad issue never did yield to persuasion, so the in-case-of-emergencies wired mouse came back into its own. I soon found that it was not possible to use the multiple desktop feature properly, partly due to the lunatic way it is set up by default, but in time I began to realize that there was a deeper issue: the laptop has only 4GB of RAM. On top of that, there is something peculiar in how ubuntu handles sleeping laptops, so that this never worked consistently. In the end, I did some more research and learned that Trisquel has a much smaller RAM footprint and uses the GNOME fork MATE as the default desktop. I decided to test it out, and if that didn't work well, start doing further research on freeBSD variants that might prove more suitable, considering macosx is a BSD variant itself.

In fact, I can go about the freeBSD research at a more leisurely pace since it turns out that Trisquel works beautifully. Trackpad support in Trisquel for the macbook air is brilliant, as the default sensitivity and velocity curves are well defined and assigned without intervention by the installer. Once I had reconfigured the mouse button mapping slightly, which actually worked and took only a few minutes of quality time with the dconf utility, I knew there would be no need to use the wired mouse in future, unless of course the trackpad suffers a hardware failure. While exploring the various options in dconf, I also found a feature added at one time to apple's system 7 that I have long missed, in which double-clicking on the title bar "rolls up" the window into it. People who never found this feature useful will not be impressed, but for those who did it is wonderful to have it available. (Mainline GNOME has this feature too, but to my knowledge apple has never brought it back to the macos.) Multiple workspaces work just as they should, and can be named, a feature that seems to wink in and out depending on the desktop manager and version in the more graphics-heavy options. I did have to do rather more keyboard mapping than expected as media keys are not set up by default, similar to GNOME, but that was very quick to update and no real issue.

A fiddly element I have not been able to solve in either ubuntu before I gave up on it or Trisquel yet is the hp multi-function printer/scanner/fax machine challenge. I have been able to sort out printing without having to do anything but plug the device in, and faxing was quite similar (that requires more setup, though on the device rather than the computer). Scanning, however, is a real annoyance: it simply isn't possible, because I cannot sort out how to persuade the application Document Scanner to speak to the URI for the scanner. It can correctly identify that the scanner exists, and is also picking up its URI. But when it is time to communicate with the scanner, it still has I/O errors even though it is definitely speaking to the right URI. From the look of things, the issue is probably a usb data throughput problem, as one device has an older standard than the other; even though printing is fine and so is faxing, scanner communication on faster devices demands more than the software can make up for. All that said, this seems to be an hp all-in-one issue specifically, as on testing with scanner-only units from other brands, there is no comparable issue so far. Caveat there being, I have not been able to borrow an hp scanner to test yet.

Overall, Trisquel is right up to date and running fast and stably on the macbook air, including sleeping and waking properly, although admittedly it is a bit mysterious in its automatic screen dimming and locking behaviour. Mysterious as that is for the moment, it is easily overcome via the old school habit of locking the screen by hand if necessary. That is the kind of light and easy workaround that is best to need, and I prefer that mysteriousness over never knowing if sleep mode is working or whether the laptop will wake up after going into it. In much earlier days of my working life, being a contractor, I was assigned one of the oldest and most decrepit machines at the office, one so old that it was running windows NT and had a catastrophic memory leak. To deal with this, it had to be restarted at least twice a day. This was a machine that I was primarily reading and writing work documents on, and it was too slow even to browse the office intranet. (The implications about management attitude to contractors from all this were not lost on me.) This was when restarting a windows box took even longer than it does now, so I would do my best to drag the machine along until it was just about time for a coffee break. Luckily there was one of those in the morning, and one in the afternoon at that office. Wild times! (Top)

Unfortunately, Wifi is Awful (2023-10-23)

July 2016 photograph of a netgear nethawk wifi router by Evan-Amos of vanamo media, released to the public domain for educational use via wikimedia commons.

Well, that is probably too provocative a title. Wifi isn't completely awful, except for almost every one of its most common uses these days. For instance, every cable modem provided by an internet service provider now has a wifi modem in it, many with the ability to transmit on two main wifi radio bands, an ordinary 2.4 gigahertz band and the so-called "5G" band, which just refers to the 5 gigahertz band rather than any new standard, with hopes of convincing you to spend more money on it because it sounds sort of like the travesty "5G" network for cell phones. Marketing nonsense aside, I agree this wide availability of wifi modems and a couple of bands is not in itself a bad thing. When I still ran my own wifi radio, it came in very handy to be able to revert to the cable modem's instead while upgrading my equipment. What has finally forced me to admit that wifi sucks is something quite different, and it led me to think through where wifi absolutely shines, and there are more than a few of those places.

An excellent use case for a properly secured wifi unit or units is where access to a local network is provided for some random segment of the public, who will not have fixed places to plug in a laptop or whatever. Hence just about any coffee shop, public library, or mass transportation terminal. No question, wifi rocks the house there. Security can be a real headache, and places that find themselves suffering major usage spikes will have some load balancing trouble, but those are both possible to fix. Alas technical support staff "on the spot" may not know the first trick in load balancing for wifi access points in larger areas like libraries. I am not sure why they don't get trained right away that when connection problems arise and the software and hardware of the person connecting have not changed, the thing to do is to recommend they walk down the hall away from the wifi node their machine is automatically trying to talk to so that it will pick up a different one. No, telling people they should install another browser is not going to fix that issue. But, evidently, I digress.

In a home network context, my earliest experience with having a wifi access option was at a perennial in university towns: a large house co-rented by at any one time between six and eight tenants. There tended to be a lot of churn in the people renting the upstairs bedrooms, and, the house being older, access to the modem via ethernet cable was not simple or practical at that point. Unfortunately it was not possible to use the trick I did to get around a lack of conveniently placed electrical outlets when I needed to connect my usb printer to my computer. In that case, being in the basement, I could actually run a longer usb cable along the ceiling to the right place. Luckily even at that time this was a feasible option with the cables available, as they were no longer so expensive. I used similar tactics with ethernet cable later, but again this was as a person who ended up renting the basement suite, so the ceilings were lower, and by then my internet access was separate from the other suites in the building. So in that original experience, wifi was the least expensive and most effective way to manage providing internet access to the rotating cast of university and college students who always needed at least some access to manage their classes.

Eventually I got to move to a much larger apartment building, and found myself dealing with something in an unavoidable way that previously had only been at issue at hotels during work-related travel. For anyone who has travelled and stayed in a hotel for whatever reason, wifi is the nightmare that replaced once reasonably helpful ethernet service in each room. (To date I have not yet tried disconnecting the suite's television set from the network to use its ethernet, if the televisions are even wired to the network anymore.) Turning on the wifi of our trusty device, in a hotel we will see a riot of wifi networks, the majority secured as they should be, a few not, a major proportion from entirely different buildings, and a smattering of personal hotspots and printer/fax machines. Out of that mess, it is necessary to pick out the secured network access to which is supposed to be included in the price of the room. If that works, there is usually a second level of sign-in where the hotel tries to pretend its guest intranet is "the web" but does let slip that it is tracking everything you may do online. Observing this, I switched to using a type of wifi access provided by my internet service provider when travelling, and then primarily to facilitate downloading boarding passes and the like. Now that can all be done for free in the business centre of most hotels, and so I have given up on internet access via hotel networks while travelling. It can be fun to peek at the weird names people give their personal hotspots though, I admit.

In larger apartment buildings, while we don't have the building intranet issue to cope with, we do have the riot of wifi networks to deal with instead. It didn't use to be a riot, and I used to use wifi and did indeed appreciate not having to have ethernet cabling set up when working on my laptop and being able to stream music or whatever from devices that are ethernet-incapable in the first place. After all, my apartment rentals have been of small units, so distance from the wifi radio for my own network and potential interference from the structure of the building have not been issues. And I should acknowledge that yes, there may be many other wifi networks still that do not advertise their names and so are not obvious to a typical consumer wifi card. Or rather, these things had not been issues. Having moved to a warmer part of the country where most apartment buildings of a certain age are built of wood, I soon discovered that both weird signal interference effects and interference from other wifi networks were going to be matters to be managed more actively. A first world problem to be sure, and when it caused difficulties for serious work, out came the ethernet cable and a shift to saving certain tasks for the office where the network was all wired and much faster anyway.

Where I live now however, I have had to concede defeat and revert to ethernet altogether when using the internet at home, except for the small amount of music streaming I run through an older ipad in the sweet spot for wifi in the apartment. What has happened, of course, is partly the effect of so many people working from home and otherwise needing to make more use of the internet at home. Accordingly, they have added capacity when they could, including capacity to allow many members of the same family to be online at once, more than the typical four or so ethernet sockets in a cable modem generally allows. To be sure there are people making judicious use of routers as well, but I suspect as a matter of quick deployment and what is already included, more wifi has been added than ethernet. Now there is simply too much interference for wifi to work well in my apartment, but, luckily, ethernet cabling is quick and easy here. I went to pick up more ethernet cable, and also looked into a better grade of router with an eye to ensuring my ability to apply firmware updates and hardware repairs, over the salesman's efforts to persuade me to buy something that would provide a stronger wifi signal. Besides the annoyance of him ignoring the technical issues I was actually seeking to handle, he seemed unable to understand that more powerful wifi transmitters are actually the problem when it comes to apartment block wifi, not the solution. Effectively turning up the wifi volume just means that yes, the guy five blocks away can see your network, and if he's really unlucky his signal is messed up by interference too. But when everybody does that, the end result is that for consistent internet access when carrying out tasks that move more data, wifi sucks. (Top)

Website Optimization Follies (2023-10-16)

Kotetsu the ginger cat, photograph by Yoppy, August 2011 via wikimedia commons under Creative Commons Attribution 2.0 Generic license.

I don't know of anyone who doesn't think that things have gone awry on the web, although what specifically has gone awry and when the going started is not universally agreed on. The people gung-ho for the corporate takeover of the web in the form of non-stop advertising and surveillance because supposedly otherwise "it couldn't pay for itself" are likely to define things as just about on track now after a bad start from a mostly military and academic early base. There are those who would date the trouble to when pornography first went into serious distribution via the web, with its pressure to grow "multimedia" and such evils as early advertising banners and pop-up and pop-under windows. "Social media" is another development that can be identified as a point where things began to go awry on the web. Or perhaps the apparent implosion of blogging, once so vibrant with its sensible apparatus of rss feeds and that next stage of the webring, the blogroll. Well, except blogging didn't exactly implode on its own, as blogs suffered one of the earliest undeniable examples of google abusing its power over search results when it began the process of downgrading smaller websites and blogs in favour of what has become the roster of a new-style cable television line up with a growing portion of every virtual hour devoted to commercials. The open pursuit of surveillance because supposedly spying on everyone online incessantly will make them helpless putty in the hands of propagandists is more recent still, but reflects a clandestine practice that was much older. All these complex things have contributed to the terrible mess the web is in now, and which one marks "the event" for a given person will derive from the things they use the web for. But one cause is not spoken of much outside of the assorted people who spend considerable effort building and maintaining non-corporate websites online, and that is the ways in which web standards have been and are still manipulated. Those manipulations have underwritten many of the problems we see today, including moves that support the abusive takeover of the web by so-called "web applications" that depend on making heavy use of client computers and making it difficult to share links or other data between them.

UPDATE 2022-06-23 - I have happened on Rohan Kumar's site, and he has some interesting thoughts about website optimization on a post that he continues to update as he learns more and web standards change. It is Best practices for inclusive textual websites. Quite a lot of it deals with adjustments to the hosting web server, which may not be immediately usable for many, but just skim over those to the other material, which deals more with page design, and scripting recommendations. A quote that pertains specifically to the issue that started this thoughtpiece off: "To ship custom fonts is to assert that branding is more important than user choice. That might very well be a reasonable thing to do; branding isn't evil! That being said, textual websites in particular don't benefit much from branding. Beyond basic layout and optionally supporting dark mode, authors generally shouldn’t dictate the presentation of their websites; that should be the job of the user agent. Most websites are not important enough to look completely different from the rest of the user's system." It is also worth checking out the brief, to the point, and only partly satirical best motherfucking website. The reference to men growing their beards out is kind of strange, but the general point still gets across!

Unfortunately, all too many webmasters have been lured into going along with this latest enclosure trend by a fake carrot labelled "consistent control over the appearance of your website across devices." I was flabbergasted recently to read one webmaster's careful enumeration of the research and steps they undertook to optimize the size of the images they use and their decision to drop javascript altogether as not necessary to their use case – rounded off with a long section on all the effort it took to optimize the web fonts their site uses by deleting glyphs they don't use and so on. So, with all this effort at optimization, they are nevertheless so intensely committed to controlling minutiae of their website's appearance that they do their utmost to impose web fonts? And more than one of them, so that it makes sense to them to delete unused glyphs? This is the web, not a printed work, and I was not reading a pdf either. This person's site did remind me why long ago I turned off default loading of third party images: besides the security risks and spying annoyances like web beacons, they slowed sites down terribly and in my experience often don't add much. Let alone that I have been online long enough to remember when third party images were rare, because linking to somebody else's resources used up their bandwidth without giving their site traffic, and that was considered at minimum bad manners. Hosting site images locally is an old optimization trick, not just good manners, and of course the person whose long post I am summarizing was doing that. But the fonts, that blew my mind. This goes along with my growing frustration with people who insist that they can't write and code a website without plenty of study on, say, the w3schools website or similar. If a person just starting out in website building saw that post, and also gave in to the claims that they "need" blogging software to have a site at all, then they would be all too likely to conclude it just isn't possible for a regular person to make a site these days. And they are so wrong, but these are the impressions that are turning creative, interested people away, and leaving a blighted space for corporations and other at best dishonest actors to move into.

I do commiserate with the webmasters who build websites and feel total frustration about how they can't seem to get the damn layout to stop doing appalling things when loaded onto a small screen device, or find that the most ideal fonts are not there so the text ends up with the screen equivalent of rivers and word or line orphans. I understand the urge to pin that shit down with web fonts, loads of javascript detection and fine tuned css with conditions, fierce insistence on using divs because tables are supposed to be permanently deprecated from use as layout devices, and more javascript in hopes of preventing certain ways of using the site that could spoil the balance of all that hard work. I agree that it does seem like all this should be fixable by using code, by updating web standards and pressuring browser makers to support them. There is a definite allure to the idea of somehow making the web into a sort of super ebook, but better, with more aesthetic control and all the advantages of multimedia. But as the incessant gobbling of bandwidth and processor cycles by the most technically "up to date" websites demonstrates, many of which are actually web based applications now, this is a sort of better that can never be achieved. Worse yet, its demands on hardware and software, with the attendant problems of bigger codebases and weaker security, are steadily shutting out the very site visitors that most people building websites or even making a quick webpage hoped to reach.

The attitude that seems to have gotten entrenched out there strikes me as all wrong. People don't demand a new and improved pencil with a thousand new features so that it can also be a pen, a paint brush for both watercolours and oils, and a pastel. Evidently that is absurd, because of its physical limitations, yes. But there is a reason that pastels are not regularly encased in wood for use, though certain types of special coloured pencils may be very similar. There are watercolour pencils, but they have their own niche in making artworks; they cannot take over the tasks of watercolour brushes and colours in blocks. Yes, the web has been augmented so that it does not have the same limitations as when the early web browser mosaic was first released into the wild, and many of those augmentations are good ones. All too many of those augmentations, however, are bad ones, partly because they seek to do two contradictory things. A bunch of those alterations are meant to make it harder to share code and resources by hiding it all behind proprietary fronts of various types, and therefore make it appear much harder to actually contribute online rather than passively consuming or reactively pouring outrage onto monetized "social media" and comment boards. They constitute the "make the web into newfangled cable television" stream. Still other alterations keep trying to fix "building a website" by imposing a single "everybody needs to use a database content management system" model, whether the site is dynamic or static, and even if there is only one main contributor. The people working on these are sure that it is just too hard to build websites otherwise, so they have built free/libre software versions to prevent cost and licensing from being major barriers. They make up the "throw more software and features at it" stream.

Tragically both streams reflect a founding assumption, not necessarily shared or agreed to at all by the contributors to these projects, that most people who are willing and able to build sites online are actually pretty stupid. That there is no way they will be interested in exploring, learning, and building online if they have the time and ability to do so, unless they are handed elaborate software that will do most of the work for them. Even for people who are striving to make a basic living with the help of their own website, it is not necessary to take on the crazy software overhead being flogged these days, nor to go along with this rotten founding assumption by accident or by design.

One of the easiest ways to optimize a website is to accept that it is a particular medium, one that does best when its limitations are respected, limitations deriving from the fact that bandwidth online is not infinite, nor is the power of any given computer. Resist the urge to try to control everything and get near exactly the same appearance in all cases, and go for options that gracefully adjust. Don't get stuck on wretched web fonts; instead set up stylesheets to call for an ideal font that is commonly available on many computer systems, with a fallback to serif, sans-serif, or monospace font families. Then forget about it! Go on to other things. The Moonspeaker header uses an obscure, free to use font and maintains its appearance because it is a jpeg image. The large version is 24 kilobytes, the small version is 7. For years those were pretty much the only images on the site, because back then it was more work to find images just to spruce things up, and everything else had to be based on my own artwork. There is nothing wrong with having a mainly text-based website, nor with having actual full bore paragraphs. There is also something to be said for pages that provide audio and video being less busy, so as not to take away focus from the main show. Most of this can be managed with one or two programs: a good text editor that supports code colouring, and an ftp program to copy files to the server the website will be served from. I am leery of depending on editors provided by free or paid web hosts, because that typically makes it more complicated to maintain a local copy that serves as both backup in case of server failure and testing space for experiments. Neophyte webcoders can add a simple image editor if they wish once they have the basics under their belts.

Perhaps it sounds like a cliché, but the best optimization plan is genuinely not to have all the bells and whistles to start with. Starting with the minimum software to make web pages is wonderfully clarifying, because it supports focus on the question of "do I need that to do the core of what I want?" It's easier to add bells and whistles later if you must than to try to excise them afterwards. (Top)

Pipelines Versus Trains? The Answer Is Not Obvious (2023-10-09)

Section from a steam pipeline that burst due to failure after internal erosion taken by Uwe Arams of CE Photo, april 2015, via wikimedia commons, following the terms requested for reuse on a webpage.

For anyone who saw the grim footage from lac mégantic after the terrible explosion in 2013, when a runaway train carrying crude oil derailed, it will come as no surprise that a great deal of not so honest corporate public relations money has gone into claiming that "obviously" pipelines are better in terms of safety for hydrocarbon transportation. Quite understandably, many people at lac mégantic think the same. The 2013 explosion came down to evil corporate policies incentivizing unsafe braking practices that combined with a flammable product, enough of a gradient, and terrible luck to kill 47 people and flatten lac mégantic's downtown, and correcting those policies is an absolute necessity no matter whether there are pipelines or not. Still, this cruel incident is not the slam dunk argument for building more hydrocarbon pipelines that the companies interested in building them want us to think it is, much as people affected by disasters like that at lac mégantic have every right and reason to want it to be. Would that it were!

The trouble is, pipelines by nature must operate under pressure, and that leads to still other risks that are not uncommon. Certainly outright pipeline explosions, as in the case of natural gas pipelines, are comparatively rare, just as train car derailments are. Yet less dramatic pipeline failures are not so uncommon, and very few of them leave spectacular pipe debris like the chunk of blown out steam carrying pipe in the illustration to this thoughtpiece. Gaseous leaks in inhabited areas typically draw attention right away because of additives that give the gas an especially foul smell, something like that of rotten meat. Since gaseous hydrocarbons are highly flammable in concentration and often heavier than air, they are a non-trivial risk at any time if they are accidentally released. To date, those are usually the result of equipment strikes, themselves a product of trying to hurry things along and not use services like "call before you dig." The case of liquid hydrocarbons like crude oil is different on several levels.

For one thing, since liquid hydrocarbons are not widely used in homes for heating and energy generation as gaseous ones are, there tend to be far fewer oil lines in residential areas or crossing through current downtown areas. Oil or diesel lines are more likely to supply larger industrial installations, such as airports and refineries. The pipes used are often, though not always, larger than their gaseous counterparts, and need especially frequent cleaning to keep them from clogging up. This is just a fact of liquid fractions: they inevitably carry some heavier stuff with them even after refining. Overall, companies that build and run such lines like them to be as straight as possible, because any bend not only slows down the flow, which may force the addition of a pump station to get the pressure back up, but also tends to accrue the heavier stuff faster and is harder to clean out. Lines carrying liquid hydrocarbons are more likely to develop the same sort of leaks older and well-used garden hoses do: pinhole leaks, and sometimes outright breaks along older and weaker seams. Hence there are whole programs of maintenance and monitoring of these pipes, although sometimes it can be surprising how little there is from an outside perspective. After all, leaked oil is lost money. It is not easy at all to clean up spilled oil, especially longterm, undetected leaks, let alone accidental in-town gushers that cities like burnaby in british columbia can speak to. Faster, more dramatic pipeline spills of that type are not easy to deal with, as the hydrocarbons give off considerable fumes that people can't tolerate for long, and what gets loose on its way to a refinery is thick and sticky even if it did not come from the tar sands. Very difficult to clean up.

But, a person might quite reasonably protest, hard as an oil spill or gas release may be to clean up, that is trivial compared to human lives. That is absolutely true. The difficulty is that people dealing with the aftermath of pipeline failures can and do suffer significant health impacts. Many pipelines have been built deliberately straight across or through major water sources, so that a spill is not just "a spill" but a critically dangerous water pollution event. Slow leaks may contaminate huge areas and volumes of soil, rendering it impossible to use for growing crops for at least the immediate future, and leaks that impact aquifers are a serious nightmare because they simply can't be cleaned up. So the trouble is, while it is absolutely true that pipelines may be able to mitigate risk of acute major disasters like explosions, so long as they are adequately monitored and maintained, they may still produce critical slow-moving disasters that cannot be corrected in a human lifetime. It's really quite a maddening situation.

If we think back to more local scale questions though, it is clear that which means of transporting any dangerous product is better will very much depend on local conditions. In some places the answer may be trains, in others pipelines, in others even trucks. At one time everyone wanted a railroad to run right through their downtown, because that in itself was a major source of potential moneymaking opportunities. Before tourism was marketed beyond the especially well off, anyone farming a crop in such quantities that it had to be transported away to a larger market certainly needed a railroad nearby. But now it is far more common to transport dangerous substances by rail, including many that can't be transported safely in any pipeline. Many of those products were not moved in this way when the railroads were first constructed, nor was their regular movement by railcar contemplated. The railroads simply weren't designed for such things. Pipelines were a feature of hydrocarbon production almost from the beginning, because moving liquid by pipe was already a longstanding, familiar practice, although again, very different products at much higher pressures changed the risk factors significantly. So there is another tangle, of trying to retrofit older infrastructure for newer uses, where there is heavy pressure to move fast but not spend too much.

The presence of such apparent catch-22s indicates that the root of the problem is not where we are being guided to look. The pressure to build more pipelines, which takes years, in order to move more oil and gas in hopes of catching the latest price peak in the market, doesn't make any sense. The pace of pipeline building cannot be matched to the peaks and valleys of oil prices, nor does their presence or absence relate very much to the price of oil at the pump or of fuel for home heating, be it natural gas or diesel. The difficulty lies in the inappropriate use of a limited resource that also has severe environmental impacts when used in huge amounts over a short time. The thing we are not supposed to understand or talk about is how the major consumers of oil and gas across the board are the militaries of the world, followed by certain branches of industry dependent on huge hydrocarbon inputs to support constantly increasing production for the impossible to maintain increasing profits demanded by capitalist fundamentalists. It is true that the so-called "western lifestyle" cannot be successfully adopted by everyone in the world, not least because there is literally not enough oil and gas to support it. Yet it is also true that most people living some facsimile of that lifestyle could live well without such things as "consumer goods" that barely make it through their actual use, or accruing bric-a-brac that goes regularly into the ever growing landfills. Many people living in "western" countries actually can't afford to live according to a "western lifestyle" in the first place, regardless of their own opinions on the matter.

It is possible to do better by all of us, from those rightfully afraid of the risks raised by trying to transport greater quantities or more marginal hydrocarbons, to those who want nothing to do with dangerous corporate policies that turn their daily work into an active death dealer for them and their neighbours. It is possible to do better by means of just changes that don't punish people trying to make a living. Seeing and acting on the possibilities won't happen if we persist in getting caught up in a not so helpful argument over which is a "better" transportation option for oil and gas. (Top)

Volunteer Challenges (2023-10-02)

Snippet from the online resources page of today's revived mass observation project in england, accessed june 2022.

Among the excellent snippets that have turned up in my reading lately are some instructive reflections on the difficulties raised by trying to gather sociological data from a wide range of people without paying them for it. The author of the paragraph quoted below, Dorothy Sheridan, is far too diplomatic to put it that bluntly, but that is the basic issue. In a capitalist environment in general, let alone during the period of the data gathering exercise in this case, who has time to answer surveys is very much a question of class. Dorothy Sheridan is a long-time curator of the archival materials from the mass-observation project originally founded just before the second world war in britain. This excerpt is from her 2002 anthology of selections from the original iteration of the project, Wartime Women: A Mass-Observation Anthology 1937-45 (London: Phoenix Press). The illustration here comes from today's mass-observation, a revival that takes up the task of its predecessor again, chronicling the times and views of as broad a cross-section of the british population as they can manage. They still face many of the same sampling challenges.

Just as there are gaps in the subject matter, there are absences among the people. Few working-class people became diarists; even fewer working-class women. Mass-Observation has sometimes been scorned for its middle-class bias but at least in relation to the panel of volunteers, this isn't entirely fair. A project which depends upon the willingness of people to take part does not operate in a cultural vacuum. Whether people see themselves as having something to offer is dependent on the cultural notion of what it means to be 'a writer' so it is not surprising that M-O appealed to students, librarians, clerical workers, teachers and journalists, people from the middle and lower-middle classes, and working-class people involved in politics and adult education. M-O's radical stance, its determination, certainly in its first years, to be seen as independent, anti-establishment, and democratic (in that it championed the value of people writing their own history or creating their own social science), drew volunteers from the left, from the same social networks as the WEA [Workers' Education Association], the New Left Book Club, the left of the Labour Party and sections of the Communist Party. (page 7)

This is an excellent summary of the general issue for any such project, regardless of political associations. It is too bad that being "left-" or "right-" associated is prone to dissuading people who do not share the same politics from participating, especially in the case of a project of mass-observation's type. While its founders had a particular politics, it does not seem like they deliberately used their politics as a filter on who they accepted as participants. That said, it is not a given at all that people who had different politics avoided participating. The sort of political polarization in evidence today cannot be simply mapped onto even the fraught period just before and during world war two. Still, it seems the toughest element in getting people on board, after class, was framing the task in a way that did not immediately make it sound like something they couldn't do, even if they kept a regular personal diary or work log analogous to one of the activities mass-observation asked people to do.

The mass-observation case is also a wonderfully accessible example for learning how historians engage with complicated masses of records. The introduction to Wartime Women is rather short from that perspective, although of course the book is meant to focus on what the women wrote, not perhaps arcane researcher-oriented details. A short introduction, but a packed one, in which Sheridan describes the records she is excerpting and the ways she edited and selected them for the book. For those skeptical of historians' or anyone's ability to make sense and responsible use of inevitably incomplete and biased records, Sheridan explains indirectly how both things are possible. The subsequent chapter introductions add more specific detail again, also revealing more about the challenges facing women who did take part in mass-observation work. Reading through the text, it looks like an important element of winning the trust and commitment of contributors was that their identities would be shielded under a pseudonym and, where necessary, by obfuscation of their specific location. At times contributors must have taken care to withhold details of their geographic surroundings on their own initiative, out of appreciation of wartime security concerns. A whole other level of care and respect for the privacy and safety of everyone involved is evident that is certainly not widely held outside of academic-based studies today.

Then again, people will find time and effort to put in some way if they can see that what they do will have a meaningful positive impact on their lives, or on those of their own children or of children more generally. Mass-observation's data did contribute to material and positive change in britain, one of many elements in a sadly truncated era of genuine sociopolitical change that acted against the negative factors that had helped fuel over fifty years of war in the twentieth century alone. Even if a project does not have as wide a range of volunteers as would be ideal, it may be that the ability to undertake and successfully complete the project indicates important social change in and of itself. At the very least, it may indicate nascent real change that may be well worth nurturing. (Top)

What Idiot Decided it was Time to Reintroduce SCSI Voodoo But Worse? (2023-09-25)

SCSI cable with samples of terminators in a photograph by Speters33w, via wikimedia commons released to the public domain.

There is something uniquely passive aggressive in the way that corporations are manipulating the usb-c standard. I vividly remember the introduction of usb-a, also known as ordinary usb, the kind that generally works and doesn't put you at risk of blowing something up or burning the house down if used with a charger. On one hand it was the beginning of apple's obnoxious manipulation of the peripheral and cable market. On the other hand, at the time it was genuinely a good idea, because SCSI, "small computer system interface," was an absolute nightmare. At one point in the bad old days before wide usb-a adoption, I had a hard drive and a cd drive connected via that cable type. The 56k cable modem and the printer used PCI ports, which I actually remember fondly. They had their issues but at least were predictable. The SCSI connected devices were a nightmare, because despite the clear logic of how it was supposed to work, real life behaviour was different indeed. I had the hard drive and cd reader daisy chained by necessity because the computer had only one SCSI port, but that was supposed to be fine so long as all the connectors were seated firmly and the unused port in the last device was closed using a terminator. It is likely that the cd drive in particular had one defective and one good SCSI port, since I had tested the cables and found them all functional, hence all the headaches and my accidentally learning by real life example about combinatorics. For whatever reason, SCSI had become a notoriously difficult standard to depend on by the mid to late 1990s, and it was common to hear colleagues refer to "SCSI voodoo." To my knowledge the SCSI standard is still in use, just not much in the consumer market. I suspect the really good SCSI equipment of that vintage was primarily in corporate environments. So usb-a filled a serious gap for anyone setting up connections to small peripherals.

As time went on, it made sense that newer usb versions focussed on greater data transfer speed would be developed, although the sight of more and more connector types to fit smaller devices was a bad omen. Of course, for many devices it wasn't about just transferring say, photos from a camera, but facilitating image capture direct by the computer from the camera for video. So it shouldn't have been a bad omen really, it should have been a logical outcome fitted to those cases. However, due to the apparent apple corporate policy of making their products impossible to repair by making them hyperthin and gluing them together, they also became severely allergic to ports. From a high water mark of eight ports on the old macbook pros, which were designed to allow easy RAM augmentation and hard drive changes (one ethernet, one firewire, two usb-a, one usb-c, and a mini-pci card slot, though why that card slot instead of an sd card reader is a bit of a mystery), we are down to two ports on a current macbook air, both the latest usb-c. Well, the result is certainly a smaller computer for those sticking to laptops, plus a great deal of misery to do with cables when neither wifi nor bluetooth connections are possible. This might have been an obscure apple-only issue that would go onto the list of things that people who purchase apple products get sneered at over, but the marketing of usb-c cables in "charge only," "data only," and "charge and data" variants has caused issues far beyond apple's wares. It has impacted the majority of cell phones, tablets, and of course desktop machines. There is now a small and growing number of articles warning people against possible issues with their usb-c cables if they try to use the wrong cable for the wrong task.

This has added even more trouble on top of the always present challenge of cable and connector quality. Most of us have learned the hard way that even formerly trustworthy "name brand" cables may turn out to be junk, and no-name cables are an unhappy crap shoot. This is especially bad when dealing with usb for charging devices. Slow or non-existent data throughput is a mishap, but unpredictable charging behaviour is a serious risk. I have seen a few different "watch out for usb-c" articles that rapidly descended into blatant product placements for particular cable brands; these are worth checking out for their wording and to see the comments. It doesn't take much to identify planted comments elsewhere using the same stock phrases after that. Marketing aside however, between shoddy or counterfeit products and a standard whose variants are not consistently marked, we are riding close to a consumer SCSI situation again. So much could be solved at minimum by insisting that charging cables always be, say, red, or have their connector ends be red; data cables blue, by analogy to common ethernet wire; and leaving grey for cables that do both. Of course, that would doubly infuriate most cell phone sellers, who like to differentiate their products in part by the distinct colouring of their cables and power supplies, following apple's older practice, although I hear apple is trying to sell phones without cables and power supplies now.

Hopefully the european union's demands for an actual usb-c standard across cell phones will help improve the situation. By all accounts the european union's purpose is to do something about e-waste, which is well worth it. Hopefully this will also help drive an improvement in marking usb-c cables clearly and consistently according to their capabilities, or even abandoning power only or data only usb-c cables in the first place. The problems of production quality, the prevalence of fakes, and the false advertising of poor quality equipment as adequate remain a separate challenge. (Top)

Another Attempt at a Little English History (2023-09-18)

Engraving by Gregoire Huret of a woman representing rhetoric by her pose, clothing, and accessories. The original document is held at the metropolitan museum of art, which donated high quality scans of it to wikimedia commons under Creative Commons CC0 1.0 Universal Public Domain Dedication.

While the previous thoughtpiece on "english" as a school subject wandered merrily off in a couple of different directions, in the end I found myself still frustrated. What I found didn't really explain how or why the english courses that even a most definitely not an english major like myself took had the content they did. They didn't stick to writing different types of short pieces, rhetoric, reading comprehension, or grammar. Not even the genuinely difficult grammar of the more annoying subordinate clauses that today we are often instructed to rewrite to make them easier to read rather than make the reader follow subject and object pronouns accurately. Fun as it can be to reconstruct Dickens' london or debate the philosophical and ethical points taken up in Frankenstein or Sense and Sensibility, this seemed a bizarre thing to do as a class. Why was anyone expected to do this for marks in the first place? It couldn't be about trying to save teachers and other people faced with marking piles of papers from near terminal boredom, because the number of books, papers, and poems we studied was so consistently small. Always from a minuscule canon. On one hand, there are convenience and economies of scale in such limited numbers of items, risk of boredom aside. Still, I couldn't let go of the question of why this was considered appropriate long after we had learnt the basics of short essay writing and the like. Or why we were expected to start discussing the stories in that intensive close-reading way that sometimes ran off into areas rather like the weird patches in some mathematical functions. The places where the calculations get stuck at local maxima or minima, or shoot off to infinity, or suddenly come out to zero, but something about the calculations looks off even though it seems like the steps were logical and accurately done. Well, it turned out that a totally different book I had to read for another project revealed that I had gotten my ideas about what "english" was for quite back to front. The book in question is On Post-Colonial Futures: Transformations of a Colonial Culture by Bill Ashcroft (Bloomsbury Publishing, 2001). A few excerpts from one of the early chapters are instructive indeed.

There is probably no discipline which shares the peculiar function of English in the promulgation of imperial culture. History and geography constituted unparalleled regulatory discourses for the European construction of world reality in the nineteenth century, but they were widely studied in all European countries. No other imperial power developed a subject quite like 'English' in its function as a vehicle of cultural hegemony, and no subject gained the prestige that English achieved in the curriculum of the British Empire. (page 7)

When we examine the surprisingly recent development of this field called English literature we discover how firmly it is rooted in the cultural relationships established by British imperialism. Not only is the very idea of 'culture' a result of the European political subjugation of the rest of the world, but the construction of Europe itself is inextricably bound up with the historical reality of colonialism and the almost total invisibility of the colonized peoples to European art and philosophy. Why, we might ask, did cultural studies emerge out of English departments, of all places? Quite simply, the discipline of English was conceived, initiated and implemented as a programme of cultural study. Virtually from its inception, it existed as a promotion of English national culture under the guise of the advancement of civilization. Most histories of cultural studies focus on the work of Richard Hoggart, Raymond Williams and E. P. Thompson in the 1950s and 1960s, and the establishment of the Birmingham Centre for Contemporary Cultural Studies in 1964, but, in fact, the founding document of 'cultural studies' is Lord Macaulay's Minute to Parliament in 1835.

This document, as Gauri Viswanathan explains, signified the rise to prominence of the Anglicists over the Orientalists in the British administration of India. The Charter Act of 1813, devolving responsibility for Indian education on the colonial administration, led to a struggle between the two approaches, ultimately resolved by Macaulay's Minute, in which we find not just the assumptions of the Anglicists, but the profoundly universalist assumptions of English national culture itself. (page 8)

"Maculay's Minute" was specifically referring to education in "british" india, and can be read in full today at Frances W. Pritchett's website at columbia university. Another original source referenced below, Henry Newbolt's report on The Teaching of English in England is also available.

The advancement of any colonized people could only occur, it was claimed, under the auspices of English language and culture, and it was on English literature that the burden of imparting civilized values was to rest. It worked so well as a form of cultural studies because 'the strategy of locating authority in the texts of English literature all but effaced the sordid history of colonialist expropriation, material exploitation and class and race oppression behind European world dominance.' English literature 'functioned as a surrogate Englishman in his highest and most perfect state.' One could add that without the profoundly universalist assumptions of English literature and the dissemination of these through education, colonial administrations would not have been able to invoke such widespread complicity with imperial culture.

Consequently, English literature became a prominent agent of colonial control; indeed, it can be said that English literary study really began in earnest once its function as a discipline of cultural studies had been established, and its ability to 'civilize' the lower classes had thus been triumphantly revealed. To locate the beginning of English at the moment of Macaulay's Minute is to some extent to display the provisionality of beginnings, for this beginning is preceded by a significant prehistory in the emergence of Rhetoric and Belles Lettres in Scotland and the teaching of literature in the dissenting academies. The explicit cultural imperialism of English is preceded by a different cultural movement issuing from the desire, in Scottish cultural life of the eighteenth century, for 'improvement,' the desire for a civilizing purgation of the language and culture, a removal of barbaric Scottishisms and a cultivation of a 'British' intellectual purity. (page 9)

The ideological function of English can be seen to be repeated in all post-colonial societies, in very different pedagogic situations. Literature, by definition, excluded local writing. (page 10)

The conflation of the 'best cultural values that civilization has to offer' and English literature (i.e. the 'culture and civilization' form of culturalism) is present from Macaulay through Arnold, but it is in the Newbolt Report that it becomes an issue of national policy. From a post-World War I fear, among other things, of the power of Teutonic scholarship, and particularly of its tradition of philology, Henry Newbolt was commissioned in 1919 to conduct an enquiry into the state of English. The report, published in 1921 as The Teaching of English in England, became a best-seller, and established the study of English literature firmly at the centre of the English and colonial education systems. The language of the report and its pedagogic assumptions make it clear from which springs it drew ideological sustenance. (page 11)

What a shame to spoil the experience of reading and engaging with books written in english in this way. The increasing incongruence between the claims about "english culture," the content of the books used in the attempt to indoctrinate "englishpeople" in a sense of superiority and everybody else with a sense of inferiority, and reality has generated considerable cognitive dissonance. It has also intensely frustrated the many people who wanted those they deemed inferior to be duly indoctrinated by the newly created english degree, yet prevented from studying classics, the original upper class twit qualifying degree that the english degree was intended to replace for them. It probably didn't help that classics, now more often called greek and roman studies, turned out to be not so easy to keep the plebes out of, because it is interesting and indeed useful in spite of its design. The same can be true of "english" as a subject, so the desire to keep it as a means of conspicuous consumption for one group and rote indoctrination for others didn't work out.

Truth be told, I still think it makes better sense to dump what is mislabelled "english" altogether, renaming the subject literature studies. Then a person may undertake literature studies in any language, and best of all, if they take literature studies in english, they needn't take on the mean legacy of colonialism, brainwashing, and pressure to become an utter prat. Without the obnoxious parts, it might just be less difficult to argue for the ongoing existence of departments in universities where literature is studied, or for instructing students in secondary school in more than just practical rhetoric and grammar apart from the need to keep them engaged. That said, I do think it is fair to insist that "literary theory" and its close reading techniques, developed originally as a means to have something like philological analysis, be purely optional for students not planning to undertake degree majors or minors in "english." (Top)

Marketing by Obfuscation of a Pre-Existing Technique (2023-09-11)

Sample image from a three-dimensional computer-calculated model of the Earth's interior by a team led by Jeroen Tromp, july 2019. Image courtesy of wikimedia commons under creative commons attribution 2.0 generic license.

Way back in the mists of time, I worked on geophysical research, including development of plausible models intended to remove "noise" from data so that it would be easy to see subsurface structures in the processed datasets. In a way, what I was expected to develop and apply was the equivalent of corrective lenses for astigmatic eyes. This is a non-trivial problem, in both the mathematical and colloquial sense of the term. Many of the same mathematical techniques and much of the research from this area are applied in medical tomography, where they work far better because, compared to the Earth, the human body is a very homogeneous medium. Plus, we know quite a lot about the interior of the human body, so if the algorithm applied in the medical tomographic case gives the right answer for normal body tissues and structures, chances are it can be trusted to give an accurate sense of what is going on where tissues and structures are not normal. This is a big part of why medical imaging using fancier methods like magnetic resonance imaging is usually applied to more than just where the trouble is, taking in surrounding tissue as well. When it comes to the Earth, it is of course much larger, generally quite heterogeneous, and our methods of measurement are crude. The most widely used and effective wide scale measurement method is based on directed sound waves, which are emitted from a source and then recorded at multiple distances.

Back in april 2022, an article popped up at phys.org dealing with a new approach to "geophysical inversion," the fancy term for creating a model to reproduce the observations from sending out the sound and recording it at various receivers. The idea is that an accurate model that can be used to calculate something close to the measured data is presumed likely to match up with the actual structures below the surface of the Earth. The real data is what we "see" without the lenses, the model is what we would "see" if we had the lenses on. If the model is close enough to correct, it can be used to define filters to clean up other data from the same area, and best of all by some lights, to fill in gaps where there is little or no data. In principle, this all makes sense and should work, because rocks, sediments, and subsurface liquids don't act randomly and we understand them fairly well as long as the data we are working with isn't too deep. Once we get beyond the relatively shallow portions of the Earth's crust, things get much more difficult, because while the rocks may be much more homogeneous (that is not a given), they are not solids, or at least do not behave as solids, but as dense liquids under high pressure. At least, that's what logic and what we know about physics suggest, and so far recordings from deep earthquakes are in line with that, as is the complex data gathered in areas like the infamous pacific ring of fire with its many volcanoes.

As the phys.org article by Morgan Rehnberg, Testing A Machine Learning Approach To Geophysical Inversion, explains, the most serious difficulty for geophysicists trying to carry out subsurface modelling is the lack of actual data. The fewer actual measurements we have to start from, the more possible models can fit them. The only way to narrow things down is to add data and, if possible, additional boundary conditions. Boundary conditions might include such things as parameters that reflect more general knowledge about the geology. Observations from the canadian shield are taken in a very different geological environment than that in the rocky mountains, for instance. As Rehnberg notes, models tend to be oversimplified compared to the real world, so that adds further difficulty. So in an effort to overcome having less real world data than is ideal, and to counter the tendency for proposed models to be oversimplified, a team of geophysicists is trying to apply what the article refers to as a "machine learning," neural network approach called the variational autoencoder. Not quite so clear from the article is that there is another reason models tend to be oversimplified, even today with far more powerful computers readily available than what I used to work with. While more real world data is better, it is very difficult to calculate with at a speed great enough to be worth the energy and programming time. So a big element of the hope here is that using "machine learning" methods will permit faster model testing and easier adjustment to better match the real data on a sensible time scale. Obviously if we have to wait a week or more for enough of the model to be calculated to see how well it fits the real life data, it is going to be difficult to adjust it.
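
To make the "fewer measurements, more possible models" problem concrete, here is a minimal sketch in python with numpy. It is not taken from the paper or the article; the forward operator, the "true" model, and the regularization weight are all invented for illustration. It simply shows that an underdetermined linear system admits wildly different models that fit the same data equally well, and that adding a boundary condition in the form of a smoothness penalty picks out one preferred model.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: a 1-D "subsurface" of 50 cells, but only 10 measurements of it.
n_cells, n_obs = 50, 10
true_model = np.sin(np.linspace(0, 3 * np.pi, n_cells))   # invented "true" structure
G = rng.normal(size=(n_obs, n_cells))                      # invented linear forward operator
data = G @ true_model                                      # the "observations"

# Two very different models that both reproduce the observations essentially perfectly.
m_min_norm = np.linalg.pinv(G) @ data                      # minimum-norm solution
null_proj = np.eye(n_cells) - np.linalg.pinv(G) @ G        # directions the data cannot "see"
m_alternative = m_min_norm + 5.0 * (null_proj @ rng.normal(size=n_cells))

print(np.allclose(G @ m_min_norm, data))                   # True
print(np.allclose(G @ m_alternative, data))                # True
print(np.linalg.norm(m_min_norm - m_alternative))          # large: the two models disagree badly

# A "boundary condition": penalize roughness so that smoother models are preferred.
D = np.diff(np.eye(n_cells), axis=0)                       # first-difference operator
alpha = 1.0                                                # arbitrary regularization weight
A = np.vstack([G, alpha * D])
b = np.concatenate([data, np.zeros(D.shape[0])])
m_regularized, *_ = np.linalg.lstsq(A, b, rcond=None)      # one smooth model that fits the data

Real inversions face the same non-uniqueness, only with non-linear physics and vastly more unknowns, which is part of why generative models that encode geological plausibility look attractive as a way to constrain the search.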

The team busy trying to apply the variational autoencoder to this task is based in belgium, and they uploaded a preprint of their paper, Deep generative models in inversion: a review and development of a new approach based on a variational autoencoder, to arXiv, which I got to have a close look at. Another specific challenge with subsurface data is that, as the abstract notes, it is often non-linear. That is, the measured values don't tend to change in an even, steady way or hold steady for very long. The changes derive from the subsurface structure. In day to day life a reasonable analogy would be the way light passes through a translucent but very cracked and irregular crystal. Turn it under a strong light, and it is often possible to make out cracks or parts of cracks, flashes from small areas of smooth breakage. But you certainly couldn't use it in place of a magnifying glass or other lens. When it comes to the modelling, the trouble is that the research team is trying to apply linear calculations to generate models that will mimic this non-linear subsurface behaviour. The data I used to work with was prepared in part by identifying "zones" in the datasets. That is, areas where the measurements showed evidence primarily of reflections, versus areas where there were diffractions, areas where the soundwaves scattered in many directions. The mathematics for modelling those two cases are quite different, and often in the cases I worked with, rather than try to model those diffracting areas too closely, which the computers couldn't handle, we focussed on the reflection-dominated spaces and strove to collect data more densely where diffractions dominated. That way it could become possible to model those more difficult areas on a small enough scale to find reflection-dominated areas again, or else confirm that the diffractions were coming from a discrete structure.
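
For readers wondering what a variational autoencoder actually looks like in code, here is a minimal sketch in python using the pytorch library. To be clear, this is not the belgian team's architecture: the layer sizes, the 64-cell model, and the loss weighting are arbitrary choices made only to show the shape of the idea. Once a network like this has been trained on a library of geologically plausible models, its decoder turns a short latent vector into a full model, so an inversion can search over that small latent space instead of adjusting every cell of the model directly.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyModelVAE(nn.Module):
    """Toy VAE over 1-D 'subsurface' models with 64 cells; all sizes are arbitrary."""
    def __init__(self, n_cells=64, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_cells, 32), nn.ReLU())
        self.to_mu = nn.Linear(32, latent_dim)
        self.to_logvar = nn.Linear(32, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, n_cells))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction error plus a penalty keeping latent codes close to a standard normal.
    reconstruction = F.mse_loss(x_hat, x, reduction="sum")
    kl_divergence = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return reconstruction + kl_divergence

# Minimal usage: a batch of four fake models, one forward pass, one loss value.
vae = TinyModelVAE()
x = torch.randn(4, 64)
x_hat, mu, logvar = vae(x)
loss = vae_loss(x, x_hat, mu, logvar)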

Some of this should actually sound familiar by now, although the very different terminology and the wandering off to talk about diffractions and reflections might make it seem very much otherwise. At the moment, there is a massive speculative industry predicated on mass surveillance. The idea this industry's authoritarian proponents are selling is that "machine learning" will allow them to correctly predict human behaviour based on relatively few "measurements." But the more information they are able to gather on each person, the more sure they are that the predictions will be good, and therefore that they will be able to guarantee the creation of irresistible advertising and the enforcement of total obedience, which is itself usually marketed as "security." It's a crazy idea, but fundamentally an argument based on what looks at first glance like the same sort of data and modelling system as the geophysical one. It's just that they are trying to model human behaviour. Never mind that it isn't possible to model a "standard human" with a few tweaked details to make the "standard human" model produce predictions for a certain type of specific person. It won't work in general, but it incentivizes more surveillance that can be abused for other purposes. Now, maybe this sort of "human behavioural inversion" could work for certain narrow cases, such as in the context of team sports leagues. In that case there is lots of at least semi-structured data pertaining to specific behaviour for known ends, as opposed to all sorts of data recorded for thousands of people. Bear in mind that the people really flogging "big data" say they want to model people not already well known, not hyperfocus on individuals who tend to have their own habits and therefore to be relatively predictable on that scale. (Top)

The Outrageous Origins of the Invention of Modern "Just War" (2023-09-04)

Illustration by Albert Robida from the book Le vingtième siècle, published in paris by Georges Decaux in 1883 via oldbookillustrations.com.

For reasons very few of us are unfamiliar with, the question of "just war" keeps coming up. Never mind that practically speaking, as Jeannette Rankin observed, "You can no more win a war than an earthquake." It probably doesn't help that the "just war" question is typically, and dishonestly, conflated with whether it is justified to fight in self-defence. The conflation is especially dishonest when men try it, because at this particular patriarchal moment, "justice system" after "justice system" has effectively told women and girls that if they fight back, they will be charged and treated with the utmost severity of the law, with no real recourse unless their plight draws serious public interest. The men get themselves in serious knots when they try to maintain racist and classist versions of the same thing that they want to use to keep down the racialized and poor members of their own sex. So there are quite notable disjunctures on the question, not so much about what counts as fighting to defend oneself as about who is allowed to fight for themselves. People deemed to be lower in the current power hierarchies are also denied the right to defend themselves against those above them in the hierarchy, while being encouraged to turn on those below them. This is hardly new; serious scholarly explorations of who gets a license to freely defend themselves when attacked, who gets to commit violence freely on whom, and who gets to order others to commit violence for any reason are old topics of discussion and research. But relatively speaking, the notion of a "just war" is quite newfangled. I had no idea how newfangled until I read this in Ward Churchill's essay collection Words Like Weapons: Selected Essays in Indigenism, 1995-2005 (PM Press, 2017, page 39).

The criteria for a Just War is defined quite narrowly in international law. As early as 1539, [Francisco de] Vitoria asserted there were only three: 1) when a native people refused to admit Christian missionaries among them, or 2) arbitrarily refused to engage in commerce with the discovering power, or 3) mounted an unprovoked physical attack against the power's representatives or subjects.

The year is rather telltale, 1539, when a post-facto rationalization for relentless and completely unhinged violence against Indigenous nations was desperately wanted for propaganda purposes. Note that refusing to trade, or refusing to admit the cultural warfare practitioners referred to as missionaries, are defined as pretexts for a supposedly "just war." Not that it is surprising that europeans decided that somebody else not wanting their stuff was a great reason to make total war on them, since there is this odd insistence in european and european-derived culture that if they want something the other guy has, they ought to be allowed to have it. Not only that, if they offer something else for it, they ought to be praised and heaped with more in return than what they offered is worth to them, compared to how much they expect to get. This is a demented way of thinking, and it has contributed directly to the serious environmental, social, and political mess so much of the european-enculturated world is in right now. Everyone has the right to say no and insist on maintaining their borders. There can be a lot of damn good reasons people would not want to trade specific things they make, or substances or beings they use on their own lands, or carefully leave alone on their own lands for that matter. I have an example in mind, but first, it is worth filling in a bit more information about this Francisco de Vitoria character.

I doubt it will surprise anyone to read that he was a cleric and legal scholar – I was tempted to refer to him as a lawyer, but today that tends to imply someone who practices law more than someone who studies it, even though a lawyer may certainly do both. My brief perusal of secondary sources makes it quite clear that Vitoria taught theology and law, writing copiously on both, but he does not seem to have done the sorts of things lawyers generally do outside of the academy. Among Vitoria's works, written as was the standard then in latin, is an extensive series of what he called "theological reflections." There is a printed edition in english translation scanned and readily available for viewing at the internet archive, under the title De Indis et de ivre belli relectiones, being parts of Reflectiones Theologicae XII, Franciscus de Victoria. The specific section on "just wars" begins on page 302, and may be found under several closely related latin or english titles depending on the editor or translator in other books. I found extracts with the titles "De indis insularis," "De Indis noviter inventis," or "Relictio de Indis Recenter Inventis." There is a more modern translation of this excerpt posted at the Alexander Hamilton Institute as well.

Okay, now for the example of something it is quite understandable the actual locals didn't want to trade. There are many, many others, but this one has always stuck in my mind, because I have been to the place, seen the still acutely visible aftermath of the consequences of the trade, and seen the creepy way the site has been made into a sort of tourist draw. It is famous as the deadliest rockslide in canada, and you can visit and spend time at the Frank Slide Interpretive Centre to "Enjoy engaging storytelling, interactive displays, gripping accounts, and award-winning shows." There are still an unknown number of people buried under the rubble of this slide. As even the canadian encyclopedia notes, "The Blackfoot and Kutenai people knew Turtle Mountain as 'the mountain that moves.' Their stories say they wouldn't camp near it." Turtle Mountain had, and still has, layers of low grade coal within it. The settler town of frank was founded by people who mined the coal. Removing the coal undermined the structural integrity of part of Turtle Mountain, and a dreadful winter with heavy precipitation and a sharp freeze contributed to the face of the mountain shearing off and burying the nearby town. The Blackfoot and Ktunaxa peoples knew not to mess with the mountain or camp close by, and they certainly didn't advocate removing its coal measures. They knew what they were talking about. But if they had been deemed able to prevent the mining and sale of the coal, they would have been treated as at war with the settler state. (Top)

There is No Such Thing as Free Hosting (2023-08-28)

An illustration from a 1919 illustrated edition of Edgar Allan Poe's Tales of Mystery and Imagination, captioned "I had myself no power to move from the upright position I had assumed." The original artist was Harry Clarke, the publisher George Harrop in London, and the source oldbookillustrations.com.

There may be complimentary hosting, although it is an exceedingly rare thing now for "internet service providers" to offer it, even though it used to be standard, if only in minimal amounts. Even universities have decided, at least so it seems in my small experience of them, that they will not provide any sort of web hosting for students anymore, cutting off local hosting of blogs, small and intentionally ephemeral academic projects, and the like. I know that in many cases a person may fill in a form and request new space or affirm they will continue using the space they already have, but those bureaucratic steps are imposed all the same. The message is clear: "we don't want to host anything that isn't controlled by the university administration." Then there are the hosting sites that claim to be free, but very often they have interesting fine print. In the early days of the corporate invasion of the web, the fine print declared that if anything a person produced got really profitable and worth exploiting for cash, the web host owned it and all the profits. Just like that. But then that got a little too obviously exploitative and greedy, and I suspect it also helped ensure those hosts began sprouting a wild collection of heinously ugly and often malicious-software spreading sites. The latest version of free hosting is mostly predicated on supporting "free" blogging, while insisting on the "right" to constantly surveil and invade the privacy of the blogger and all their visitors. This is a far cry from even the most minimal complimentary hosting with early paid web hosting companies, which in my experience were actually middlemen charging a modest markup for registering domain names. They did that and threw in some storage that a website could be served from as well, which made total sense, because for the richer domain buyers that provided a way to put up a temporary site while refurbishing or setting up the main site for the first time. For the not so rich domain buyers, it was a great way to get started posting online independent of pseudo-free hosting.

By the time the Moonspeaker went live on its own host in the modest complimentary storage and hosting space that came along with its original hosting and domain registration plan, there were other concerns to deal with. Like managing privacy concerns by ensuring the web host kept their servers in a sensible country. This was before the new-fangled resellers of amazon web service space and microsoft azure equivalents began springing up. This was back when a website would be remarkable if it was too big to fit on the very new, insanely expensive usb flash drives in their middle storage sizes, then a mere 256 megabytes. It was not so long ago that a 3.5" disc with maybe 1.4 megabytes was an astonishing amount of portable storage, after all. There was quite a bit of excitement around the improved support for images and sound files online, although both were encumbered by inappropriate patents and the corrosive influence of microsoft proprietary formats. Of course, it was the horrible animated gifs, offensive advertising banners, and the never should have been implemented in any way blinking text tag in the adolescent version of html that also drove many people to stick to the hosting they could get if they had an internet service provider.

Still, it was tough to do anything but use dubious free hosting or find a way to finance hosting to the tune of five to ten dollars a month for a basic small site. Five to ten dollars is hardly nothing, but quite feasible for a person with enough income wiggle room to accommodate say, a daily store bought coffee habit. On the other hand, uncomfortable rumbles indicated the ranks of people who had that kind of wiggle room were thinning in general, let alone among those interested in maintaining a website long term. On top of that, new self-declared professionals began claiming that to have a "successful" website, there had to be elaborate multimedia assets (a telltale word), those rapidly space-eating images and sounds, and soon videos. Furthermore, they insisted that nobody wanted to read online, so the text should be subordinated to the spectacle, and drafted to allow easy insertion of propaganda, because supposedly nobody wanted to pay for anything online. Lots of wild generalizations that depended on throwing around terms that supposedly everyone had agreed on the meanings of: a successful website, the uses of the web, what "everybody" supposedly wants and will or won't pay for. They added a side of, "everyone should be using a content management system and/or blogging software to implement a good design and manage files," because supposedly none of this could be done in a website without it. So of course, many people interested in sharing their writing and interests online began to get spooked, and prone to conclude that learning even just plain html was going to be too hard for them, and that any website needed lots of space and an expensive hosting plan unless they could somehow take advantage of a generous free hosting plan.

When it comes to the coding side of making websites, there has been a whole lot of effort put into persuading people that they should never look at the source code, that it is too hard to do, or even that doing so is "hacking," by which they actually mean "cracking." I despise this kind of mystification. It is so full of contempt for the people it is deployed against. A web browser that doesn't have a built in "page source" or "view source" option that can be accessed by keyboard shortcut or pull down menu selection isn't worth the code it was written in. But even if it doesn't, it is simple to save a copy of the web page, open that copy in a text editor (even the awful microsoft notepad will do), and have a look around. The code in web pages is free for anyone to look at and reuse; in principle it is far more like the regular letters and numbers we read and write than the so-called "intellectual property" nonsense that corporate profiteers and rent seekers are so obsessed with trying to persuade us must exist because it will support their desire to fill their pockets with money by exploiting others. Just as it is not right to simply slap your name on somebody else's poem, picture or whatever and claim you made or composed it instead of them, it is not right to just take over somebody else's web design or make a near facsimile of their site. But it is fair and reasonable to seek to learn how to design a site along similar lines if you admire them, or even, perhaps especially, if you don't. It is a wonderful demystification exercise, if nothing else.

The mystification drive is wider than this of course. After all, what makes a website successful very much depends on what it is for. I am happy to concede that if a site is meant to be a propaganda platform flogging cheap products, the measure of its success will be sales numbers and net profit. But the web was not originally or wholly envisioned as a new way to implement shoddy late night cable television. Many of us saw wonderful opportunities to share such things as our research, more accessible transcriptions of hard to find documents, and of course lots of wild community building among people who were at least interested in, if not fans of, various more or less obscure things. People who considered themselves fans of whatever popular culture marketing phenomenon hoped to win at least some kind of respect, to be treated as the customer they were being guided to consider themselves to be, rather than sheep to be sheared. There is a bittersweet element in acknowledging some of the problems of late capitalism people hoped to solve by taking advantage of fast and easy communication and information sharing using the web, as opposed to some of the other services on the internet that were not always as friendly to image and sound, let alone newcomers. In all those other cases, the measure of success was and still is quite different. At one time a key measure was whether your website was part of multiple related webrings, for example. Some webmasters still try to measure success by the quality of their comment sections, if they have them and some moderators to help keep things on track.

A funny thing about that whole "people don't want to read online" and "people don't want to pay for stuff online" business. Well, not so funny, because both claims are blatantly false. That people insist on trying to share supposedly "pirated" content online, or that they refuse to read many failing newspapers and magazines that add paywalls to their sites, does tell us about what people won't read or pay for online. But what that tells us is not the general whine that the big media corporations claim. People are rightfully resistant to paying for shoddy propaganda, and that is what the web is almost completely overwhelmed with. News websites that actually provide real journalism do indeed win monetary support from their readers, although yes, they may have to suffer some lean days, and no, they are not likely to make a thirty percent or higher rentier profit like the established media do now. Respect people's intelligence, drop the propaganda and actually provide real information and creative storytelling, and people will certainly read, or watch, or listen. If artificial barriers are not placed to prevent them from participating constructively in their own right, like pseudo-free hosting and pretending that building websites is too hard and the only reason to build and share a website is to make money, people go right ahead and make amazing things. And best of all, it is now unnecessary to have somebody else host a small website you may be interested in making at all. For http/https sites, there are excellent guides to self-hosting using a single-board computer at The Cheapskate's Guide. For those interested in making gemini capsules, a quite new format, there are already a number of guides to self-hosting available, such as at Host Things Yourself. Or have a look at the FreedomBox project, or even drop by your local public library and see what they have going on. You are likely to be delightfully surprised. (Top)

Copyright © C. Osborne 2024
Last Modified: Monday, January 01, 2024 01:26:21