John Ankarström
Articles written by John Ankarström.
http://john.ankarstrom.se/

http://john.ankarstrom.se/explorer-git-ide/
Using Windows Explorer as a Git IDE

I have a couple of different desktop computers: a custom-built PC running Windows, an iMac G3 running Mac OS 9 and an old HP microtower running Alpine Linux, acting as a home server of sorts. Of these, I spend most of my time on the Windows PC. I use Windows for the majority of my work, including a large part of the programming that I do.

Even though I mainly use Windows, most of my web/software development is grounded in or at least inspired by Unix in some way: most obviously, I use Git, which is a tool designed for the Unix shell. This turns out to be a rather awkward match: on the one hand, I use Windows Explorer to manage the files; on the other hand, I have a command prompt open to manage the repository.

For my workflow, it would be much better if the Git commands were integrated into Explorer itself. So I added what amounts to the following code to my AutoHotkey script:

GroupAdd, Explorer, ahk_class CabinetWClass
GroupAdd, Explorer, ahk_class ExploreWClass

#IfWinActive ahk_group Explorer

!a::Run, % "cmd /c cd /d "qp()" & git add "qip()" && git status & pause"
!c::Run, % "cmd /c cd /d "qp()" & git commit & pause"
!+c::Run, % "cmd /c cd /d "qp()" & git commit --amend & pause"
!f::Run, % "cmd /c cd /d "qp()" & git diff & pause"
!+f::Run, % "cmd /c cd /d "qp()" & git diff "qip()" & pause"
!i::Run, % "cmd /c cd /d "qp()" & git diff HEAD~1 HEAD & pause"
!+i::Run, % "cmd /c cd /d "qp()" & git diff HEAD~1 HEAD "qip()" & pause"
!l::Run, % "cmd /c cd /d "qp()" & git log & pause"
!+l::Run, % "cmd /c cd /d "qp()" & git log "qip()" & pause"
!p::Run, % "cmd /c cd /d "qp()" & git push & pause"
!r::Run, % "cmd /c cd /d "qp()" & git reset "qip()" && git status & pause"
!s::Run, % "cmd /c cd /d "qp()" & git status & pause"

#IfWinActive

; Return the Shell.Application COM object for the given (or active) Explorer window.
Explorer(hwnd := "")
{
    ShellApp := ComObjCreate("Shell.Application")
    if (hwnd = "")
        WinGet, hwnd, id, A
    for window in ShellApp.Windows
        if (window.hwnd = hwnd)
            return window
    return -1
}

; Quoted path of the folder shown in the active Explorer window.
qp()
{
    return """" Explorer().Document.Folder.Self.path """"
}

; Quoted path of the item currently focused in the active Explorer window.
qip()
{
    return """" Explorer().Document.FocusedItem.path """"
}

(The actual code is contained in this file, most of it near the bottom.)

This adds a bunch of hotkeys to any Explorer window:

Alt-A
"git add" the selected file; then "git status"
Alt-C
"git commit"
Alt-Shift-C
"git commit --amend"
Alt-F
"git diff"
Alt-Shift-F
"git diff" the selected file
Alt-I
"git diff", comparing the previous and current commit
Alt-Shift-I
"git diff" the selected file, comparing the previous and current commit
Alt-L
"git log"
Alt-Shift-L
"git log" the selected file
Alt-P
"git push"
Alt-R
"git reset" the selected file; then "git status"
Alt-S
"git status"

All of these hotkeys open a new command prompt window, displaying the results of the Git command, which the user can close by pressing any key.

In my experience, these hotkeys make it incalculably less bothersome to work with Git on Windows, and I think it shows how incredibly useful even very simple AutoHotkey scripts can be. If you have any suggestions on other hotkeys, please write a comment below.

http://john.ankarstrom.se/html2/
Static versus dynamic web sites

Some time ago, I wrote a short article called "Writing HTML in HTML", where I explained why I use plain HTML to write, edit and maintain my web site. As I've been living with my decision to eschew both dynamic content management systems and static site generators for more than a year, I've had the chance to reflect upon it, and I think it is time for me to update and clarify my original vision.

In this post, I want to explore two fundamental principles or criteria that underpinned my original article, but were more or less unspoken: sustainability and power. I also want to update you on my current site configuration.

Before I begin, I should say that these criteria are my criteria: they reflect what I value, and they don't necessarily line up with what other people value. I think there are others who might value the same things as I do; others still will hopefully find the perspective interesting, even if they don't ultimately agree.

First criterion: sustainability

The criterion of sustainability corresponds to the following question:

How difficult will it be for me to maintain – over time –
    a) the entire site, and
    b) any given page on the site?

In my life, this is a rather harsh rule. I know that every extra step I need to perform in order to do something is going to make me avoid doing it more. For this reason, I will consider even minor things – like needing to install a single extra piece of software on my computer, even if it takes only a second to install – to be lowering the sustainability of the web site. Relevant here is not only the effort required for me to edit the site, but also how likely the site is to merely survive over long periods of time, through server migrations and infrastructure changes.

WordPress

As such, I am not talking about "ease of use". WordPress is likely the least difficult way to run a blog, but it is not the most sustainable way, because the effort required to maintain a WordPress site over long periods of time is higher than for a folder of static HTML files. To migrate a WordPress site to a new server, you need to export the WordPress database from MySQL on the old server, set up MySQL on the new server and import the old database. This may not be a huge effort, but it is not as simple, physically or mentally, as copying a folder; thus it lowers the sustainability of the site.
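For contrast, the two migrations can be sketched in shell. The WordPress commands are illustrative placeholders (hypothetical user and database names); the static-site half is simulated locally with plain directories:

```shell
# Migrating WordPress (hypothetical names, shown for contrast):
#   mysqldump -u wpuser -p wordpress > wordpress.sql   # on the old server
#   mysql -u wpuser -p wordpress < wordpress.sql       # on the new server
#   ...plus copying the PHP files and adjusting wp-config.php

# Migrating a static site: the whole site is one folder, so the whole
# migration is one copy. Simulated locally:
mkdir -p old-server/site
echo '<html><body>Hello</body></html>' > old-server/site/index.html

mkdir -p new-server
cp -R old-server/site new-server/site   # the entire "migration"
```

The static copy needs no server-side software beyond a web server, which is the point of the sustainability criterion.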

Jekyll

On the other hand, a generated folder of static HTML files is not sustainable just because it can easily be copied to a new server without having to worry about "invisible" databases. While it is easy to migrate the site to a new server, it is not effortless to update the contents of the site, because you can't re-generate a Jekyll-driven site if you don't have Jekyll installed on the computer. Furthermore, it is not uncommon for there to be differences in behavior between different versions of the same generator, making it difficult to maintain the site over long periods of time; if new versions of your static site generator introduce breaking changes, your site is not sustainable, because you will have to update its code whenever you update the generator.

HTTPS

Sustainability is also one of the many reasons why my web site is served over HTTP and not HTTPS. The problem with HTTPS is that you need to get a certificate from a trusted authority, which either costs money or is free but needs to be renewed every month; and even if you pay for it, you need to renew the certificate every year or so. If the operation of your blog depends on a monthly certificate renewal from Let's Encrypt – even if the renewal is set up to be automatic – it is less sustainable than serving your site over HTTP. HTTP will always be around, will always be supported and will never require a certificate.

Another problem with HTTPS is that it is very difficult to revert back to HTTP once you have started using HTTPS. Every outside link to your web site will be incorrect, and you can't redirect HTTPS traffic to HTTP without a valid certificate. In other words, once you start using HTTPS, you will be forever dependent on the certificate renewal process.

The more a web site resembles a folder of ungenerated, static HTML files, served over HTTP, the more sustainable it is. As long as the web exists, there will be web servers capable of serving static HTML files without requiring any extra setup; static HTML is foolproof. With that said, there are other things you can generally count on being available without extra setup. One such thing is PHP. A folder of mostly static HTML pages, enhanced by PHP includes, is nearly as sustainable as a folder of completely static HTML pages.

Second criterion: power

The second criterion, power, corresponds to the following question:

How limited am I in my creation and maintenance of
    a) the structure of the entire site, and
    b) any given page on the site?

This is also a rather important rule in my life, at least as far as technology is concerned. I like doing things the way I like doing them, especially when it comes to a creative endeavor – which building a web site is. I don't like to be hindered by the software I use, unless the software is good enough to outweigh the irritation provoked by its limitations.

Streamlining

To be fair, to make the creation and maintenance of a site easier and more powerful in some regards, it seems necessary to introduce limitations in other regards. For example, if I want an identical navigation menu on every page, it is wise to use a PHP include or some type of templating engine; but this, in turn, limits my power to customize the navigation menu of each page specifically according to that page's needs. In streamlining the site's navigation menu, some freedom is lost.

This type of streamlining is not a bad idea. The problem with most content management systems and static site generators is that they streamline too much. If I implement a common navigation menu by putting a PHP include on every page that needs it, then I can choose not to include it on any given page. But if I implement the navigation menu by enforcing a single common template for all pages – which is what most CMSs and SSGs do – then it is much harder to leave the menu off selected pages. If I need to do so, it is possible, but I am forced to work around the system.

Structure

Likewise, static site generators like Jekyll usually make it very hard to freely choose a directory structure for your site. They are almost always built for very simple purposes: all blog posts go in _posts, and more complex directory structures are usually very difficult to implement, if not impossible. CMSs are generally more flexible, but they too can be difficult to bend to your specific needs. Ultimately, in terms of flexibility, a folder of static, ungenerated HTML beats everything else, especially when combined with an .htaccess file.

Markup language

CMSs and SSGs share the tendency to avoid plain HTML: most CMSs use a WYSIWYG editor, while SSGs usually use Markdown. This is an obvious and severe limitation: many things in this very article – such as the red text or the right-aligned headings – are impossible to write in Markdown (without reverting to actual HTML inside the Markdown document). But as I mentioned in my original post, there are actually HTML editors that don't suck, even WYSIWYG editors. I am using KompoZer, one such editor, to write this very post.

If you use a CMS or SSG, you constantly have to battle its limitations. You should consider whether its benefits are great enough to justify this struggle. They might be – but I suspect that, for many technologically talented people, they are not.

My current solution

Perhaps my own web site (the one you're visiting right now) can serve as inspiration as to how to build a web site according to these criteria. It is currently implemented in the following way:

  • The site is a folder of mostly static HTML files, enhanced by zero, one or two PHP includes, depending on the page:

    • Navigation menu
    • Comments section

    The comments system, written in PHP, works by appending new comments to an HTML file located in the same directory as the page being commented on. This HTML file is included via PHP on the page. This system is not nearly as efficient as a database-driven one, but it is drastically more sustainable. (You can try out the system below.)

  • I have an RSS feed implemented as a PHP script, which pulls the posts to include in the feed from the site's front page and subsequently extracts the contents of each post from their respective file. The front page is updated manually.

  • All files upon which the various pages on the site depend – the stylesheet, the navigation menu script and the comments script – are put in a folder at the site's root named 1. If I ever choose to re-write the site, I can keep this 1 folder, so that the old pages still work, and put the new files, like the stylesheets and scripts for the new pages, into another folder named 2 – and so on forever.

  • In my initial article, I put forward the idea of putting all files upon which a page depends in that page's directory. Pages would then be self-contained. I ultimately decided against this, because a) it is useful for some pages to share some common files, such as a stylesheet, and b) if the stylesheet always resides at the same URL, browsers will save it in the cache.
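The append-only idea behind the comments system can be sketched in shell (the site's real implementation is PHP; the file layout and markup here are hypothetical). Each page's folder carries a comments.html fragment, and posting a comment simply appends to it:

```shell
# Each page directory carries its own comments file (a bare HTML fragment).
mkdir -p site/some-post
: > site/some-post/comments.html

# "Posting" a comment appends one list item to that fragment.
add_comment() {
    dir=$1 author=$2 body=$3
    printf '<li><b>%s</b>: %s</li>\n' "$author" "$body" >> "$dir/comments.html"
}

add_comment site/some-post "Alice" "Great post!"
add_comment site/some-post "Bob" "Thanks for sharing."

cat site/some-post/comments.html
```

The page itself then pulls the fragment in with a single PHP include; because everything lives in ordinary files next to the page, migrating the site remains a plain copy.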
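The feed script's approach can be sketched similarly (the real script is PHP, and the markup here is hypothetical): pull the post links out of the front page, then read each post's own file for its contents:

```shell
# A stand-in front page, where posts are listed by hand.
cat > index.html <<'EOF'
<a class="post" href="/explorer-git-ide/">Using Windows Explorer as a Git IDE</a>
<a class="post" href="/html2/">Static versus dynamic web sites</a>
EOF

# Extract the post URLs; a feed script would then open each post's file
# and copy its contents into the feed's <item> elements.
posts=$(grep -o 'href="[^"]*"' index.html | sed 's/^href="//;s/"$//')
echo "$posts"
```

Because the front page is the single source of truth, updating it by hand automatically updates the feed.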

As you can see, I have introduced some dynamism to the site, most notably the comments system. (It has bothered me that very few blogs support comments nowadays. How am I supposed to respond to a blog post? E-mail the author? What usually happens is that the discussion takes place elsewhere – on Reddit, Lobsters, Hacker News etc. – and not on the blog post itself. Is this good?)

Conclusion: Static HTML is still king, but it benefits greatly from a little dynamism, written in PHP, to provide useful things, such as an RSS feed or a comments system.

http://john.ankarstrom.se/ibm-g96-focus/
Adjusting focus on an IBM G96

[Image: IBM G96]

I recently bought a used IBM G96 CRT monitor, in fairly good condition. The only problem was the blurry picture. Luckily, I was able to fix this by adjusting a pair of screws inside the monitor. Because I couldn’t find any service manual for the G96, I thought I’d share some pictures of how it’s done, so that it will be easier for other people to figure out.

My IBM G96 is still blurrier than my Eizo FlexScan T68, but adjusting the focus made it a lot better.

Warning: The insides of a CRT monitor are very dangerous to touch. Even if the monitor is turned off, components may still carry charges of over 20,000 volts. And even if you were to manually discharge everything, you still need to turn the monitor back on while you are adjusting the focus in order to judge the results. Please use this page only as a supplement to other sources, such as service manuals or more thorough online tutorials. With that said, if you’re careful not to touch any unknown objects, use an insulated screwdriver and keep your other hand behind your back, you should be fine.

Here is the general procedure:

  1. Unplug the monitor completely.
  2. Remove the four screws holding the cover in place.
  3. Remove the cover (I recommend placing the monitor face-down for this).
  4. Plug the monitor back into the computer and turn it on.
  5. Adjust the focus screws with an insulated screwdriver, all the while checking if the sharpness improves.
  6. Once you’ve found the best setting, unplug the monitor again and put it back together.

While you are adjusting the focus, use the highest supported resolution (1920×1600). When you are done, I recommend using a lower resolution, such as 1024×768, as the G96 is still not powerful enough to display crystal-clear text at 1920×1600.

The procedure is mostly straightforward and common for most, if not all, CRT monitors. There are, however, a few things specific to the IBM G96 that I wanted to cover.

How to remove the cover

The cover is held in place by four screws – one pair on each side of the monitor. To reach these, you first need to remove the small piece of plastic that covers each screw, pictured below.

Remove the screw cover with a screwdriver.

As shown above, you can remove each cover by carefully prying it open with a flat-tip screwdriver. This will reveal the screw, pictured below.

A screw that holds the cover in place.

Remove the covers of all four screws and unscrew them with a cross-tip screwdriver. Then, after putting the monitor face-down, lift the cover up. Be sure to slide it all the way along the VGA cable, so that you can set it aside somewhere it won't be in the way.

Where to find the focus adjustment screws

Once you’ve removed the cover, you can put the monitor bottom-down again.

The monitor without its cover.

As you can see, the insides of the monitor are concealed by a big metal box. Luckily, you don't need to remove this box. The focus adjustment screws are accessible through a hole near the bottom-left.

Close-up of the focus adjustment screw.

Above is a close-up of the hole. You can see the VGA cable coming out of the monitor on the left. If you look closely, you’ll see two screws, one labeled “1”, the other (slightly covered in the picture above) labeled “2”.

Both screws adjust focus! This is not the case for all CRT monitors – some have just a single focus screw and another screw called the “screen” screw (which on this monitor is accessible through the small hole at the bottom, pictured above) – but the IBM G96 has two focus screws, and you’ll need to adjust both.

Use an insulated cross-tip screwdriver to turn the screws. Because there are two screws, which interact with each other in a non-obvious way, it is possible to completely “lose track” of their correct positions in relation to one another. If this happens (as it once did for me!), keep fiddling with both screws to try to get back to where you were before – you haven’t broken anything, you just need to find your way back.

An earlier version of this page stated that there was only one focus screw. This is what I first thought, until I tried adjusting screw “2” (which I originally thought was the screen screw). When I adjusted both screws instead of just one, I was able to get much better clarity.

How to put the cover back

Once you’ve adjusted the focus and achieved as satisfactory a result as you can get, simply slide the cover back on, but be aware that there are “rails” at the bottom of the monitor that the cover must slide into. Keep an eye on these and make sure that the cover goes inside and not outside of them.

Good luck! I’ll leave you with a close-up of my screen after the focus adjustment. It’s hard to take a good photograph of the results, but the adjustment was generally successful.

A picture of the screen after focus adjustment.

http://john.ankarstrom.se/zoom/
Is Zoom really that bad?

As everyone seems to have noticed, with the current necessity to work and study from home, video conferencing services have become very popular. Especially striking is Zoom, a name that many people in the last two weeks have heard for the first time. Now, Zoom’s increasing popularity has been accompanied by some criticism and skepticism, especially regarding the service’s practices around privacy. But to what extent does Zoom really exploit the privacy of its users?

The criticism directed at Zoom can be divided into two categories:

  • that regarding accidental security vulnerabilities, and
  • that regarding intentional features and practices related to privacy.

The first category can largely be ignored. It is obvious that no company wishes that their products have security vulnerabilities. It is necessary to point these out and to criticize the company if they don’t fix them, but accidental flaws are not an evil that can be criticized as an inherent part of Zoom’s business model.

That leaves Zoom’s intentional privacy practices. Here too, it is relevant to divide the notion of “privacy” into two distinct categories:

  • “specific” privacy, i.e., that between you and your boss/teacher/co-worker/co-student/etc., and
  • “general” privacy, i.e., that between you, Zoom and other anonymous internet corporations.

There are many complaints directed at Zoom related to specific privacy. For example, conference hosts have the optional ability to be alerted whenever a guest has had the Zoom window inactive for more than 30 seconds. Like the criticism regarding security flaws, the criticism regarding “specific” privacy is largely irrelevant to the question at hand – namely, that of whether Zoom exploits the privacy of its users. Obviously, I don’t like that the host has this ability, but I dislike the host, who deliberately activates this ability, much more than I dislike Zoom, who merely provides it.

I think that what people should be more worried about is the question of general privacy. Is Zoom just another Facebook, another Google, another Microsoft/Skype, whose business model depends on the collection of my personal information?

The answer is no, for a couple of reasons:

  1. Their privacy policy, which I find fairly decent in general, outlines rather clearly what data they collect and for which purposes.
  2. They don’t use collected data for advertising. Their marketing websites, if you visit them, do collect data for advertising purposes, but the Zoom service as such does not.
  3. You don’t need to register an account in order to participate in meetings.

I think the last point is huge. All of the big, privacy-invasive companies – Facebook, Google, Microsoft – without exception require account registration. Facebook even requires your real, full name. Google requires your phone number. But even if Zoom were to use the data it collects about me for advertising purposes, they do not know who I am. I have never given them my name (apart from the alias I choose to be identified by in a meeting), nor my phone number, nor even my e-mail address. I think it’s worth giving Zoom credit here.



Now, there are perhaps some warning bells. Most glaringly, they have advertised their service as end-to-end encrypted, using an – ahem – unconventional definition of end-to-end encryption. And there’s no telling how the company is going to act in the future, especially if it becomes the de-facto video conferencing standard. Additionally, there is the fact that Zoom is non-free, closed-source software, which in an ideal world wouldn’t be the case.

But all in all, I’m currently fairly happy with Zoom. Not only is it not actively hostile to its users’ privacy, but it also is much more accessible than any viable alternative. It supports Windows XP SP3, Mac OS 10.7 and many distributions and versions of Linux. It supports iOS, Android and even BlackBerry. Among browsers, it supports Safari 7, Chrome 30, Firefox 27 and Internet Explorer 11. In other words, even if you haven’t updated your browser in the last six years, or your operating system in the last twenty, you can still use Zoom!

Anyway, I’m just happy that I don’t have to use Skype. Things could be much worse.

http://john.ankarstrom.se/html/
Writing HTML in HTML

I've just finished the final rewrite of my website. I'm not lying: this is the last time I'm ever going to do it. This website has gone through countless rewrites – from WordPress to Jekyll to multiple static site generators of my own – but this is the final one. I know so, because I've found the ultimate method for writing webpages: pure HTML.

It sounds obvious, but when you think about how many static site generators are being released every day – the list is practically endless – it's far from obvious. Drew DeVault recently challenged people to create their own blog, and he didn't even mention the fact that one could write it in pure HTML:

If you want a hosted platform, I recommend write.as. If you're technical, you could build your own blog with Jekyll or Hugo. GitHub offers free hosting for Jekyll-based blogs.

Now, there's nothing wrong with Jekyll or Hugo; it's just interesting that HTML doesn't even get a mention. And of course, I'm not criticizing Drew; I think the work he's doing is great. But, just like me and you, he is a child of his time.

That's why I'm writing this blog post – to turn the tide just a little bit.


So what are the benefits of writing HTML in HTML?

There's one less level of indirection.

This point is simple, but hugely important.

Using a static site generator means that you have to keep track of two sources: the actual Markdown source and the resulting HTML source. This may not sound too difficult, but always having to make sure that these two sources are in line with each other takes a mental toll. Now, when I write in HTML, I only have to keep track of one source.

Further, you actually need to have your static site generator installed. Again, not a huge thing, but if you often switch between different operating systems, this is yet another chore. With HTML, you just need a web browser – which, if you're creating a website, you need anyway!

Finally, you constantly have to work around the limitations of your static site generator. Let's say you want your site to have a specific directory structure, where pages are sorted under various categories. With Jekyll, this is practically impossible, and even though you technically can get it working if you really try, it takes much effort, and the source ends up organized far less intuitively than if you had just written it directly in HTML.

These seemingly small things tend to add up, and when you know that there are three or four extra things you have to think about before you write another blog post, there's a higher threshold to start writing.

And that's something that I've noticed: with nothing but pure HTML, there is no threshold. When I used a static site generator, I always had to do a dozen small things – start the auto-refresh server, research how to do something – before I was ready to do anything. Now, creating a new theme, a new post, a new page or even a new site requires no setup – I just open up an HTML document and start writing!

So what's the catch? There has to be some reason why people don't write their personal websites in pure HTML. Well, it's simple:

HTML is unpleasant to write.

This is the only real reason. And it's true – HTML is a pain to write! But the solution, I argue, isn't to use other languages that are then translated to HTML (we've seen above how many problems that causes); the solution is to use better editors.

The best HTML editor I've found is actually the WYSIWYG Composer part of SeaMonkey. As long as the source is HTML 4.01 (which, for a personal blog, is surely sufficient), it can edit any HTML document. It's what I'm using right now to write this post, and despite its age and a couple of quirks, it works really well.

Screenshot of SeaMonkey's Composer.

Another very promising alternative, which isn't WYSIWYG, but more of a WYSIWYM editor, goes under the name HTML Notepad.

If you don't want a WYSIWYG editor, I'm sure that modern IDEs have reasonable support for HTML.

In any case, once you start writing a post, you'll notice that it actually isn't so bad, as long as you have an editor that is more modern than vi (no offense to vi users – I use it as my main editor myself).

Doesn't this mean that I have to type a bunch of boilerplate every time I create a new blog post?

My simple answer is: just copy it. My more advanced answer is this:

  1. Make blog posts and pages self-contained – in other words, have each post or page reside in its own folder, with its own scripts and stylesheets.
  2. When you want to write another post or page, copy the folder of an already existing post or page and edit it.
  3. If you find the previous step too much work, write a shell script that copies the directory and removes the old content for you.
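Step 3 might look something like the following sketch (the file layout and names here are hypothetical): copy an existing post's folder, stylesheet and all, then blank out the old article body so only the boilerplate remains.

```shell
set -e

# Demo setup so the sketch is self-contained: one existing,
# self-contained post folder.
mkdir -p posts/old-post
cat > posts/old-post/index.html <<'EOF'
<html><head><title>Old post</title><link rel="stylesheet" href="style.css"></head>
<body><h1>Old post</h1><p>Old content.</p></body></html>
EOF
: > posts/old-post/style.css

src=posts/old-post
dst=posts/new-post

cp -R "$src" "$dst"
# Remove the old body text; the title and boilerplate are edited by hand.
sed 's|<p>.*</p>||' "$dst/index.html" > "$dst/index.html.tmp" &&
    mv "$dst/index.html.tmp" "$dst/index.html"
```

The new folder inherits the stylesheet and markup of the old post, which is exactly the "copy an existing post" workflow described above, minus the manual deleting.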
But how can I then keep the style and layout of all my posts and pages in sync?

Simple: don't! It's more fun that way. Look at this website: if you read any previous blog post, you'll notice that they have a different stylesheet. This is because they were written at different times. As such, they're like time capsules.

Update (October 2020): I've now updated the design of the entire site (mostly), because some pages had style sheets that depended on modern CSS features and were incompatible with older browsers – so you won't really see any time capsules if you explore the site, at least for now.

In summary, I don't think this post will convince everyone – and it's not written for everyone. It's written for those who have found themselves in the same situation as me: regularly rewriting their website, fighting with their static site generator. For these people, I think pure HTML is the best choice.

Read more: Response from Lobste.rs

http://john.ankarstrom.se/learning-c/
Learning C as an uneducated hobbyist

If you’re like me, you’ve never studied computer science, or anything even related to technology. You’ve never worked with computers, you’ve barely, if ever, produced a single line of code professionally; you’ve just always been interested – in your free time, as a hobby.

Maybe you started with HTML and CSS, building your own website, picking up bits of JavaScript along the way, eventually learning basic Unix shell commands for convenience. Making your way through the world of Unix-based web development, perhaps via Mac OS X or GNU/Linux, you began to use scripting languages like PHP, Perl, Ruby or Python. You learned Git and started to explore Vim and Emacs.

Today, if you’re anything like me, you feel pretty confident in the skills you’ve built up: you can spin up a website in no time, you know your way around a Unix-like operating system, you write clever shell scripts to make your life easier. You run GNU/Linux as your main OS, or perhaps even a variant of BSD.

Yet there is one mountain that you haven’t ever been able to climb. The big one. The capital letter. C.


To people like me, C feels almost mythical: a language that offers unparalleled power for the cost of unparalleled danger. Of course, C isn’t actually mythical; it’s been mythologized. For people who are forced to learn C at university, this isn’t a problem, but people without any formal education often never get the chance to discover that C is, after all, just another programming language – one that requires no magic method or special knowledge to learn.

Many who give advice on how to best learn C are people who’ve already learned it and have long experience with it. Almost without exception, they tend to recommend reading The C Programming Language by K&R.

I, in contrast, am writing this as I’m just starting to get into C for real, meaning I have begun to feel reasonably comfortable with it and have started to use it as a go-to language for certain tasks. I haven’t read K&R, and if you and I are the same type of person, I wouldn’t recommend that you do, either.

At least not the whole thing right now. Think about how you learned the other languages you know. What was the first thing you did? To pick up a book? No! You tried to use it. Now think about at which point you felt that you were starting to actually learn the language. For me, it was always when I started using it to write small, useful programs, often ones that I could’ve written in another language, or perhaps ones that I already had written in another language.

None of that changed for C. I’m learning it exactly like I’ve learned all other programming languages. The only thing that has changed is where I look for information. Don’t worry, Stack Overflow is still one of my most important resources, but I complement it with my operating system’s man pages.

Whenever I find an answer to my question on the internet, I check out the man page for every function that I need to call. This makes sure that I never miss anything important, like checking errors or freeing allocated memory. Other times, I’m not confused about specific functions, but about larger concepts. In these cases, I do find it useful to look up the topic in K&R – it’s not a bad book, it’s just that it’s hard to learn something by merely reading and doing irrelevant exercises.

The point of this text is to emphasize that people learn differently, especially if they have different levels of education. While university-educated programmers might learn best from a book, self-taught programmers most likely don’t. This is something to keep in mind when giving advice to people who are learning a programming language or, indeed, when learning one yourself.


But what if I learn bad practice?

Well … there is a difference between programming and driving. Driving, at its heart, is an unconscious activity – you don't have to think when you're changing gears; you just do it, even while talking to someone on the phone or listening to the radio – and once you really learn a bad practice, like holding down the clutch pedal into an intersection, it's very hard to unlearn, because your driving isn't conscious.

Programming, however, is conscious. It’s an activity in which you have to think in order to act. Unlearning bad practice in programming takes no energy at all apart from that spent being told that the practice is bad and coming to understand and remember it. Once you’ve done that, it’s almost impossible to make the same mistake again.

That’s why you shouldn’t be afraid of learning “along the way”, “as you go” or “in an ad-hoc manner” because “you might learn bad practice”. If you learn the wrong thing, you can learn the right thing later. After all, you’re not a professional programmer. It doesn’t matter very much if you make a mistake; your job doesn’t depend on it.

http://john.ankarstrom.se/separation-of-concerns/ Have your separation of concerns and eat it too

I recently came across a discussion about functional CSS, and it made me think about separation of concerns. See, I realized that both sides of the conflict are equally right: those who advocate functional CSS and those who prefer the traditional method use the same argument.

In order to explain, I will first try to clarify what I believe each side’s argument to be.

1. Functional CSS

If we begin with functional CSS, I believe that its ultimate goal is this:

Your CSS should be reusable, and your HTML replaceable; if you create a good-looking stylesheet for one page, you should be able to use it for another page without having to modify the CSS.

The consequence of this argument is that all CSS should be strictly presentational and not depend on any specific semantic organization of the HTML.

For example, if you wanted to specify a style for buttons, you would create a CSS rule for .button:

.button { /* looks like a button */ }

Very reasonable! Now, you just have to add that class to everything in your HTML structure that should be styled as a button:

<a href="login" class="button">Log in</a>

With the popularity of CSS frameworks like Bootstrap, it is obvious that this discipline has met with much success and won many hearts. If there are a dozen different CSS stylesheets that define .button styles, you can change the appearance of your HTML page by simply switching out the stylesheet.

2. Traditional discipline

Now, let’s explore the argument against functional CSS:

Your HTML should be reusable, and your CSS replaceable; if you create an HTML document, you should be able to change its appearance without having to modify the HTML.

As you can see, this is the exact same argument, except it’s been turned on its head. The consequence of this argument is that all HTML should be strictly semantic and not depend on any specific stylesheet.

For example, the aforementioned login button should rather be given a semantic id:

<a href="login" id="login">Log in</a>

And to define its appearance, you should create a CSS rule for #login rather than for .button:

#login { /* looks like a button */ }

This earns us the benefit of not having to change the semantic structure of our document whenever we might want to change its presentation. If somebody decides that our login link should look not like a button, but like a normal link, he or she needn’t modify the HTML to achieve this. This is reasonable: because the appearance of the link is a presentational issue, it belongs in the CSS, not in the HTML.

3. A comparison

The traditional discipline aligns nicely with the role of HTML as a semantic definition of a document – but on the other hand, the functional CSS perspective respects the role of CSS as a presentational description of a document.

As for “separation of concerns”, we must admit it to be the cornerstone of both disciplines; they just approach it from different angles, and in doing so, each appears to miss the other’s point:

In the traditional method, while presentational issues are kept out of the semantic organization of the HTML, the semantics are bound to infect the presentational matters of the CSS. And while functional CSS keeps semantics out of the presentational description, it fills the semantic description with tons of presentational concerns.

The result is that neither discipline enforces separation of concerns.

4. A solution

If genuine separation of concerns is what we truly desire – that is, a strict dividing line between semantics and presentation – then I think I have a solution:

Presentational description:

@mixin button { /* looks like a button */ }

Semantic description:

<a href="login" id="login">Log in</a>

Semantic–presentational link:

#login { @include button }

In text:

  1. Forbid semantic[1] information in the presentational description.
  2. Forbid presentational information in the semantic description.
  3. Create a linking stylesheet that defines the connection between the presentation and the semantics.

This way, you can replace either part, as long as you modify the linking stylesheet accordingly.

I have started to use this discipline – I guess we can call it “extreme separation of concerns” – for my own website, and in my experience it works relatively well.


Footnotes

  1. Note that I use the word “semantic” here to refer to the structure of the HTML. I don’t mean that the presentational stylesheet mustn’t describe elements like a or code – these selectors have very little to do with the document’s specific structure and are likely to be used identically in all HTML documents. I mean that it shouldn’t describe things like #login, .post-meta time or #header .subtitle.

    Personally, I do try to avoid even a and code in my presentational description, but this isn’t necessary for the discipline to work.
