Journal tags: medium

The syndicate

Social networks come and social networks go.

Right now, there’s a whole bunch of social networks coming (Blewski, Freds, Mastication) and one big one going, thanks to Elongate.

Me? I watch all of this unfold like Doctor Manhattan on Mars. I have no great connection to any of these places. They’re all just syndication endpoints to me.

I used to have a checkbox in my posting interface that said “Twitter”. If I wanted to add a copy of one of my notes to Twitter, I’d enable that toggle.

I have, of course, now removed that checkbox. Twitter is dead to me (and it should be dead to you too).

I used to have another checkbox next to that one that said “Flickr”. If I was adding a photo to one of my notes, I could toggle that to send a copy to my Flickr account.

Alas, that no longer works. Flickr only allows you to post 1000 photos before requiring a pro account. Fair enough. I’ve actually posted 20 times that amount since 2005, but I let my pro membership lapse a while back.

So now I’ve removed the “Flickr” checkbox too.

Instead I’ve now got a checkbox labelled “Mastodon” that sends a copy of a note to my Mastodon account.

When I publish a blog post like the one you’re reading now here on my journal, there’s yet another checkbox that says “Medium”. Toggling that checkbox sends a copy of my post to my page on Ev’s blog.

At least it used to. At some point that stopped working too. I was going to start debugging my code, but when I went to the documentation for the Medium API, I saw this:

This repository has been archived by the owner on Mar 2, 2023. It is now read-only.

I guess I missed the memo. I guess Medium also missed the memo, because developers.medium.com is still live. It proudly proclaims:

Medium’s Publishing API makes it easy for you to plug into the Medium network, create your content on Medium from anywhere you write, and expand your audience and your influence.

Not a word of that is accurate.

That page also has a link to the Medium engineering blog. Surely the announcement of the API deprecation would be published there?

Crickets.

Moving on…

I have an account on Bluesky. I don’t know why.

I was idly wondering about sending copies of my notes there when I came across a straightforward solution: micro.blog.

That’s yet another place where I have an account. They make syndication very straightforward. You can go to your account and point to a feed from your own website.

That’s it. Syndication enabled.

It gets better. Micro.blog can also cross-post to other services. One of those services is Bluesky. I gave permission to micro.blog to syndicate to Bluesky so now my notes show up there too.

It’s like dominoes falling: I post something on my website which updates my RSS feed which gets picked up by micro.blog which passes it on to Bluesky.

I noticed that one of the other services that micro.blog can post to is Medium. Hmmm …would that still work given the abandonment of the API?

I gave permission to micro.blog to cross-post to Medium when my feed of blog posts is updated. It seems to have worked!

We’ll see how long it lasts. We’ll see how long any of them last. Today’s social media darlings are tomorrow’s Friendster and MySpace.

When the current crop of services wither and die, my own website will still remain in full bloom.

Add view transitions to your website

I must admit, when Jake told me he was leaving Google, I got very worried about the future of the View Transitions API.

To recap: Chrome shipped support for the API, but only for single page apps. That had me worried:

If the View Transitions API works across page navigations, it could be the single best thing to happen to the web in years.

If the View Transitions API only works for single page apps, it could be the single worst thing to happen to the web in years.

Well, the multi-page version still hasn’t shipped in Chrome stable, but it is available in Chrome Canary behind a flag, so it looks like it’s almost here!

Robin took the words out of my mouth:

Anyway, even this cynical jerk is excited about this thing.

Are you the kind of person who flips feature flags on in nightly builds to test new APIs?

Me neither.

But I made an exception for the View Transitions API. So did Dave:

I think the most telling predictor for the success of the multi-page View Transitions API – compared to all other proposals and solutions that have come before it – is that I actually implemented this one. Despite animations being my bread and butter for many years, I couldn’t be arsed to even try any of the previous generation of tools.

Dave’s post is an excellent step-by-step introduction to using view transitions on your website. To recap:

Enable these two flags in Chrome Canary:

chrome://flags#view-transition
chrome://flags#view-transition-on-navigation

Then add this meta element to the head of your website:

<meta name="view-transition" content="same-origin">

You could stop there. If you navigate around your site, you’ll see that the navigations now fade in and out nicely from one page to another.

But the real power comes with transitioning page elements. Basically, you want to say “this element on this page should morph into that element on that page.” And when I say morph, I mean morph. As Dave puts it:

Behind the scenes the browser is rasterizing (read: making an image of) the before and after states of the DOM elements you’re transitioning. The browser figures out the differences between those two snapshots and tweens between them similar to Apple Keynote’s “Magic Morph” feature, the liquid metal T-1000 from Terminator 2: Judgement Day, or the 1980s cartoon series Turbo Teen.

If those references are lost on you, how about the popular kids’ book series Animorphs?

Some classic examples would be:

  • A thumbnail of a video on one page morphs into the full-size video on the next page.
  • A headline and snippet of an article on one page morphs into the full article on the next page.

I’ve added view transitions to The Session. Where I’ve got index pages with lists of titles, each title morphs into the heading on the next page.

Again, Dave’s post was really useful here. Each transition needs a unique name, so I used Dave’s trick of naming each transition with the ID of the individual item being linked to.

In the recordings section, for example, there might be a link like this on the index page:

<a href="/recordings/7812" style="view-transition-name: recording-7812">The Banks Of The Moy</a>

Which, if you click on it, takes you to the page with this heading:

<h1><span style="view-transition-name: recording-7812">The Banks Of The Moy</span></h1>

Why the span? Well, like Dave, I noticed some weird tweening happening between block and inline elements. Dave solved the problem with width: fit-content on the block-level element. I just stuck in an extra inline element.

Anyway, the important thing is that the name of the view transition matches: recording-7812.

I also added a view transition to pages that have maps. The position of the map might change from page to page. Now there’s a nice little animation as you move from one page with a map to another page with a map.
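
Because there’s only ever one map on a page, a single shared name does the trick. Something along these lines (the class name is my own illustrative stand-in, not the actual markup on the site):

.map {
  view-transition-name: map;
}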

thesession.org View Transitions

That’s all good, but I found myself wishing that I could have just those enhancements. Every single navigation on the site was triggering a fade in and out—the default animation. I wondered if there was a way to switch off the default fading.

There is! That default animation is happening on a view transition named root. You can get rid of it with this snippet of CSS:

::view-transition-image-pair(root) {
  isolation: auto;
}
::view-transition-old(root),
::view-transition-new(root) {
  animation: none;
  mix-blend-mode: normal;
  display: block;
}

Voila! Now only the view transitions that you name yourself will get applied.

You can adjust the timing, the easing, and the animation properties of your view transitions. Personally, I was happy with the default morph.

In fact, that’s one of the things I like about this API. It’s another good example of declarative design. I say what I want to happen, but I don’t need to specify the details. I’ll let the browser figure all that out.

That’s what’s got me so excited about this API. Yes, it’s powerful. But just as important, it’s got a very low barrier to entry.

Chris has gathered a bunch of examples together in his post Early Days Examples of View Transitions. Have a look around to get some ideas.

If you like what you see, I highly encourage you to add view transitions to your website now.

“But wait,” I hear you cry, “this isn’t supported in any public-facing browser yet!”

To which, I respond “So what?” It’s a perfect example of progressive enhancement. Adding one meta element and a smidgen of CSS will do absolutely no harm to your website. And while no-one will see your lovely view transitions yet, once browsers do start shipping with support for the API, your site will automatically get better.

Your website will be enhanced. Progressively.

Update: Simon Pieters quite rightly warns against adding view transitions to live sites before the API is done:

in general, using features before they ship in a browser isn’t a great idea since it can poison the feature with legacy content that might break when the feature is enabled. This has happened several times and renames or so were needed.

Good point. I must temper my excitement with pragmatism. Let me amend my advice:

I highly encourage you to experiment with view transitions on your website now.

Talks and workshops at UX London 2023

Back in November of last year I announced that UX London would be returning in 2023 and that I’d be curating the line-up again. That’s where I’ve been putting a lot of my energy over the last six months.

The line-up is complete. If I step back and try to evaluate it objectively, I’ve gotta say …hot damn, that’s a fine roster of speakers!

Imran Afzal, Vimla Appadoo, Daniel Burka, Trine Falbe, Vitaly Friedman, Mansi Gupta, Stephen Hay, Asia Hoe, Amy Hupe, Paul Robert Lloyd, Stacey Mendez, Ignacia Orellana, Stefanie Posavec, Hannah Smith, and David Dylan Thomas.

Take a look at the complete schedule—a terrific mix of thought-provoking talks and practical hands-on workshops.

On day one, you’ve got these talks:

Then on day two:

And that’s just the talks! You’ve also got these four excellent workshops on both days:

That’s a lot of great stuff packed into two days!

In case you haven’t guessed, I am very excited about this year’s UX London. I would love to see you there.

As a token of appreciation for putting up with my child-like excitement, I’d like to share a discount code with you. You can get 20%—that’s one fifth!—off the ticket price using the code CLEARLEFT20.

But note that the standard ticket pricing ends on Friday, May 26th so use that code in the next week to get the most bang for your buck. After that, there’ll only be last-chance tickets, which cost more.

Looking forward to seeing you at Tobacco Dock on June 22nd and 23rd!

Nailspotting

I’m sure you’ve heard the law of the instrument: when all you have is a hammer, everything looks like a nail.

There’s another side to it. If you’re selling hammers, you’ll depict a world full of nails.

Recent hammers include cryptobollocks and virtual reality. It wasn’t enough for blockchains and the metaverse to be potentially useful for some situations; they staked their reputations on being utterly transformative, disrupting absolutely every facet of life.

This kind of hype is a terrible strategy in the long-term. But if you can convince enough people in the short term, you can make a killing on the stock market. In truth, the technology itself is superfluous. It’s the hype that matters. And if the hype is over-inflated enough, you can even get your critics to do your work for you, broadcasting their fears about these supposedly world-changing technologies.

You’d think we’d learn. If an industry cries wolf enough times, surely we’d become less trusting of extraordinary claims. But the tech industry continues to cry wolf—or rather, “hammer!”—at regular intervals.

The latest hammer is machine learning, usually—incorrectly—referred to as Artificial Intelligence. What makes this hype cycle particularly infuriating is that there are genuine use cases. There are some nails for this hammer. They’re just not as plentiful as the breathless hype—both positive and negative—would have you believe.

When I was hosting the DiBi conference last week, there was a little section on generative “AI” tools. Matt Garbutt covered the visual side, demoing tools like Midjourney. Scott Salisbury covered the text side, showing how you can generate code. Afterwards we had a panel discussion.

During the panel I asked some fairly straightforward questions that nobody could answer. Who owns the input (the data used by these generative tools)? Who owns the output?

On the whole, it stayed quite grounded and mercifully free of hyperbole. Both speakers were treating the current crop of technologies as tools. Everyone agreed we were on the hype cycle, probably the peak of inflated expectations, looking forward to reaching the plateau of productivity.

Scott explicitly warned people off using generative tools for production code. His advice was to stick to side projects for now.

Matt took a closer look at where these tools could fit into your day-to-day design work. Mostly it was pretty sensible, except when he suggested that there could be any merit to using these tools as a replacement for user testing. That’s a terrible idea. A classic hammer/nail mismatch.

I think I moderated the panel reasonably well, but I have one regret. I wish I had first read Baldur Bjarnason’s new book, The Intelligence Illusion. I started reading it on the train journey back from Edinburgh but it would have been perfect for the panel.

The Intelligence Illusion is very level-headed. It is neither pro- nor anti-AI. Instead it takes a pragmatic look at both the benefits and the risks of using these tools in your business.

It has excellent advice for spotting genuine nails. For example:

Generative AI has impressive capabilities for converting and modifying seemingly unstructured data, such as prose, images, and audio. Using these tools for this purpose has less copyright risk, fewer legal risks, and is less error prone than using it to generate original output.

Think about transcripts of videos or podcasts—an excellent use of this technology. As Baldur puts it:

The safest and, probably, the most productive way to use generative AI is to not use it as generative AI. Instead, use it to explain, convert, or modify.

He also says:

Prefer internal tools over externally-facing chatbots.

That chimes with what I’ve been seeing. The most interesting uses of this technology that I’ve seen involve a constrained dataset. Like the way Luke trained a language model on his own content to create a useful chat interface.

Anyway, The Intelligence Illusion is full of practical down-to-earth advice based on plenty of research backed up with copious citations. I’m only halfway through it and it’s already helped me separate the hype from the reality.

Hosting DIBI

I was up in Edinburgh for the past few days at the Design It; Build It conference.

I was supposed to come back on Saturday but then the train strikes were announced so I changed my travel plans to avoid crossing a picket line, which gave me an extra day to explore Auld Reekie.

I spoke at DIBI last year so this time I was there in a different capacity. I was the host. That meant introducing the speakers and asking them questions after their talks.

I’m used to hosting events now, what with UX London and Leading Design. But I still get nervous beforehand. At least with a talk you can rehearse and practice. With hosting, it’s all about being nimble and thinking on your feet.

I had to pay extra close attention to each talk, scribbling down potential questions to ask. It’s similar to the feeling I get when I’m liveblogging talks.

There were some line-up changes and schedule adjustments along the way, but everything went super smoothly. I pride myself on running a tight ship so the timings were spot-on.

When it came to the questions, I tried to probe under the skin of each presentation. For some talks, that involved talking shop—the finer points of user research or the design process, say. But for the big-picture talks, I made sure to get each speaker to defend their position. So after Dan Makoski’s kumbaya-under-capitalism talk, I gave him a good grilling. Same with Philip Lockwood-Holmes who gave me permission beforehand to be merciless with him.

It was all quite entertaining. Alas, I think I may have put the fear of God into the other speakers who saw me channeling my inner Jeremy Paxman. But they needn’t have worried. I also lobbed some softballs. Like when I asked Levon Sharrow from Patagonia if there was such a thing as ethical consumption under capitalism.

I had fun, but I was also aware of that fine line between being clever and being an asshole. Even though part of my role was to play devil’s advocate, I tried to make sure I was never punching down.

All in all, an excellent couple of days spent in good company.

Hosting was hard work, but very rewarding. I’ve come to realise it’s one of those activities that comes relatively easy to me, but it is very hard (and stressful) for others. And I’m pretty gosh-darned good at it too, false modesty bedamned.

So if you’re running an event but the thought of hosting it fills you with dread, we should talk.

Tragedy

There are two kinds of time-travel stories.

There are time-travel stories that explore the many-worlds hypothesis. Going back in time and making a change forks the universe. But the universe is constantly forking anyway. So effectively the time travel is a kind of universe-hopping (there’s a big crossover here with the alternative history subgenre).

The problem with multiverse stories is that there’s always a reset available. No matter how bad things get, there’s a parallel universe where everything is hunky dory.

The other kind of time travel story explores the idea of a block universe. There is one single timeline.

This is what you’ll find in Tenet, for example, or for a beautiful reduced test case, the Ted Chiang short story What’s Expected Of Us. That gets straight to the heart of the biggest implication of a block universe—the lack of free will.

There’s no changing what has happened or what will happen. In fact, the very act of trying to change the past often turns out to be the cause of what you’re trying to prevent in the present (like in Twelve Monkeys).

I’ve often referred to these single-timeline stories as being like Greek tragedies. But only recently—as I’ve been reading quite a bit of Greek mythology—have I realised that the reverse is also true:

Greek tragedies are time-travel stories.

Hear me out…

Time-travel stories aren’t actually about physically travelling in time. That’s just a convenience for the important part—information travelling in time. That’s at the heart of most time-travel stories: information from the future travelling back to the past.

William Gibson’s The Peripheral—very much a many-worlds story with its alternate universe “stubs”—takes this to its extreme. Nothing physical ever travels in time. But in an age of telecommuting, nothing has to. Our time travellers are remote workers.

That book also highlights the power dynamics inherent in information wealth. Knowledge of the future gives you an advantage that you can exploit in the past. This is what Mark Twain’s Connecticut Yankee does in King Arthur’s court.

This power dynamic is brilliantly inverted in Octavia Butler’s Kindred. No amount of information can help you if your place in society is determined by the colour of your skin.

Anyway, the point is that information flow is what matters in time-travel stories. Therefore any story where information travels backwards in time is a time-travel story.

That includes any story with a prophecy. A prophecy is information about the future, like:

Oedipus will kill his father and marry his mother.

You can try to change your fate, but you’ll just end up triggering it instead.

Greek tragedies are time-travel stories.

Innovation

I did an episode of the Clearleft podcast on innovation a while back:

Everyone wants to be innovative …but no one wants to take risks.

The word innovation is often bandied about in an unquestioned positive way. But if we acknowledge that innovation is—by definition—risky, then the exhortations sound less positive.

“We provide innovative solutions for businesses!” becomes “We provide risky solutions for businesses!”

I was reminded of this when I saw the website for the Podcast Standards Project. The original text on the website described the project as:

…a grassroots coalition working to establish modern, open standards, to enable innovation in the podcast industry.

I pushed back on that wording (partly because I’ve seen the word “innovation” used as a smoke screen for user-hostile practices like tracking and surveillance). The wording has since changed to:

…a grassroots coalition dedicated to creating standards and practices that improve the open podcasting ecosystem for both listeners and creators.

That’s better. It’s more precise.

Am I nitpicking? Only if you think that “innovation” and “improvement” are synonyms. I don’t think they are.

Innovation implies change. Improvement implies positive change.

Not all change is positive. Not all innovation is positive.

Innovation goes hand in hand with disruption. Again, disruption involves change. But not necessarily positive change.

Think about the antonyms of change and disruption: stasis and stability. Those words don’t sound very exciting, but in some arenas they’re exactly what you should be aiming for; arenas like infrastructure or standards.

Not to get all pace layers-y here, but it seems to me that every endeavour has a sweet spot for innovation. For some projects, too little innovation is bad. For others, too much innovation is worse.

The trick is knowing which kind of project you’re working on.

(As a side note, I think some people use the word innovation to describe the generative, divergent phase of a design project: “how might we come up with innovative new approaches?” But we already have a word to describe the practice of generating novel and interesting ideas. That word isn’t innovation. It’s creativity.)

The intersectionality of web performance

Web performance is an unalloyed good. No one has ever complained that a website is too fast.

So the benefit is pretty obvious. Users like fast websites. But there are other benefits to web performance. And they don’t all get equal airtime.

Business

A lot of good web performance practices come down to the first half of Postel’s Law: be conservative in what you send. Images, fonts, JavaScript …remove what you don’t need and optimise the hell out of what’s left.
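
To make that concrete, here’s the kind of low-hanging fruit I’m talking about (a generic sketch with placeholder file names):

<!-- Size your images, serve modern formats, lazy-load anything offscreen -->
<img src="hero.avif" width="800" height="600" loading="lazy" alt="…">
<!-- Subset your fonts and don't block rendering while they load -->
<style>
  @font-face {
    font-family: Body;
    src: url("body-subset.woff2") format("woff2");
    font-display: swap;
  }
</style>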

That can translate to savings. If you’re paying for the bandwidth every time a hefty file is downloaded, your monthly bill could get pretty big.

So apart from the indirect business benefits of happy users converting to happy customers, there can be a real nuts’n’bolts bottom-line saving to be made by having a snappy website.

Sustainability

This is related to the cost-savings benefit. If you’re shipping less stuff down the wire, and you’re optimising what you do send, then there’s less energy required.

Whether less energy directly translates to a smaller carbon footprint depends on how the energy is being generated. If your servers are running on 100% renewable energy sources, then reducing the output of your responses won’t reduce your carbon footprint.

But there’s an energy cost at the other end too. Think of all the devices making requests to your server. If you’re making those devices work hard—by downloading, parsing, executing lots of JavaScript, for example—then you’re draining battery life. And you can’t guarantee that the battery will be replenished from renewable energy sources.

That’s why sites like the website carbon calculator have so much crossover with web performance:

From data centres to transmission networks to the billions of connected devices that we hold in our hands, it is all consuming electricity, and in turn producing carbon emissions equal to or greater than the global aviation industry. Yikes!

Inclusivity

There comes a point when a slow website isn’t just inconvenient, it’s inaccessible.

I’ve always liked the German phrase for accessible: barrierefrei—free of barriers. With every file you add to a website’s dependencies, you’re adding one more barrier. Eventually the barrier is insurmountable for people with older devices or slower internet connections. If they can no longer access your website, your website is quite literally inaccessible.

Making the case

I’ve noticed that when it comes to making the argument in favour of better web performance, people often default to the business benefits.

I get it. We’re always being told to speak the language of business. The psychology seems pretty straightforward; if you think that the people you’re trying to convince are mostly concerned with the bottom line, use the language of commerce to change their minds.

But that’s always felt reductive to me.

Sure, those people almost certainly do care about the business. Who doesn’t? But they’re also humans. I feel like if you really want to convince them, speak to their hearts. Show them the bigger picture.

Eliel Saarinen said:

Always design a thing by considering it in its next larger context; a chair in a room, a room in a house, a house in an environment, an environment in a city plan.

I think the same could apply to making the case for web performance. Don’t stop at the obvious benefits. Go wider. Show the big-picture implications.

Spring

Spring is arriving. It’s just taking its time.

There are little signs. Buds on the trees. The first asparagus of the year. Daffodils. Changing the clocks. A stretch in the evenings. But the weather remains, for the most part, chilly and grim.

Reality is refusing to behave like a fast-forward montage leading up to a single day when you throw open the curtains and springtime is suddenly there in all its glory.

That’s okay. I can wait. I’ve had a lot of practice over the past three years. We all have. Staying home, biding time, saving lives.

But hunkering down during The Situation isn’t like taking shelter during an air raid. There isn’t a signal that sounds to indicate “all clear!” It’s more like going from Winter to Spring. It’s slow, almost imperceptible. But it is happening.

I’ve noticed a subtle change in my risk assessment over the past few months. I still think about COVID-19. I still factor it into my calculations. But it’s no longer the first thing I think of.

That’s a subtle change. It doesn’t seem like that long ago when COVID was at the forefront of my mind, especially if I was weighing up an excursion. Is it worth going to that restaurant? How badly do I want to go to that gig? Should I go to that conference?

Now I find myself thinking of COVID as less of a factor in my decision-making. It’s still there, but it has slowly slipped down the ranking.

I know that other people feel differently. For some people, COVID slipped out of their minds long ago. For others, it’s still very much front and centre. There isn’t a consensus on how to evaluate the risks. Like I said:

It’s like when you’re driving and you think that everyone going faster than you is a maniac, and everyone going slower than you is an idiot.

COVID-19 isn’t going away. But perhaps The Situation is.

The Situation has been gradually fading away. There isn’t a single moment where, from one day to the next, we can say “this marks the point where The Situation ended.” Even if there were, it would be a different moment for everyone.

As of today, the COVID-19 app officially stops working. Perhaps today is as good a day as any to say Spring has arrived. The season of rebirth.

Assumption

While I’m talking about the SVGs on The Session, I thought I’d share something else related to the rendering of the sheet music.

Like I said, I use the brilliant abcjs JavaScript library. It converts ABC notation into sheet music on the fly, which still blows my mind.

If you view source on the rendered SVG, you’ll see that the path and rect elements have been hard-coded with a colour value of #000000. That makes sense. You’d want to display sheet music on a light background, probably white. So it seems like a safe assumption.
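
Something like this (a representative fragment for illustration, not the actual abcjs output):

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 240 40">
  <!-- every shape arrives with its colour baked in -->
  <path d="M0 20 H240" stroke="#000000" fill="none"></path>
  <rect x="40" y="10" width="1" height="20" fill="#000000"></rect>
</svg>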

Ah, but when it comes to front-end development, assumptions are like little hidden bombs just waiting to go off!

I got an email the other day:

Hi Jeremy,

I have vision problems, so I need to use high-contrast mode (using Windows 11). In high-contrast mode, the sheet-music view is just black!

Doh! All my CSS adapts just fine to high-contrast mode, but those hardcoded hex values in the SVG aren’t going to be affected by high-contrast mode.

Stepping back, the underlying problem was that I didn’t have a full separation of concerns. Most of my styling information was in my CSS, but not all. Those hex values in the SVG should really be encoded in my style sheet.

I couldn’t remove the hardcoded hex values—not without messing around with JavaScript beyond my comprehension—so I made the fix in CSS:

[fill="#000000"] {
  fill: currentColor;
}
[stroke="#000000"] {
  stroke: currentColor;
}

That seemed to do the trick. I wrote back to the person who had emailed me, and they were pleased as punch:

Well done, Thanks!  The staff, dots, etc. all appear as white on a black background.  When I click “Print”, it looks like it still comes out black on a white background, as expected.

I’m very grateful that they brought the issue to my attention. If they hadn’t, that assumption would still be lying in wait, preparing to ambush someone else.

Workaround

Two weeks ago, I wrote:

I woke up today to a very annoying new bug in Firefox. The browser shits the bed in an unpredictable fashion when rounding up single pixel line widths in SVG. That’s quite a problem on The Session where all the sheet music is rendered in SVG. Those thin lines in sheet music are kind of important.

Paul Rosen, who makes abcjs, the JavaScript library that renders sheet music on The Session, managed to get a fix out pretty quickly. But I use an older version of the library and updating it would introduce some side-effects that would take me a while to work around. So that option wasn’t available to me.

In this situation, when the problem is caused by a browser bug, the correct course of action is to file a bug with the browser. That had already been done. Now all I could do was twiddle my thumbs and wait for the next release of the browser, which would hopefully ship with the fix.

But I figured I may as well try to find a temporary workaround in the meantime.

At first, I looked at diving into the internals of the JavaScript—that’s where the instructions are given for drawing the SVGs.

But then I stopped and thought, “If the problem is with the rendering of the SVG, maybe CSS can help.”

I started messing around with SVG-specific CSS properties like stroke, fill, and so on. With dev tools open, I started targeting the paths that acted as bar lines in the sheet music, playing around with widths, opacities, and fills.

It was the debugging equivalent of throwing spaghetti at the wall. Remarkably, it actually worked.

I found a solution with this nonsensical bit of CSS:

stroke: currentColor;
stroke-opacity: 0;

For some reason, rather than making all the barlines disappear, this ensured they were visible.
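
In context, the full rule would look something like this (the selector is hypothetical; the real one targeted the paths acting as bar lines in the generated SVG):

/* Hypothetical selector: the real rule targeted the bar-line paths */
.sheet-music path.barline {
  stroke: currentColor;
  stroke-opacity: 0;
}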

It’s the worst kind of hacky fix—the kind where you have no idea why it works, but it does.

So I shipped it.

And at pretty much exactly the same time, a new version of Firefox dropped …with the bug fixed.

I can’t deny that there was a certain satisfaction in being able to work around a browser bug. But there’s much more satisfaction in deleting the hacky workaround when it’s no longer needed.

Remote

Before The Situation, I used to work in the Clearleft studio quite a bit. Maybe I’d do a bit of work at home for an hour or two before heading in, but I’d spend most of my working day with my colleagues.

That all changed three years ago:

Clearleft is a remote-working company right now. I mean, that’s hardly surprising—just about everyone I know is working from home.

Clearleft has remained remote-first. We’ve still got our studio space, though we’ve cut back to just having one floor. But most of the time people are working from home. I still occasionally pop into the studio—I’m actually writing this in the studio right now—but mostly I work out of my own house.

It’s funny how the old ways of thinking have been flipped. If I want to get work done, I stay home. If I want to socialise, I go into the studio.

For a lot of the work I do—writing, podcasting, some video calls, maybe some coding—my home environment works better than the studio. In the Before Times I’d have to put on headphones to block out the distractions of a humming workplace. Of course I miss the serendipitous chats with my co-workers but that’s why it’s nice to still have the option of popping into the studio.

Jessica has always worked from home. Our flat isn’t very big but we’ve got our own separate spaces so we don’t disturb one another too much.

For a while now we’ve been thinking that we could just as easily work from another country. I was inspired by a (video) chat I had with Luke when he casually mentioned that he was in Cyprus. Why not? As long as the internet connection is good, the location doesn’t make any difference to the work.

So Jessica and I spent the last week working in Ortygia, Sicily.

It was pretty much the perfect choice. It’s not a huge bustling city. In fact it was really quiet. But there was still plenty to explore—winding alleyways, beautiful old buildings, and of course plenty of amazing food.

The time difference was just one hour. We used the extra hour in the morning to go to the market to get some of the magnificent local fruits and vegetables to make some excellent lunches.

We made sure that we found an AirBnB place with a good internet connection and separate workspaces. All in all, it worked out great. And because we were there for a week, we didn’t feel the pressure to run around to try to see everything.

We spent the days working and the evenings having a nice sundowner aperitivo followed by some pasta or seafood.

It was simultaneously productive and relaxing.

Read-only web apps

The most cartoonish misrepresentation of progressive enhancement is that it means making everything work without JavaScript.

No. Progressive enhancement means making sure your core functionality works without JavaScript.

In my book Resilient Web Design, I quoted Wilto:

Lots of cool features on the Boston Globe don’t work when JS breaks; “reading the news” is not one of them.

That’s an example where the core functionality is readily identifiable. It’s a newspaper. The core functionality is reading the news.

It isn’t always so straightforward though. A lot of services that self-identify as “apps” will claim that even their core functionality requires JavaScript.

Surely I don’t expect Gmail or Google Docs to provide core functionality without JavaScript?

In those particular cases, I actually do. I believe that a textarea in a form would do the job nicely. But I get it. That might take a lot of re-engineering.
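
Just to spell out what I mean, something as bare-bones as this would cover composing a message (a sketch, with a placeholder URL and field name):

<form method="post" action="/compose">
  <textarea name="message" required></textarea>
  <button type="submit">Send</button>
</form>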

So how about this compromise…

Your app should work in a read-only mode without JavaScript.

Without JavaScript I should still be able to read my email in Gmail, even if you don’t let me compose, reply, or organise my messages.

Without JavaScript I should still be able to view a document in Google Docs, even if you don’t let me comment or edit the document.

Even with something as interactive as Figma or Photoshop, I think I should still be able to view a design file without JavaScript.

Making this distinction between read-only mode and read/write mode could be very useful, especially at the start of a project.

Begin by creating the read-only mode that doesn’t require JavaScript. That alone will make for a solid foundation to build upon. Now you’ve built a fallback for any unexpected failures.

Now start adding the read/write functionality. You’re enhancing what’s already there. Progressively.

Heck, you might even find some opportunities to provide some read/write functionality that doesn’t require JavaScript. But if JavaScript is needed, that’s absolutely fine.

So if you’re about to build a web app and you’re pretty sure it requires JavaScript, why not pause and consider whether you can provide a read-only version?

Progressive disclosure with HTML

Robin penned a little love letter to the details element. I agree. It is a joyous piece of declarative power.

That said, don’t go overboard with it. It’s not a drop-in replacement for more complex widgets. But it is a handy encapsulation of straightforward progressive disclosure.

Just last week I added a couple more details elements to The Session …kind of. There’s a bit of server-side conditional logic involved to determine whether details is the right element.

When you’re looking at a tune, one of the pieces of information you see is how many recordings there are of that tune. Now if there are a lot of recordings, then there’s some additional information about which other tunes this one gets recorded with. That information is extra. Mere details, if you will.

You can see it in action on this tune listing. Thanks to the details element, the extra information is available to those who want it, but by default that information is tucked away—very handy for not clogging up that part of the page.

<details>
<summary>There are 181 recordings of this tune.</summary>
This tune has been recorded together with
<ul>
<li>…</li>
<li>…</li>
<li>…</li>
</ul>
</details>

Likewise, each tune page includes any aliases for the tune (in Irish music, the same tune can have many different titles—and the same title can be attached to many different tunes). If a tune has just a handful of aliases, they’re displayed in situ. But once you start listing out more than twenty names, it gets overwhelming.

The details element rides to the rescue once again.

Compare the tune I mentioned above, which only has a few aliases, to another tune that is known by many names.

Again, the main gist is immediately available to everyone—how many aliases are there? But if you want to go through them all, you can toggle that details element open.

You can effectively think of the summary element as the TL;DR of HTML.

<details>
<summary>There are 31 other names for this tune.</summary>
<p>Also known as…</p>
</details>

There’s another classic use of the details element: frequently asked questions. In the case of The Session, I’ve marked up the house rules and FAQs inside details elements, with the rule or question as the summary.

But there’s one house rule that’s most important (“Be civil”) so that details element gets an additional open attribute.

<details open>
<summary>Be civil</summary>
<p>Contributions should be constructive and polite, not mean-spirited or contributed with the intention of causing trouble.</p>
</details>

Browser history

I woke up today to a very annoying new bug in Firefox. The browser shits the bed in an unpredictable fashion when rounding up single pixel line widths in SVG. That’s quite a problem on The Session where all the sheet music is rendered in SVG. Those thin lines in sheet music are kind of important.

Browser bugs like these are very frustrating. There’s nothing you can do from your side other than filing a bug. The locus of control is very much with the developers of the browser.

Still, the occasional regression in a browser is a price I’m willing to pay for a plurality of rendering engines. Call me old-fashioned but I still value the ecological impact of browser diversity.

That said, I understand the argument for converging on a single rendering engine. I don’t agree with it but I understand it. It’s like this…

Back in the bad old days of the original browser wars, the browser companies just made shit up. That made life a misery for web developers. The Web Standards Project knocked some heads together. Netscape and Microsoft would agree to support standards.

So that’s where the bar was set: browsers agreed to work to the same standards, but competed by having different rendering engines.

There’s an argument to be made for raising that bar: browsers agree to work to the same standards, and have the same shared rendering engine, but compete by innovating in all other areas—the browser chrome, personalisation, privacy, and so on.

Like I said, I understand the argument. I just don’t agree with it.

One reason for zeroing in on a single rendering engine is that it’s just too damned hard to create or maintain an entirely different rendering engine now that web standards are incredibly powerful and complex. Only a very large company with very deep pockets can hope to be a rendering engine player. Google. Apple. Heck, even Microsoft threw in the towel and abandoned their rendering engine in favour of Blink and V8.

And yet. Andreas Kling recently wrote about the Ladybird browser. How we’re building a browser when it’s supposed to be impossible:

The ECMAScript, HTML, and CSS specifications today are (for the most part) stellar technical documents whose algorithms can be implemented with considerably less effort and guesswork than in the past.

I’ll be watching that project with interest. Not because I plan to use the browser. I’d just like to see some evidence against the complexity argument.

Meanwhile most other browser projects are building on the raised bar of a shared browser engine. Blisk, Brave, and Arc all use Chromium under the hood.

Arc is the most interesting one. Built by the wonderfully named Browser Company of New York, it’s attempting to inject some fresh thinking into everything outside of the rendering engine.

Experiments like Arc feel like they could have more in common with tools-for-thought software like Obsidian and Roam Research. Those tools build knowledge graphs of connected nodes. A kind of hypertext of ideas. But we’ve already got hypertext tools we use every day: web browsers. It’s just that they don’t do much with the accumulated knowledge of our web browsing. Our browsing history is a boring reverse chronological list instead of a cool-looking knowledge graph or timeline.

For inspiration we can go all the way back to Vannevar Bush’s genuinely seminal 1945 article, As We May Think. The device Bush imagined, the Memex, was a direct inspiration on Douglas Engelbart, Ted Nelson, and Tim Berners-Lee.

The article describes a kind of hypertext machine that worked with microfilm. Thanks to Tim Berners-Lee’s World Wide Web, we now have a global digital hypertext system that we access every day through our browsers.

But the article also described the idea of “associative trails”:

Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified.

Our browsing histories are a kind of associative trail. They’re as unique as fingerprints. Even if everyone in the world started on the same URL, our browsing histories would quickly diverge.

Bush imagined that these associative trails could be shared:

The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities.

Heck, making a useful browsing history could be a real skill:

There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record.

Taking something personal and making it public isn’t a new idea. It was what drove the wave of web 2.0 startups. Before Flickr, your photos were private. Before Delicious, your bookmarks were private. Before Last.fm, what music you were listening to was private.

I’m not saying that we should all make our browsing histories public. That would be a security nightmare. But I am saying there’s a lot of untapped potential in our browsing histories.

Let’s say we keep our browsing histories private, but make better use of them.

From what I’ve seen of large language model tools, the people getting the most use out of them are training them on a specific corpus. Like, “take this book and then answer my questions about the characters and plot” or “take this codebase and then answer my questions about the code.” If you treat these chatbots as calculators for words they can be useful for some tasks.

Large language model tools are getting smaller and more portable. It’s not hard to imagine one getting bundled into a web browser. It feeds on your browsing history. The bigger your browsing history, the more useful it can be.

Except, y’know, for the times when it just makes shit up.

Vannevar Bush didn’t predict a Memex that would hallucinate bits of microfilm that didn’t exist.

Scholarship sponsorship

I wrote a while back about the UX London 2023 scholarship programme. Applications are still open (until May 19th) so if you know someone who you think should apply, here’s the link. As I said then:

Wondering if you should apply? It’s hard to define exactly who qualifies for a diversity scholarship, but basically, the more your life experience matches mine, the less qualified you are. If you are a fellow able-bodied middle-aged heterosexual white dude with a comfortable income, do me a favour and don’t apply. Everyone else, go for it.

The response so far has been truly amazing—so many great applicants!

And therein lies the problem. Clearleft can only afford to sponsor a limited number of people. It’s going to be very, very, very hard to have to whittle this down.

But perhaps you can help. Do you work at a company that could afford to sponsor some places? If so, please get in touch!

Just to be clear, this would be different from the usual transactional sponsorship opportunities for UX London where we offer you a package of benefits in exchange for sponsorship. In the case of diversity scholarships, all we can offer you is our undying thanks.

I’ll admit I have an ulterior motive in wanting to get as many of the applicants as possible to UX London. The applications are positively aglow with the passion and fervour of the people applying. Frankly, that’s exactly who I want to hang out with at an event.

Anyway, on the off chance that your employer might consider this investment in the future of UX, spread the word that we’d love to have other companies involved in the UX London diversity scholarship programme.

Design transformation on the Clearleft podcast

Boom! The Clearleft podcast is back!

The first episode of season four just dropped. It’s all about design transformation.

I’ve got to be honest, this episode is a little inside baseball. It’s a bit navel-gazey and soul-searching as I pick apart the messaging emblazoned on the Clearleft website:

The design transformation consultancy.

Whereas most of the previous episodes of the podcast would be of interest to our peers—fellow designers—this one feels like it might be of more interest to potential clients. But I hope it’s not too sales-y.

You’ll hear from Danish designer Maja Raunbak, and American in Amsterdam Nick Thiel as well as Clearleft’s own Chris Pearce. And I’ve sampled a talk from the Leading Design archives by Stuart Frisby.

The episode clocks in at a brisk eighteen and a half minutes. Have a listen.

While you’re at it, take this opportunity to subscribe to the Clearleft podcast on Overcast, Spotify, Apple, Google or by using a good ol’-fashioned RSS feed. That way the next episodes in the season will magically appear in your podcatching software of choice.

But I’m not making any promises about when that will be. Previously, I released new episodes in a season on a weekly basis. This time I’m going to release each episode whenever it’s ready. That might mean there’ll be a week or two between episodes. Or there might be a month or so between episodes.

I realise that this unpredictable release cycle is the exact opposite of what you’re supposed to do, but it’s actually the most sensible way for me to make sure the podcast actually gets out. I was getting a bit overwhelmed with the prospect of having six episodes ready to launch over a six week period. What with curating UX London and other activities, it would’ve been too much for me to do.

So rather than delay this season any longer, I’m going to drop each episode whenever it’s done. Chaos! Anarchy! Dogs and cats living together!

More speakers for UX London 2023

I’d like to play it cool when I announce the latest speakers for UX London 2023, like I could be all nonchalant and say, “oh yeah, did I not mention these people are also speaking…?”

But I wouldn’t be able to keep up that façade for longer than a second. The truth is I am excited to the point of skittish giggliness about this line-up.

Look, I’ll let you explore these speakers for yourself while I try to remain calm and simply enumerate the latest additions…

The line-up is almost complete now! Just one more speaker to announce.

I highly recommend you get your UX London ticket if you haven’t already. You won’t want to miss this!

Steam

Picture someone tediously going through a spreadsheet that someone else has filled in by hand and finding yet another error.

“I wish to God these calculations had been executed by steam!” they cry.

The year was 1821 and technically the spreadsheet was a book of logarithmic tables. The frustrated cry came from Charles Babbage, who channeled his frustration into a scheme to create the world’s first computer.

His difference engine didn’t work out. Neither did his analytical engine. He’d spend his later years taking his frustrations out on street musicians, which—as a former busker myself—earns him a hairy eyeball from me.

But we’ve all been there, right? Some tedious task that feels soul-destroying in its monotony. Surely this is exactly what machines should be doing?

I have a hunch that this is where machine learning and large language models might turn out to be most useful. Not in creating breathtaking works of creativity, but in menial tasks that nobody enjoys.

Someone was telling me earlier today about how they took a bunch of haphazard notes in a client meeting. When the meeting was done, they needed to organise those notes into a coherent summary. Boring! But ChatGPT handled it just fine.

I don’t think that use-case is going to appear on the cover of Wired magazine anytime soon but it might be a truer glimpse of the future than any of the breathless claims being eagerly bandied about in Silicon Valley.

You know the way we no longer remember phone numbers, because, well, why would we now that we have machines to remember them for us? I’d be quite happy if machines did that for the annoying little repetitive tasks that nobody enjoys.

I’ll give you an example based on my own experience.

Regular expressions are my kryptonite. I’m rubbish at them. Any time I have to figure one out, the knowledge seeps out of my brain before long. I think that’s because I kind of resent having to internalise that knowledge. It doesn’t feel like something a human should have to know. “I wish to God these regular expressions had been calculated by steam!”

Now I can get a chatbot with a large language model to write the regular expression for me. I still need to describe what I want, so I need to write the instructions clearly. But all the gobbledygook that I’m writing for a machine now gets written by a machine. That seems fair.

Mind you, I wouldn’t blindly trust the output. I’d take that regular expression and run it through a chatbot, maybe a different chatbot running on a different large language model. “Explain what this regular expression does,” would be my prompt. If my input into the first chatbot matches the output of the second, I’d have some confidence in using the regular expression.
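
For example (and this is a made-up illustration, not a transcript), suppose I asked for a regular expression to match a date like 2023-05-17. The first chatbot might hand back:

^\d{4}-\d{2}-\d{2}$

If the second chatbot explains that as “four digits, a hyphen, two digits, a hyphen, two digits, anchored to the whole string”, which is what I asked for, then I’d be reasonably confident in using it.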

A friend of mine told me about using a large language model to help write SQL statements. He described his database structure to the chatbot, and then described what he wanted to select.

Again, I wouldn’t use that output without checking it first. But again, I might use another chatbot to do that checking. “Explain what this SQL statement does.”

Playing chatbots off against each other like this is kinda how machine learning works under the hood: generative adversarial networks.

Of course, the task of having to validate the output of a chatbot by checking it with another chatbot could get quite tedious. “I wish to God these large language model outputs had been validated by steam!”

Sounds like a job for machines.

Disclosure

You know how when you’re on hold to any customer service line you hear a message that thanks you for calling and claims your call is important to them? The message always includes a disclaimer about calls possibly being recorded “for training purposes.”

Nobody expects that any training is ever actually going to happen—surely we would see some improvement if that kind of iterative feedback loop were actually in place. But we most certainly want to know that a call might be recorded. Recording a call without disclosure would be unethical and illegal.

Consider chatbots.

If you’re having a text-based (or maybe even voice-based) interaction with a customer service representative that doesn’t disclose that its output is the result of large language models, that too would be unethical. But, at the present moment in time, it would be perfectly legal.

That needs to change.

I suspect the necessary legislation will pass in Europe first. We’ll see if the USA follows.

In a way, this goes back to my obsession with seamful design. With something as inherently varied as the output of large language models, it’s vital that people have some way of evaluating what they’re told. I believe we should be able to see as much of the plumbing as possible.

The bare minimum amount of transparency is revealing that a machine is in the loop.

This shouldn’t be a controversial take. But I guarantee we’ll see resistance from tech companies trying to sell their “AI” tools as seamless, indistinguishable drop-in replacements for human workers.