Journal tags: collaboration

UX London 2024, day three

UX London runs for three days, from June 18th to 20th. If you can, you should get a ticket for all three days. But if you can’t, you can get a one-day ticket. Think of each individual day as being its own self-contained conference.

The flow of the three-day event kind of mimics the design process itself. It starts with planning and research. Then it gets into the nitty-gritty product design details. Then it gets meta…

Day three, Thursday, June 20th is about design systems and design ops.

Maintenance matters, not just for the products and services you’re designing, but for the teams you’re designing with. You can expect a barrage of knowledge bombs on alignment and collaboration.

The bombardment commences with four great talks in the morning.

  1. Brad Frost kicks things off with the question: is atomic design dead? Brad will show you how to imagine what a global design system might look like.
  2. Alicia Calderón is going to be talking about unlocking collaboration. Alicia will show you how to use a framework for creating lasting alignment between developers and designers.
  3. Benaz Irani will be speaking about empathy overload. Benaz will show you how to strike a balance between compassion and confidence within your team.
  4. Kara Kane is going to talk about why UX building blocks need standards. Kara will show you how to use standards to enable adoption and contribution to design systems.

After the lunch break you’ll have your pick of four superb workshops. It’s not an easy choice.

  1. Brad Frost is not only giving a talk in the morning, he’s also leading an afternoon workshop on the design system ecosystem. Brad will show you how to unpack the many layers of the design system layer cake so you can deliver sturdy user interfaces and help teams work better together.
  2. Stéphanie Walter is running a workshop on designing adaptive reusable components and pages. Stéphanie will show you how to plan your content and information architecture to help build more reusable components.
  3. Tom Kerwin will be giving a workshop on multiverse mapping. Tom will show you how to pin down your product strategy and to align your team around the stuff that matters.
  4. Luke Hay is running a workshop on bridging the gap between Research and Design. Luke will show you how to take practical steps to ensure that designers and researchers are working as a seamless team.

Finally we’ll finish the whole event with one last closing keynote. I’m very excited to announce who that’s going to be—I’ll only keep you on tenterhooks for a short while longer.

When you step back and look at what’s on offer, day three of UX London looks pretty unmissable. If you work with a design system, or heck, if you just work with other people, this is the day for you. So get your ticket now.

But be sure to use this discount code I’ve prepared just for you to get a whopping 20% off the ticket price: JOINJEREMY.

Design engineer

It’s been just over two years since Chris wrote his magnum opus about The Great Divide. It really resonated with me, and a lot of other people.

The crux of it is that the phrase “front-end development” has become so broad and applies to so many things, that it has effectively lost its usefulness:

Two front-end developers are sitting at a bar. They have nothing to talk about.

Brad nailed the differences in responsibilities when he described them as front-of-the-front-end and back-of-the-front-end web development:

A front-of-the-front-end developer is a web developer who specializes in writing HTML, CSS, and presentational JavaScript code.

A back-of-the-front-end developer is a web developer who specializes in writing JavaScript code necessary to make a web application function properly.

In my experience, the term “full stack developer” is often self-applied by back-of-the-front-end developers who perhaps underestimate the complexity of front-of-the-front development.

Me, I’m very much a front-of-the-front developer. And the dev work we do at Clearleft very much falls into that realm.

This division of roles and responsibilities reminds me of a decision we made in the founding days of Clearleft. Would we attempt to be a full-service agency, delivering everything from design to launch? Or would we specialise? We decided to specialise, doubling down on UX design, which was at the time an under-served area. But we still decided to do front-end development. We felt that working with the materials of the web would allow us to deliver better UX.

We made a conscious decision not to do back-end development. Partly it was a question of scale. If you were a back-end shop, you probably had to double down on one stack: PHP or Ruby or Python. We didn’t want to have to turn away any clients based on their tech stack. Of course this meant that we had to partner with other agencies that specialised in those stacks, and that’s what we did—we had trusted partners for Drupal development, Rails development, Wordpress development, and so on.

The world of front-end development didn’t have that kind of fragmentation. The only real split at the time was between Flash agencies and web standards (HTML, CSS, and JavaScript).

Overall, our decision to avoid back-end development stood us in good stead. There were plenty of challenges though. We had to learn how to avoid “throwing stuff over the wall” at whoever would be doing the final back-end implementation. I think that’s why we latched on to design systems so early. It was clearly a better deliverable for the people building the final site—much better than mock-ups or pages.

Avoiding back-end development meant we also avoided long-term lock-in with maintenance, security, hosting, and so on. It might sound strange for an agency to actively avoid long-term revenue streams, but at Clearleft it’s always been our philosophy to make ourselves redundant. We want to give our clients everything they need—both in terms of deliverables and knowledge—so that they aren’t dependent on us.

That all worked great as long as there was a clear distinction between front-end development and back-end development. Front-end development was anything that happened in a browser. Back-end development was anything that happened on the server.

But as the waters muddied and complex business logic migrated from the server to the client, our offering became harder to clarify. We’d tell clients that we did front-end development (meaning we’d supply them with components formed of HTML, CSS, and JavaScript) and they might expect us to write application logic in React.

That’s why Brad’s framing resonated with me. Clearleft does front-of-front-end development, but we liaise with our clients’ back-of-the-front-end developers. In fact, that bridging work—between design and implementation—is where devs at Clearleft shine.

As much as I can relate to the term front-of-front-end, it doesn’t exactly roll off the tongue. I don’t expect it to be anyone’s job title anytime soon.

That’s why I was so excited by the term “design engineer,” which I think I first heard from Natalya Shelburne. There’s even a book about it and the job description sounds very much like the front-of-the-front-end work but with a heavy emphasis on the collaboration and translation between design and implementation. As Trys puts it:

What I love about the name “Design Engineer”, is that it’s entirely focused on the handshake between those two other roles.

There’s no mention of UI, CSS, front-end, design systems, documentation, prototyping, tooling or any ‘hard’ skills that could be used in the role itself.

Trys has been doing some soul-searching and has come to the conclusion “I think I might be a design engineer…”. He has also written on the Clearleft blog about how well the term describes design and development at Clearleft.

Personally, I’m not a fan of using the term “engineer” to refer to anyone who isn’t actually a qualified engineer—I explain why in my talk Building—but I accept that that particular ship has sailed. And the term “design developer” just sounds odd. So I’m all in on using the term “design engineer”.

I can imagine this phrase being used in a job ad. It could also be attached to levels: a junior design engineer, a mid-level design engineer, a senior design engineer; each level having different mixes of code and collaboration (maybe a head of design engineering never writes any code).

Trys has written a whole series of posts on the nitty-gritty work involved in design engineering. I highly recommend reading all of them.

Overlay gap

I think a lot about Danielle’s talk at Patterns Day last year.

Around about the six minute mark she starts talking about gaps and overlaps.

Gaps are where hidden complexity lives. If we don’t have a category to cover it, in effect it becomes invisible. But that doesn’t mean it’s not there. Unidentified gaps cause inconsistency and confusion.

Overlaps occur when two separate categories encompass some of the same areas of responsibility. They cause conflict, duplication of effort, and unnecessary friction.

This is the bit I keep thinking about. It’s such an insightful lens to view things through. On just about any project, tensions are almost always due to either gaps (“I thought someone else was doing that”) or overlaps (“Oh, you’re doing that? I thought we were doing that”).

When I was talking to Gerry on his new podcast recently, we were trying to figure out why web performance is in such a woeful state. I mused that there may be a gap. Perhaps designers think it’s a technical problem and developers think it’s a design problem. I guess you could try to bridge this gap by having someone whose job is to focus entirely on performance. But I suspect the better—but harder—solution is to create a shared culture of performance, of the kind Lara wrote about in her book:

Performance is truly everyone’s responsibility. Anyone who affects the user experience of a site has a relationship to how it performs. While it’s possible for you to single-handedly build and maintain an incredibly fast experience, you’d be constantly fighting an uphill battle when other contributors touch the site and make changes, or as the Web continues to evolve.

I suspect there’s a similar ownership gap at play when it comes to the ubiquitous obtrusive overlays that are plastered on so many websites these days.

Kirill Grouchnikov recently published a gallery of screenshots showcasing the beauty of modern mobile websites:

There are two things common between the websites in these screenshots that I took yesterday.

  1. They are beautifully designed, with great typography, clear branding, all optimized for readability.
  2. I had to install Firefox, Adblock Plus and uBlock Origin, as well as manually select and remove additional elements such as subscription overlays.

The web can be beautiful. Except it’s not right now.

How is this dissonance possible? How can designers and developers who clearly care about the user experience be responsible for unleashing such user-hostile interfaces?

PM/Legal/Marketing made me do it

I get that. But surely the solution can’t be to shrug our shoulders, pass the buck, and say “not my job.” Somebody designed each one of those obtrusive overlays. Somebody coded up each one and pushed them into production.

It’s clear that this is a problem of communication and understanding, rather than a technical problem. As always. We like to talk about how hard and complex our technical work is, but frankly, it’s a lot easier to get a computer to do what you want than to convince a human. Not least because you also need to understand what that other human wants. As Danielle says:

Recognising the gaps and overlaps is only half the battle. If we apply tools to a people problem, we will only end up moving the problem somewhere else.

Some issues can be solved with better tools or better processes. In most of our workplaces, we tend to reach for tools and processes by default, because they feel easier to implement. But as often as not, it’s not a technology problem. It’s a people problem. And the solution actually involves communication skills, or effective dialogue.

So let’s say it is someone in the marketing department who is pushing to have an obtrusive newsletter sign-up form get shoved in the user’s face. Talk to them. Figure out what their goals are—what outcome are they hoping to get to. If they don’t seem to understand the user-experience implications, talk to them about that. But it needs to be a two-way conversation. You need to understand what they need before you start telling them what you want.

I realise that makes it sound patronisingly simple, and I know that in actuality it’s a Sisyphean task. It may be that genuine understanding between people is the wickedest of design problems. But even if this problem seems insurmountable, at least you’d be tackling the right problem.

Because the web can’t survive like this.

Unity

It’s official. Microsoft’s Edge browser is running on the Blink rendering engine and it’s available now.

Just over a year ago, I wrote about my feelings on this decision:

I’m sure the decision makes sound business sense for Microsoft, but it’s not good for the health of the web.

The importance of browser engine diversity is beautifully illustrated (literally) in Rachel’s The Ecological Impact of Browser Diversity.

But I was chatting to Amber the other day, and I mentioned how I can see the theoretical justification for Microsoft’s decision …even if I don’t quite buy it myself.

Picture, if you will, something I’ll call the bar of unity. It’s a measurement of how much collaboration is happening between browser makers.

In the early days of the web, the bar of unity was very low indeed. The two main browser vendors—Microsoft and Netscape—not only weren’t collaborating, they were actively splintering the languages of the web. One of them would invent a new HTML element, and the other would invent a completely different element to do the same thing (remember abbr and acronym?). One of them would come up with one model for interacting with a document through JavaScript, and the other would come up with a completely different model to do the same thing (remember document.all and document.layers?).
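For anyone who never had to write JavaScript in that era, here’s a hedged sketch (mine, not from any particular site) of what that split meant in practice: the same element lookup written three ways, once per proprietary model, plus the standard that eventually won out.

```javascript
// Branching on proprietary DOM models, 1990s-style.
function getElementCrossBrowser(id) {
  if (document.getElementById) {
    return document.getElementById(id); // the W3C DOM standard
  } else if (document.all) {
    return document.all[id];            // Microsoft's proprietary collection
  } else if (document.layers) {
    return document.layers[id];         // Netscape's proprietary layer model
  }
  return null;
}
```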

There wasn’t enough collaboration. Our collective anger at this situation led directly to the creation of The Web Standards Project.

Eventually, those companies did start collaborating on standards at the W3C. The bar of unity was raised.

This has been the situation for most of the web’s history. Different browser makers agreed on standards, but went their own separate ways on implementation. That’s where they drew the line.

Now that line is being redrawn. The bar of unity is being raised. Now, a number of separate browser makers—Google, Samsung, Microsoft—not only collaborate on standards but also on implementation, sharing a codebase.

The bar of unity isn’t right at the top. Browsers can still differentiate in their user interfaces. Edge, for example, can—and does—offer very sensible defaults for blocking trackers. That’s much harder for Chrome to do, given that Google are amongst the worst offenders.

So these browsers are still competing, but the competition is no longer happening at the level of the rendering engine.

I can see how this looks like a positive development. In fact, from this point of view, Mozilla are getting in the way of progress by having a separate codebase (yes, this is a genuinely-held opinion by some people).

On the face of it, more unity sounds good. It sounds like more collaboration. More cooperation.

But then I think of situations where complete unity isn’t necessarily a good thing. Take political systems, for example. If you have hundreds of different political parties, that’s not ideal. But if you only have one political party, that’s very bad indeed!

There’s a sweet spot somewhere in between where there’s a base level of agreement and cooperation, but there’s also plenty of room for disagreement and opposition. Right now, the browser landscape is just about still in that sweet spot. It’s like a two-party system where one party has a crushing majority. Checks and balances exist, but they’re in peril.

Firefox is one of the last remaining representatives offering an alternative. The least we can do is support it.

Toast

Shockwaves rippled across the web standards community recently when it appeared that Google Chrome was unilaterally implementing a new element called toast. It turns out that’s not the case, but the confusion is understandable.

First off, this all kicked off with the announcement of “intent to implement”. That makes it sound like Google are intending to, well, …implement this. In fact “intent to implement” really means “intend to mess around with this behind a flag”. The language is definitely confusing and this is something that will hopefully be addressed.

Secondly, Chrome isn’t going to ship a toast element. Instead, this is a proposal for a custom element currently called std-toast. Should the experiment prove successful, it’s not a foregone conclusion that the final element will be named toast (minus the sexually-transmitted-disease prefix). If this turns out to be a useful feature, there will surely be a discussion between implementers about the naming of the finished element.

This is the ideal candidate for a web component. It makes total sense to create a custom element along the lines of std-toast. At first I was confused about why this was happening inside of a browser instead of first being created as a standalone web component, but it turns out that there’s been a fair bit of research looking at existing implementations in libraries and web components. So this actually looks like a good example of paving an existing cowpath.
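To give a flavour of what such a component involves, here’s a minimal sketch of a toast-style notification as a standalone custom element. The tag name, timing, and behaviour are my illustrative guesses, not the actual std-toast proposal.

```javascript
// An invented toast element: announces its message politely, then dismisses itself.
class DemoToast extends HTMLElement {
  connectedCallback() {
    this.setAttribute('role', 'status');   // exposed to assistive technology
    setTimeout(() => this.remove(), 3000); // auto-dismiss after three seconds
  }
}
customElements.define('demo-toast', DemoToast);

// Usage: create a toast and pop it into the page.
const toast = document.createElement('demo-toast');
toast.textContent = 'Saved!';
document.body.append(toast);
```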

But it didn’t come across that way. The timing of announcements felt like this was something that was happening without prior discussion. Terence Eden writes:

It feels like a Google-designed, Google-approved, Google-benefiting idea which has been dumped onto the Web without any consideration for others.

I know that isn’t the case. And I know how many dedicated people have worked hard on this proposal.

Adrian Roselli also remarks on the optics of this situation:

To be clear, while I think there is value in minting a native HTML element to fill a defined gap, I am wary of the approach Google has taken. A repo from a new-to-the-industry Googler getting a lot of promotion from Googlers, with Googlers on social media doing damage control for the blowback, WHATWG Googlers handling questions on the repo, and Google AMP strongly supporting it (to reduce its own footprint), all add up to raise alarm bells with those who advocated for a community-driven, needs-based, accessible web.

Dave Cramer made a similar point:

But my concern wasn’t so much about the nature of the new elements, but of how we learned about them and what that says about how web standardization works.

So there’s a general feeling (outside of Google) that there’s something screwy here about the order of events. A lot of discussion and research seems to have happened in isolation before announcing the intent to implement:

It does not appear that any discussions happened with other browser vendors or standards bodies before the intent to implement.

Why is this a problem? Google is seeking feedback on a solution, not on how to solve the problem.

Going back to my early confusion about putting a web component directly into a browser, this question on Discourse echoes my initial reaction:

Why not release std-toast (and other elements in development) as libraries first?

It turns out that std-toast and other in-browser web components are part of an idea called layered APIs. In theory this is an initiative in the spirit of the extensible web manifesto.

The extensible web movement focused on exposing low-level APIs to developers: the fetch API, the cache API, custom elements, Houdini, and all of those other building blocks. Layered APIs, on the other hand, focuses on high-level features …like, say, an HTML element for displaying “toast” notifications.

Layered APIs is an interesting idea, but I’m worried that it could be used to circumvent discussion between implementers. It’s a route to unilaterally creating new browser features first and standardising after the fact. I know that’s how many features already end up in browsers, but I think that the sooner that authors, implementers, and standards bodies get a say, the better.

I certainly don’t think this is a good look for Google given the debacle of AMP’s “my way or the highway” rollout. I know that’s a completely different team, but the external perception of Google amongst developers has been damaged by the AMP project’s anti-competitive abuse of Google’s power in search.

Right now, a lot of people are jumpy about Microsoft’s move to Chromium for Edge. My friends at Microsoft have been reassuring me that while it’s always a shame to reduce browser engine diversity, this could actually be a good thing for the standards process: Microsoft could theoretically keep Google in check when it comes to what features are introduced to the Chromium engine.

But that only works if there is some kind of standards process. Layered APIs in general—and std-toast in particular—hint at a future where a single browser vendor can plough ahead on their own. I sincerely hope that’s a misreading of the situation and that this has all been an exercise in miscommunication and misunderstanding.

Like Dave Cramer says:

I hear a lot about how anyone can contribute to the web platform. We’ve all heard the preaching about incubation, the Extensible Web, working in public, paving the cowpaths, and so on. But to an outside observer this feels like Google making all the decisions, in private, and then asking for public comment after the feature has been designed.

Navigating Team Friction by Lara Hogan

It’s day two of An Event Apart Seattle (Special Edition). Lara is here to tell us about Navigating Team Friction. These are my notes…

Lara started as a developer, and then moved into management. Now she consults with other organisations. So she’s worked with teams of all sizes, and her conclusion is that humans are amazing. She has seen teams bring a site down; she has seen teams ship amazing features; she has seen teams fall apart because they had to move desks. But it’s magical that people can come together and build something.

Bruce Tuckman carried out research into the theory of group dynamics. He published stages of group development. The four common stages are:

  1. Forming. The group is coming together. There is excitement.
  2. Storming. This is when we start to see some friction. This is necessary.
  3. Norming. Things start to iron themselves out.
  4. Performing. Now you’re in the flow state and you’re shipping.

So if your team is storming (experiencing friction), that’s absolutely normal. It might be because of disagreement about processes. But you need to move past the friction. Team friction impacts your co-workers, company, and users.

An example. Two engineers passive-aggressively commenting on each other’s code reviews; they feign surprise at the other’s technology choices; one rewrites the other’s code; one ships to production without code review; a senior team member or manager has to step in. But it costs a surprising amount of time and energy before a manager even notices and steps in.

Brains

The Hulk gets angry. This is human. We transform into different versions of ourselves when we are overcome by our emotions.

Lara has learned a lot about management by reading about how our brains work. We have a rational part of our brain, the pre-frontal cortex. It’s very different to our amygdala, a much more primal part of our brain. It categorises input into either threat or reward. If a threat is dangerous enough, the amygdala takes over. The pre-frontal cortex is too slow to handle dangerous situations. So when you have a Hulk moment, that was probably an amygdala hijack.

We have six core needs that are open to being threatened (leading to an amygdala hijacking):

  1. Belonging. Community, connection; the need to belong to a tribe. From an evolutionary perspective, this makes sense—we are social animals.
  2. Improvement/Progress. Progress towards purpose, improving the lives of others. We need to feel that what we do matters, and that we are learning.
  3. Choice. Flexibility, autonomy, decision-making. The power to make decisions over your own work.
  4. Equality/Fairness. Access to resources and information; equal reciprocity. We have an inherent desire for fairness.
  5. Predictability. Resources, time, direction, future challenges. We don’t like too many surprises …but we don’t like too much routine either. We want a balance.
  6. Significance. Status, visibility, recognition. We want to feel important. Being assigned to a project you think is useless feels awful.

Those core needs are B.I.C.E.P.S. Thinking back to your own Hulk moment, which of those needs was threatened?

We value those needs differently. Knowing your core needs is valuable.

Desk Moves

Lara has seen the largest displays of human emotion during something as small as moving desks. When you’re asked to move your desk, your core need of “Belonging” may be threatened. Or it may be a surprise that disrupts the core need of “Improvement/Progress.” If a desk move is dictated to you, it feels like “Choice” is threatened. The move may feel like it favours some people over others, threatening “Equality/Fairness.” The “Predictability” core need may be threatened by an unexpected desk move. If the desk move feels like a demotion, your core need of “Significance” will be threatened.

We are not mind readers, so we can’t see when someone’s amygdala takes over. But we can look out for the signs. Forms of resistance can be interpreted as data. The most common responses when a threat is detected are:

  1. Doubt. People double-down on the status quo; they question the decision.
  2. Avoidance. Avoiding the problem; too busy to help with the situation.
  3. Fighting. People create arguments against the decision. They’ll use any logic they can. Or they simply refuse.
  4. Bonding. Finding someone else who is also threatened and grouping together.
  5. Escape-route. Avoiding the threat by leaving the company.

All of these signals are data. Rather than getting frustrated with these behaviours, use them as valuable data. Try not to feel threatened yourself by any of these behaviours.

Open questions are a powerful tool in your toolbox. Asked from a place of genuine honesty and curiosity, open questions help people feel less threatened. Closed questions are questions that can be answered with “yes” or “no”. When you spot resistance, get some one-on-one time and try to ask open questions:

  • What do you think folks are liking or disliking about this so far?
  • I wanted to get your take on X. What might go wrong? What do you think might be good about it?
  • What feels most upsetting about this?

You can use open questions like these to map resistance to threatened core needs. Then you can address those core needs.

This is a good time to loop in your manager. It can be very helpful to bounce your data off someone else and get their help. De-escalating resistance is a team effort.

Communication ✨

Listen with compassion, kindness, and awareness.

  • Reflect on the dynamics in the room. Maybe somebody thinks a topic is very important to them. Be aware of your medium. Your body language; your tone of voice; being efficient with words could be interpreted as a threat. Consider the room’s power dynamics. Be aware of how influential your words could be. Is this person in a position to take the action I’m suggesting?
  • Elevate the conversation. Meet transparency with responsibility.
  • Assume best intentions. Remember the prime directive. Practice empathy. Ask yourself what else is going on for this person in their life.
  • Listen to learn. Stay genuinely curious. This is really hard. Remember your goal is to understand, not make judgement. Prepare to be surprised when you walk into a room. Operate under the assumption that you don’t have the whole story. Be willing to have your mind changed …no, be excited to have your mind changed!

These tips are part of mindful communication. amy.tech has some great advice for mindful communication in code reviews.

Feedback

Mindful communication won’t solve all your problems. There are times when you’ll have to give actionable feedback. The problem is that humans are bad at giving feedback, and we’re really bad at receiving feedback. We actively avoid feedback. Sometimes we try to give constructive feedback in a compliment sandwich—don’t do that.

We can get better at giving and receiving feedback.

Ever had someone say, “Hey, you’re doing a great job!” It feels good for a few minutes, but what we crave is feedback that addresses our core needs.

Feedback can be mapped on a two-by-two grid: one axis runs from general to specific and actionable; the other from positive feedback to negative feedback.

The feedback equation starts with an observation (“Your emails are often short”)—it’s not how you feel about the behaviour. Next, describe the impact of the behaviour (“The terseness of your emails makes me confused”). Then pose a question or request (“Can you explain why you write your emails that way?”).

observation + impact + question/request

Ask people about their preferred feedback medium. Some people prefer to receive feedback right away. Others prefer to digest it. Ask people if it’s a good time to give them feedback. Pro tip: when you give feedback, ask people how they’d like to receive feedback in the future.

Prepare your brain to receive feedback. It takes six seconds for your amygdala to chill out. Take six seconds before responding. If you can’t de-escalate your amygdala, ask the person giving feedback to come back later.

Think about one piece of feedback you’ll ask for back at work. Write it down. When you’re back at work, ask about it.

You’ll start to notice when your amygdala or pre-frontal cortex is taking over.

Prevention

Talking one-on-one is the best way to avoid team friction.

Retrospectives are a great way of normalising talking about Hard Things and team friction.

It can be helpful to have a living document that states team processes and expectations (how code reviews are done; how much time is expected for mentoring). Having it written down makes it a North star you can reference.

Mapping out roles and responsibilities is helpful. There will be overlaps in that Venn diagram. The edges will be fuzzy.

What if you disagree with what management says? The absence of trust is at the centre of most friction.

              Disagree                Agree
Commit        Mature and Transparent  Easiest
Don’t Commit  Acceptable but Tough    Bad Things

Practice finding other ways to address B.I.C.E.P.S. You might not be able to fix the problem directly—the desk move still has to happen.

But no matter how empathic or mindful you are, sometimes it will be necessary to bring in leadership or HR. Loop them in. Restate the observation + impact. State what’s been tried, and what you think could help now. Throughout this process, take care of yourself.

Remember, storming is natural. You are now well-equipped to weather that storm.


Problem space

Adam Wathan wrote an article recently called CSS Utility Classes and “Separation of Concerns”. In it, he documents his journey through different ways of thinking about CSS. A lot of it is really familiar.

Phase 1: “Semantic” CSS

Ah, yes! If you’ve been in the game for a while then this will be familiar to you. The days when we used to strive to keep our class names to a minimum and use names that described the content. But, as Adam points out:

My markup wasn’t concerned with styling decisions, but my CSS was very concerned with my markup structure.

Phase 2: Decoupling styles from structure

This is the work pioneered by Nicole with OOCSS, and followed later by methodologies like BEM and SMACSS.

This felt like a huge improvement to me. My markup was still “semantic” and didn’t contain any styling decisions, and now my CSS felt decoupled from my markup structure, with the added bonus of avoiding unnecessary selector specificity.

Amen!

But then Adam talks about the issues when you have two visually similar components that are semantically very different. He shows a few possible solutions and asks this excellent question:

For the project you’re working on, what would be more valuable: restyleable HTML, or reusable CSS?

For many projects reusable CSS is the goal. But not all projects. On the Code For America project, the HTML needed to be as clean as possible, even if that meant more brittle CSS.

Phase 3: Content-agnostic CSS components

Naming things is hard:

The more a component does, or the more specific a component is, the harder it is to reuse.

Adam offers some good advice on naming things for maximum reusability. It’s all good stuff, and this would be the point at which I would stop. At this point there’s a nice balance between reusability, readability, and semantic meaning.

But Adam goes further…

Phase 4: Content-agnostic components + utility classes

Okay. The occasional utility class (for alignment and clearing) can be very handy. This is definitely the point to stop though, right?

Phase 5: Utility-first CSS

Oh God, no!

Once this clicked for me, it wasn’t long before I had built out a whole suite of utility classes for common visual tweaks I needed, things like:

  • Text sizes, colors, and weights
  • Border colors, widths, and positions
  • Background colors
  • Flexbox utilities
  • Padding and margin helpers

If one drink feels good, then ten drinks must be better, right?

At this point there is no benefit to even having an external stylesheet. You may as well use inline styles. Ah, but Adam has anticipated this and counters with this difference between inline styles and having utility classes for everything:

You can’t just pick any value you want; you have to choose from a curated list.

Right. But that isn’t a technical solution, it’s a cultural one. You could just as easily have a curated list of allowed inline style properties and values. If you are in an environment where people won’t simply create a new utility class every time they want to style something, then you are also in an environment where people won’t create new inline style combinations every time they want to style something.
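To make that point concrete, here’s a sketch of how such a cultural agreement could just as easily be policed for inline styles. The curated list and the function are hypothetical, purely for illustration.

```javascript
// Flag any inline style that isn't on the team's agreed list.
const allowedInlineStyles = new Set([
  'text-align: center',
  'clear: both',
]);

function auditInlineStyles(root = document) {
  for (const el of root.querySelectorAll('[style]')) {
    const declaration = el.getAttribute('style').trim();
    if (!allowedInlineStyles.has(declaration)) {
      console.warn('Inline style not on the curated list:', declaration, el);
    }
  }
}
```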

I think Adam has hit on something important here, but it’s not about utility classes. His suggestion of “utility-first CSS” will only work if the vocabulary is strictly adhered to. For that to work, everyone touching the code needs to understand the system and respect the boundaries of it. That understanding and respect is far, far more important than any particular way of structuring HTML and CSS. No technical solution can replace that sort of agreement …not even slapping !important on every declaration to make them immutable.

I very much appreciate the efforts that people have put into coming up with great naming systems and methodologies, even the ones I don’t necessarily agree with. They’re all aiming to make that overlap of HTML and CSS less painful. But the really hard problem is where people overlap.

100 words 053

When I got back from Bletchley Park yesterday, I immediately started huffduffing more stories about cryptography and code-breaking.

One of the stories I found was an episode of Ockham’s Razor featuring Professor Mark Dodgson. He talks about the organisational structure at Bletchley Park:

The important point was the organization emphasised team-working and open knowledge sharing where it was needed, and demarcation and specialisation where it was most appropriate.

This reminds me of another extraordinary place, also displaying remarkable levels of collaboration, that has an unusual lack of traditional hierarchies and structure: CERN.

Bletchley Park produced the computer. CERN produced the web.

Notes from the edge

I went up to London for the Edge Conference on Friday. It’s not your typical conference. Instead of talks, there are panels, but not the crap kind, where nobody says anything of interest: these panels are ruthlessly curated and prepared. There’s lots of audience interaction too, but again, not the crap kind, where one or two people dominate the discussion with their own pet topics: questions are submitted ahead of time, and then you are called upon to ask them at the right moment. It’s like Question Time for the web.

Components

The first panel was on that hottest of topics: Web Components. Peter Gasston kicked it off with a superb introduction to the subject. Have a read of his equally-excellent article in Smashing Magazine to get the gist.

Needless to say, this panel covered similar ground to the TAG meetup I attended a little while back, and left me with similar feelings: I’m equal parts excited and nervous; optimistic and worried. If Web Components work out, and we get a kind of emergent semantics of UI widgets, it’ll be a huge leap forward for the web. But if we end up with a Tower of Babel, things could get very messy indeed. We’ll probably get both at once. And I think that’ll be (mostly) okay.

I butted into the discussion when the topic of accessibility came up. I was a little worried about what I was hearing, which was mainly, “Oh, ARIA takes care of the accessibility.” I felt like Web Components were passing the buck to ARIA, which would be fine if it weren’t for the fact that ARIA can’t cover all the possible use-cases of Web Components.

I chatted about this with Derek and Nicole during the break, but I’m not sure if I was articulating my thoughts very well, so I’ll have another stab at it here:

Let me set the scene for Web Components…

Historically, HTML has had a limited vocabulary for expressing interface widgets—mostly a bunch of specialised form fields like, say, the select element. The plus side is that there’s a consensus of understanding among the browsers, so you don’t have to explain what a select element does; the browsers already know. The downside is that whenever we want to add a new interface element like input type="range", it takes time to get into browsers and through the standards process. Web Components allow you to conjure up interface elements, and you don’t have to lobby browser makers or standards groups in order to make browsers understand your newly-minted element: you provide all the behavioural and styling instructions in one bundle.
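Here’s a hedged sketch of what that one bundle looks like in practice: an invented disclosure widget carrying its own behaviour and its own scoped styling. The tag name is made up for illustration.

```javascript
// An invented element: markup, scoped styles, and behaviour in one bundle.
class FancyDisclosure extends HTMLElement {
  constructor() {
    super();
    const shadow = this.attachShadow({ mode: 'open' }); // scoped CSS lives here
    shadow.innerHTML = `
      <style>
        button { font: inherit; }
      </style>
      <button>Toggle</button>
      <div hidden><slot></slot></div>
    `;
    const panel = shadow.querySelector('div');
    shadow.querySelector('button').addEventListener('click', () => {
      panel.hidden = !panel.hidden; // show or hide the slotted content
    });
  }
}
customElements.define('fancy-disclosure', FancyDisclosure);
```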

So Web Components make use of HTML, JavaScript, and (scoped) CSS. The possibility space for the HTML is infinite: if you need an element that doesn’t exist, you just invent it. The possibility space for the JavaScript is pretty close to infinite: it’s a Turing-complete language that can be wrangled to do just about anything. The possibility space for CSS isn’t infinite, but it’s pretty darn big: there’s not much you can’t do with it at this point.

What’s missing from that bundle of HTML, JavaScript, and CSS are hooks for assistive technology. Up until now, this is something we’ve mostly left to the browser. We don’t have to include any hooks for assistive technology when we use a select element because the browser knows what it is and can expose that knowledge to the assistive technology. If we’re going to start making up our own interface elements, we now have to take on the responsibility of providing that information to assistive technology.

How do we do that? Well, right now, our only option is to use ARIA …but the possibility space defined by ARIA is much, much smaller than HTML, JavaScript, or CSS.

That’s not a criticism of ARIA: that’s the way it was designed. It’s a reactionary technology, designed to plug the gaps where the native semantics of HTML just don’t cut it. The vocabulary of ARIA was created by looking at the kinds of interface elements people are making—tabs, sliders, and so on. That’s fine, but it can’t scale to keep pace with Web Components.
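To make the limitation concrete, here’s a sketch of the current workaround: an invented tab element borrowing one of ARIA’s predefined roles. This only works because ARIA happens to have a role for tabs; a genuinely novel widget has nothing to borrow. The names here are illustrative.

```javascript
// Wiring an invented element to assistive technology with today's ARIA vocabulary.
class MyTab extends HTMLElement {
  connectedCallback() {
    this.setAttribute('role', 'tab');            // borrow the predefined role
    this.setAttribute('aria-selected', 'false'); // expose state to assistive technology
    this.setAttribute('tabindex', '-1');         // focus is managed by script
  }
}
customElements.define('my-tab', MyTab);
```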

The problem that Web Components solve—the fact that it currently takes too long to get a new interface element into browsers—doesn’t have a corresponding solution when it comes to accessibility hooks. Just adding more and more predefined ARIA roles won’t cut it—we need some kind of extensible accessibility that matches the expressive power of Web Components. We don’t need a bigger vocabulary in ARIA, we need a way to define our own vocabulary—an extensible ARIA, if you will.

Hmmm… I’m still not sure I’m explaining myself very well.

Anyway, I just want to make sure that accessibility doesn’t get left behind (again!) in our rush to create a new solution to our current problems. With Web Components still in their infancy, this feels like the right time to raise these concerns.

That highlights another issue, one that Nicole picked up on. It’s really important that the extensible web community and the accessibility community talk to each other.

Frankly, the accessibility community can be its own worst enemy sometimes. So don’t get me wrong: I’m not bringing up my concerns about the accessibility of Web Components in order to cry “fail!”—I just want to make sure that it’s on the table (and I’m glad that Alex is one of the people driving Web Components—his history with Dojo reassures me that we can push the boundaries of interface widgets on the web without leaving accessibility behind).

Anyway …that’s enough about that. I haven’t mentioned all the other great discussions that took place at Edge Conference.

Developer Tooling

The Web Components panel was followed by a panel on developer tools. This was dominated by representatives from different browsers, each touting their own set of in-browser tools. But the person who I really wanted to rally behind was Kenneth Auchenberg. He quite rightly asks why our developer tools and our text editors are two different apps. And rather than try to put text editors into developer tools, what we really want is to pull developer tools into our text editors …all the developer tools from all the browsers, not just one set of developer tools from one specific browser.

If you haven’t seen Kenneth’s presentation from Full Frontal, I urge you to watch it or listen to it.

I had my hand up to jump into the discussion towards the end, but time ran out so I didn’t get a chance. Paul came over afterwards and asked what I was going to say. Here’s what I told him…

I’m fascinated by the social dynamics around how browsers get made. This is an area where different companies are simultaneously collaborating and competing.

Broadly speaking, the feature set of a web browser can be divided into two buckets:

In one bucket, you’ve got the support for standards like HTML, CSS, JavaScript. Now, individual browsers might compete on how quickly or how thoroughly they get those standards implemented, but at this point, there’s no disagreement about the fact that proprietary crap is bad, standards are good, and that no matter how painful the process can be, browser makers all need to get together and work on standards together. Heck, even Apple can’t avoid collaborating on this stuff.

In the other bucket, you’ve got all the stuff that browsers compete against each other with: speed, security, the user interface, etc. A lot of this takes place behind closed doors, and that’s fine. There’s no real need for browser makers to collaborate on this stuff, and it could even hurt their competitive advantage if they did collaborate.

But here’s the problem: developer tools seem to be coming out of that second bucket instead of the first. There doesn’t seem to be much communication between the browser makers on developer tools. That’s fine if you see developer tools as an opportunity for competition, but it’s lousy if you see developer tools as an opportunity for interoperability.

This is why Kenneth’s work is so important. He’s crying out for more interoperability between browsers when it comes to developer tools. Why can’t they all use the same low-level APIs under the hood? Then they can still compete on how pretty their dev tools look, without making life miserable for developers who want to move quickly between browsers.

As painful as it might be, I think that browser makers should get together in some semi-formalised way to standardise this stuff. I don’t think that the W3C or the WHATWG are necessarily the right places for this kind of standardisation, but any kind of official cooperation would be good.

Build Process

The panel on build processes for front-end development kicked off with Gareth saying a few words. Some of those words included the sentence:

Make is probably older than you.

Cue glares from me and Scott.

Gareth also said that making websites means making software. We’re all making software—live with it.

This made me nervous. I’ve always felt that one of the great strengths of the web has been its low barrier to entry. The idea of a web that can only be made by qualified software developers doesn’t sound like a good thing to me.

Fortunately, things got cleared up later on. Somebody else asked a question about whether the barrier to entry was being raised by the complexity of tools like preprocessors, compilers, and transpilers. The consensus of the panel was that these are power tools for power users. So if someone were learning to make a website from scratch, you wouldn’t start them off with, say, Sass, without first learning CSS.

It was a fun panel, made particularly enjoyable by the presence of Kyle Simpson. I like the cut of his jib. Alas, I didn’t get the chance to tell him that in person. I had to duck out of the afternoon’s panels to get back to Brighton due to unforeseen family circumstances. But I did manage to catch some of the later panels on the live stream.

Closing thoughts

A common thread I noticed amongst many of the panels was a strong bias for decentralisation, rather than collaboration. That was most evident with Web Components—the whole point is that you can make up your own particular solution rather than waiting for a standards body. But it was also evident in the Developer Tools line-up, where each browser maker is reinventing the same wheels. And when it came to Build Process, it struck me that everyone is scratching their own itch instead of getting together to work on a shared solution.

There’s nothing wrong with that kind of Darwinian approach to solving our problems, but it does seem a bit wasteful. Mairead Buchan was at Edge Conference too and she noticed the same trend. Sounds like she’s going to do something about it too.

The Session

When I was travelling back from Webstock in New Zealand at the start of this year, I had a brief stopover in Sydney. It coincided with one of John and Maxine’s What Do I Know? events so I did a little stint on five things I learned from the internet.

It was a fun evening and I had a chance to chat with many lovely Aussie web geeks. There was this one guy, Christian, that I was chatting with for quite a bit about all sorts of web-related stuff. But I could tell he wasn’t Australian. The Northern Ireland accent was a bit of a giveaway.

“You’re not from ‘round these parts, then?” I asked.

“Actually,” he said, “we’ve met before.”

I started racking my brains. Which geeky gathering could it have been?

“In Freiburg,” he said.

Freiburg? But that was where I lived in the ’90s, before I was even making websites. I was drawing a complete blank. Then he said his name.

“Christian!” I cried, “Kerry and Christian!”

With a sudden shift of context, it all fit into place. We had met on the streets of Freiburg when I was a busker. Christian and his companion Kerry were travelling through Europe and they found themselves in Freiburg, also busking. Christian played guitar. Kerry played fiddle.

I listened to them playing some great Irish tunes and then got chatting with them. They didn’t have a place to stay so I offered to put them up. We had a good few days of hanging out and playing music together.

And now, all these years later, here was Christian …in Sydney, Australia …at a web event! Worlds were colliding. But it was a really great feeling to have that connection between my past and my present; between my life in Germany and my life now; between the world of Irish traditional music and the world of the web.

One of the other things that connects those two worlds is The Session. I’ve been running that website for about twelve or thirteen years now. It’s the thing I’m simultaneously most proud of and most ashamed of.

I’m proud of it because it has genuinely managed to contribute something back to the tradition: it’s a handy resource for trad players around the world.

I’m ashamed of it because it has been languishing for so long. It has so much potential and I haven’t been devoting enough time or energy into meeting that potential.

At the end of 2009, I wrote:

I’m not going to make a new year’s resolution—that would just give me another deadline to stress out about—but I’m making a personal commitment to do whatever I can for The Session in 2010.

Well, it only took me another two years but I’ve finally done it.

I’ve spent a considerable portion of my spare time this year overhauling the site from the ground up, completely refactoring the code, putting together a new mobile-first design, adding much more location-based functionality and generally tilting at my own personal windmills. Trying to rewrite a site that’s been up and running for over a decade is considerably more challenging than creating a new site from scratch.

Luckily I had some help. Christian, for example, helped geocode all the sessions and events that had been added to the site over the years.

That’s one thing that the worlds of Irish music and the web have in common: people getting together to share and collaborate.

To CERN with love

I went to Switzerland yesterday. More specifically, Geneva. More specifically, CERN. More specifically, ATLAS. Tireless Communications Officer Claudia Marcelloni went out of her way to make sure that I had a truly grand tour of life at CERN.


CERN is the ultimate area of overlap in the Venn diagram of geek interests: the place where the World Wide Web was invented while people were working on cracking the secrets of the universe.

I saw the world’s first web server—Tim Berners-Lee’s NeXT machine. I saw the original proposal for the World Wide Web, complete with the note scribbled across the top “vague but exciting.”


But I understand what James meant when he described the whole web thing as a sideshow to the main event:

Because, you know the web is cool and all, but when you’re trying to understand the fundamental building blocks of the universe and constructing the single greatest scientific instrument of ours and perhaps any civilisation, the whole modern internet is a happy side effect, it is a nice to have.

The highlight of my day was listening to Christoph Rembser geek out about his work: hunting for signs of elusive dark matter by measuring missing momentum when smashing particles together near the speed of light in a 27 kilometre wide massive structure 100 metres underneath France and Switzerland, resulting in incredible amounts of data being captured and stored within an unimaginably short timescale. Awesome. Literally, awesome.


But what really surprised me at CERN wasn’t learning about the creation of the web or learning about the incredible scientific work being done there. As a true-blooded web/science nerd, I had already read plenty about both. No, what really took me by surprise was the social structure at CERN.

According to most established social and economic theory, nothing should ever get done at CERN. It’s a collection of thousands of physics nerds—a mixture of theorists (the ones with blackboards) and experimentalists (the ones with computers). When someone wants to get something done, they present their ideas and ask for help from anyone with specific fields of expertise. Those people, if they like the sound of the idea, say “Okay” and a new collaboration is born.

That’s it. That’s how stuff gets done. It’s like a massive multiplayer hackday. It’s like the ultimate open source project (and yes, everything, absolutely everything, done at CERN is realised publicly). It is the cathedral and it is the bazaar. It is also the tower of Babel: people from everywhere in the world come to this place and collaborate, communicating any way they can. In the canteen, where Nobel prize winners sit with students, you can hear a multitude of accents and languages.

CERN is an amazing place. These thousands of people might be working on completely different projects, but there’s a shared understanding and a shared ethos amongst every one of them. That might sound like a flimsy basis for any undertaking, but it works. It works really, really well. And this isn’t just any old undertaking—they’re not making apps or shipping consumer products—they’re working on the most important questions that humans have ever attempted to answer. And they’re doing it all within a framework that, according to conventional wisdom, just shouldn’t work. But it does work. And that, in its own way, is also literally awesome.

Christoph described what it was like for him to come to CERN from Bonn, the then-capital of West Germany. It was 1989, a momentous year (and not just because Tim Berners-Lee wrote Information Management: A Proposal). Students were demonstrating and dying in Tiananmen Square. The Berlin wall was coming down (only later did I realise that my visit to CERN took place on October 3rd, Tag der Deutschen Einheit). At CERN, Christoph met Chinese students, Russian scientists, people from all over the world transcending their political differences to collaborate on truly fundamental questions. And he said that when people returned to their own countries, they surely carried with them some of that spirit that they had experienced together at CERN.

Compared to the actual work going on at CERN, that idea is a small one. It may not be literally awesome …but it really resonated with me.

I think I understand a little better now where the web comes from.

I approve of this message

The good new days

I’m continually struck by a sense of web design deja vu these days. After many years of pretty dull stagnation, things are moving at a fast clip once again. It reminds me of the web standards years at the beginning of the century—and not just because HTML5 Doctor has revived Dan’s excellent Simplequiz format.

Back then, there was a great spirit of experimentation with CSS. Inevitably the experimentation started on personal sites—blogs and portfolios—but before long that spirit found its way into the mainstream with big relaunches like ESPN, Wired, Fast Company and so on. Now I’m seeing the same transition happening with responsive web design and, funnily enough, I’m seeing lots of the same questions popping up:

  • How do we convince the client?
  • How do we deal with ad providers?
  • How will the CMS cope with this new approach?

Those are tricky questions but I’m confident that they can be answered. The reason I feel so confident is that there are such smart people working on this new frontier.

Just as we once gratefully received techniques like Dave’s CSS sprites and Doug’s sliding doors, now we have new problems to solve in fiendishly clever ways. The difference is that we now have Github.

Here’s a case in point: responsive images. Scaling images is all well and good but beyond a certain point it becomes overkill. How do we ensure that we’re serving up appropriately-sized images to various screen widths?

Scott kicked things off with his original code, a clever mixture of JavaScript, cookies, .htaccess rules and the data- HTML5 attribute prefix. Crucially, this technique is using progressive enhancement: the smaller image is the default; the larger image only gets swapped in when the screen is wide enough. Update: and Scott has just updated the code to remove the data-fullsrc usage.
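In case you haven’t dug into the code, here’s a much-simplified sketch of the original flavour of the technique (not Scott’s actual implementation; the attribute name and breakpoint are illustrative):

```javascript
// Progressive enhancement: the small image is the markup default;
// trade up to the larger source only when the screen is wide enough.
document.querySelectorAll('img[data-fullsrc]').forEach((img) => {
  if (screen.width > 480) {
    img.src = img.dataset.fullsrc;
  }
});
```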

Mark was able to take Scott’s code and fork it to come up with his own variation which uses less JavaScript.

Andy added his own twist on the technique by coming up with a slightly different solution: instead of looking at the width of the screen, take a look at the width of the element that contains the image. Basically, if you’re using percentages to scale your images anyway, you can compare the offsetWidth of the image to the declared width of the image and if it’s larger, swap in a larger image. He has written up this technique and you can see it in action on the holding page for this September’s Brighton Digital Festival.
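Something along these lines, though Andy’s actual code differs and the attribute name is again illustrative:

```javascript
// Compare the width the image is displayed at to the width declared in its
// markup; if the layout is stretching it, swap in the larger file.
document.querySelectorAll('img[data-fullsrc]').forEach((img) => {
  const declaredWidth = parseInt(img.getAttribute('width'), 10);
  if (img.offsetWidth > declaredWidth) {
    img.src = img.dataset.fullsrc;
  }
});
```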

I particularly like Andy’s Content First approach. The result is that sometimes a large screen width might mean you actually want smaller images (because the images will appear within grid columns) whereas a smaller screen, like maybe a tablet, might get the larger images (if the content is linearised, for example). So it isn’t the width of the viewport that matters; it’s the context within which the image is appearing.

All three approaches are equally valuable. The technique you choose will depend on your own content and the specific kind of problem you are trying to solve.

The Mobile Safari orientation and scale bug is another good example of a crunchy problem that smart people like Shi Chuan and Mathias Bynens can tackle using the interplay of blogs, Github and to a lesser extent, Twitter. I just love seeing the interplay of ideas that cross-pollinate between these clever-clogged geeks.

Sea change

Every now and then I come across a site that reminds me of just why I love this sad and beautiful world wide web: a site with that certain intangible webbiness.

Wikipedia has it. That’s a project that’s not just on the web, it’s of the web. It’s a terrible idea in theory, but an amazing achievement in practice. It restores my faith in humanity.

Kickstarter has it. The word disruptive is over-used in the world of technology, but I can’t think of a better adjective to describe Kickstarter …except, perhaps, for empowering. There’s something incredibly satisfying about contributing directly to someone’s creative output.

Old Weather has it. It’s the latest project from the magnificent Zooniverse crew, the people behind the brilliant Galaxy Zoo.

Old Weather is another collaborative project. Everyone who takes part is presented with a scanned-in page from a ship’s logbook from the early 20th Century. The annotations on the pages aren’t machine-readable but the human brain does an amazing job of discerning the meaning in the patterns of markings made with pen on paper (and if you need help, there are video tutorials available).

Converting this data from analogue paper-based databases into a digital database online would in itself be a worthy goal, but in this case, the data is especially valuable:

These transcriptions will contribute to climate model projections and improve a database of weather extremes. Historians will use your work to track past ship movements and the stories of the people on board.

But the value is not just in contributing to a worthy cause; it’s also great fun. It makes excellent use of the cognitive surplus that Clay Shirky talks about.

What a shame that the situation in which we are most often called upon to demonstrate our humanity is through the vile CAPTCHA, a dreadful idea that is ironically dehumanising in its implementation.

I’d much rather have people prove their species credentials with a more rewarding task. Want to leave a comment? First you must calculate the optimum trajectory for a Jupiter flyby, categorise a crater on the moon, spot a coronal mass ejection, or tell me if you live in fucking Dalston.