Tuesday, October 17, 2017

Testing Thinking, Anyone?

At Quality Jam London 2017 I was introduced to the term design thinking. It sounded interesting. I looked it up when I got home, spoke to Rog, our UX specialist at work, and read a couple of references that he provided. Jared Spool's Shh! Don’t Tell Them There’s No Magic In Design Thinking was particularly intriguing:
For decades, I’ve needed to do what every seasoned design professional has found themselves doing: explaining why design is more than just making something pretty. When I’ve worked with other designers, they get it. 
But once someone who isn’t a designer — someone who is a layperson — is introduced into the mix, I’ve found I need to convince them that design isn’t only about making the thing pretty. That it’s about solving problems. That it’s about end-to-end solutions. 
The phrase design thinking changed all that. To a layperson, it was completely new. While it was made up of words they thought they knew, the combination was novel. “Design thinking? What’s that?”
Adding the word ‘thinking’ to ‘design’ was a brilliant move. David Kelley and Tim Brown, the founders of IDEO who popularized the term, were smart to take advantage of the unfamiliarity of the phrase.
To those of us who’ve been doing this for a long time, design thinking doesn’t mean anything new. But it also doesn’t mean ‘make it pretty.’ And that’s why it works.

At Quality Jam London 2017 I listened to Tony Bruce talking about manual testing. From my notes:
in Manual Testing is Dead. Long Live Manual Testing, [Bruce] called for testers to set the expectations of the people that they interact with. The term "manual testing" undersells what testing is, or can be, with its connotations of manual labour, unthinking monotony, apparent separation from (woo! sexy!) automation and the historical association with scripted test cases.
So he might refer to using questions, experiments, exploration, engagement, surveys, investigation, tools (which includes woo! sexy! automation), spending time thinking, iterating, and adjusting to new data. And he'll report on what he found, but not necessarily what he produced ... It's all testing, and it's on testers to explain and demonstrate how and why and what value it delivers.

Words are more than just a collection of letters on a page or vibrations in the air. Words are powerful and subtle and important. Words have emotional effects and social impacts, and they'll differ for different listeners (connoisseurs and consumers, for instance) and at different times. Words come with baggage and consequences, some of which will align with the speaker's intent, and some of which will not. Words can be freedom and words can be prison.

But blah blah semantics whatever. Testing Thinking, anyone?
Image: https://flic.kr/p/5bfq6t

Saturday, October 14, 2017

Heads Up

I tell you what: of all the things I might've expected to see on the first slide at Quality Jam London 2017, my own professional-work-photo-grinning, shiny-pated, blue-tinted face peering back down at me from behind a massive Thank You! wasn't it.

Expectations are grist to the working tester's mill, yet also often the bane of their lives. Tony Bruce, in Manual Testing is Dead. Long Live Manual Testing, called for testers to set the expectations of the people that they interact with. The term "manual testing" undersells what testing is, or can be, with its connotations of manual labour, unthinking monotony, apparent separation from (woo! sexy!) automation and the historical association with scripted test cases.

For Bruce, testing is "the pursuit of information" but he doesn't necessarily rush into meetings spouting from that kind of lexicon (although he's singing my kind of song right there). Instead he promotes the use of PAC (purpose, audience, context) to guide conveying the desired message in a way that a recipient can take on board.

So he might refer to using questions, experiments, exploration, engagement, surveys, investigation, tools (which includes woo! sexy! automation), spending time thinking, iterating, and adjusting to new data. And he'll report on what he found, but not necessarily what he produced: "I discovered that the server will crash when there are 17 simultaneous requests" over "I filed a bug about a server crash". It's all testing, and it's on testers to explain and demonstrate how and why and what value it delivers.

In her talk, Succeeding as an Introvert, Elizabeth Zagroba led us through the traits she has that characterise her introversion and then gloried in the ways that she has found to harness them, to have them contribute to her success as a tester.

She's a passionate spectator, but that makes her focused, good on detail, and a great listener. Her preference for preparation helps give her opportunity to spend time asking "what if?", really think hard about things, and not rush into rash actions. If you're worried about her apparent reserve then consider that she says nothing when there's nothing to say, that she's comfortable in herself, and won't give in to peer pressure.

Strategies for dealing with the day-to-day were plentiful: find yourself an extrovert ally to perhaps open a spot in a boisterous conversation for you, be the scribe and write your comments direct into meeting notes (projected live or afterwards before circulation), allow yourself headspace by asking "can I get back to you?", carve yourself out some recharging time by booking solo meetings in another part of the building, ask for what you need (or risk not getting it). And, finally, some advice for extroverts: treat others how they want to be treated. (But, in fact, I think that's good advice for verts of any flavour.)

When asked how testing can be improved in an organisation, Keith Klain reaches for a simple model based on the kind of iron triangles you'll have seen in countless slide decks and blog posts. He wants to know how the notion of quality in the test team diverges from that of the business, he looks for transparency and honesty in the cost of testing activity, and he reviews time efficiency across the board.

His Debugging Your Testing Team covers the three main failings that he's seen this kind of analysis expose:
  • People who don't think about their work. This leads to wasted effort, poor coverage, bad feelings between team members, and worse. It can perhaps be turned around by finding ways to engage your staff, by coaching interactional expertise to enable them to converse on a level with others, by helping them to explore their craft.
  • People who don't trust their test team. For Klain, this is typically a process issue and addressing it requires reviewing attitudes, being open-minded about changing things, taking care to understand (to link back to Tony Bruce's talk) which nuanced version of X a stakeholder is thinking of when they claim X ... or Y, asking why (for example) particular metrics are being collected, and who is consuming them.
  • People who don't like testing. Technology is frequently at the heart of this problem, he finds. In particular a fetishisation of tools. Satisfying as it might be, automating all the test cases is not necessarily testing, which Klain views as a social activity. Step back and consider what the right thing to do is, for who, at this time.
Having given his common diagnosis, he proceeds to give his common prescription:
  • Focus on business risk
  • Make objectives based on business-aligned principles
  • Fire your test managers
Fire your test managers? I think I'd better keep my head down.

See also: my sketchnotes from Quality Jam London 2017.

Wednesday, October 11, 2017

Going Underground

Hands up if you're suspicious of vendor-run conferences. Yeah, me too. But I'm pleased to report that QASymphony didn't ram qTest down our throats at Quality Jam London 2017. Better still, they had a programme that included some speakers who had (a) nothing to do with the tools, and (b) interesting stuff to say. 

At 100 people, and with a single track, it was a good size and shape for my taste, and having the talks in the (quite cosy) repurposed subterranean beer tank of an old Whitbread brewery added to the atmosphere and charm.

I took the opportunity to practise sketchnoting again in all the talks. Mixed results, for me, but here are the ones I'm prepared to share. See my notes on the conference too.

Wednesday, October 4, 2017

assert(Assertive == True)

We've just run a two-part assertiveness training course for my Test and Doc teams at Linguamatics. What exactly is assertiveness? you ask. From our trainer's web site:
Assertiveness is a highly effective communications model ... Assertive behaviour is professional behaviour. It is about being able to express yourself calmly and clearly and on equal terms with others; to stand your ground when necessary without becoming aggressive, or manipulative, or backing down unnecessarily.
I found it interesting and engaging, and with some immediate practical value: to create a drill to be practised and used in situations where I think I'm at risk of becoming non-assertive, and to remind myself that even if the outcome of some interaction isn't ideal I can still try to give myself credit for how I behaved during it. I also enjoyed trying to fit ABCDE (always be calm, direct, and equal) into the behaviour space I've come to by other means, such as congruent communication and Satir's interaction model.

As an aside, during the workshops I practised the sketchnoting techniques I picked up at DDD! the other week. One sketch is at the top, the other here:

Wednesday, September 27, 2017

Cambridge Lean Coffee

This month's Lean Coffee was hosted by DisplayLink. Here's some brief, aggregated comments and questions on topics covered by the group I was in.

Are testers doing less and less testing?

  • The questioner is finding that testers today are doing more "other" activities than he was in his early days of testing.
  • Where's the right balance between testing and other stuff?
  • What's your definition of testing?
  • I think that exploring ideas is testing.
  • I fall into a "support" role for the team; I'm the "glue" in the team, often.
  • I focus on the big picture.
  • I am thinking about what needs to be ready for the next phase, and preparing it.
  • I am thinking about information gathering and communication to stakeholders.
  • Is there a contradiction: testers are a scarce resource, but they're the ones doing "non-core" activities?
  • Perhaps it's not a contradiction? Perhaps testers are making themselves a scarce resource by doing that other stuff?
  • Doing other stuff might be OK, but you want others to take a share of it.
  • Doing some other stuff is OK, but perhaps not all of the housekeeping.
  • I want to focus on testing, not housekeeping.
  • Seniority is one of the reasons you end up doing less testing.
  • Less testing, or perhaps you have less engagement with the product?
  • I am doing more coaching of developers these days, and balancing that with exploratory testing.
  • In the absence of an expert, people expect the tester to take a task on.
  • Are developers more hesitant to take on other tasks, generally?
  • Or is the developer role just so much better defined that it's not asked of them?
  • Are you sure developers aren't doing non-development work? What about DevOps?
  • The tester contract at my work includes that testers will support others.
  • Is there a problem here? Perhaps the role is just changing to fit business needs?

How do you differentiate between a test plan and a test strategy?

  • What are they? What are the differences between them?
  • Why does it matter?
  • Plan: more detailed, acceptance criteria, specific cases, areas.
  • Strategy: high-level, test approach, useful for sharing with non-testers.
  • ... but most people don't care about this detail.
  • Do any of the participants here have required documents called Test Plan and Test Strategy on products? (Some did.)
  • Most projects have a place for strategy and tactics.
  • ... and the project context affects the division of effort that should be put into them.
  • ... and ideally the relationship between them is one of iteration.
  • Ideally artefacts are not produced once up-front and then never inspected again.
  • You might want some kind of artefact to get customer sign-off.
  • Your customer might want to see some kind of artefact at the end.
  • ... but isn't that a report of what was done, not what was planned (and probably not done)?

Can testing be beautiful?

  • When it returns stakeholder value efficiently.
  • When you've spent time testing something really well and you get no issues in production.
  • When you identify issues that no-one else would find.
  • When others think "I would never have found that!"
  • When the value added is apparent.
  • When you can demonstrate the thought process.
  • When you make other people aware of a bigger picture.
  • When you uncover an implicit customer requirement and make it explicit.
  • When you keep a lid on scope, and help to deliver value because of it.
  • OK, when is testing ugly?
  • When you miss issues and there's a bad knock-on effect.
  • When you have reams of test cases. In Excel.
  • When testing is disorganised, lacking in direction, lacking in focus, unprofessional.
  • Is beauty in the actions or the result?
  • Is beauty in the eye of the artist, or the audience?
  • Or both?

What is a tester's role in a continuous delivery pipeline?

  • When the whole pipeline is automated, where do testers fit?
  • There's an evolution in the tester skillset; the context is changing.
  • Shift left?
  • Shift in all directions!
  • Testing around and outside the pipeline.
  • Asking where the risks are.
  • Analysing production metrics.
  • Are we regressing? Isn't CD a kind of waterfall?
  • Less a line from left to right, and more a cascade flowing through gates?
  • ... perhaps, but the size of the increments is significant.

Image: https://flic.kr/p/6tZUfG

Sunday, September 17, 2017

Developer! Developer! Developer! Tester!

Last weekend, one of the testers from my team was speaking at Developer! Developer! Developer! East Anglia, a .NET community event. Naturally, because I'm a caring and supportive manager — and also as it was in Cambridge, and free — I went down to have a look. Despite not being a developer, it wasn't hard to find something of interest in all five sessions, although it's a shame the two talks on testing were scheduled against each other. Here's thumbnails from my notes.

Building a better Web API architecture using CQRS (Joseph Woodward): Command Query Responsibility Segregation is a design pattern that promotes the separation of responsibility for reading data (query) from writing data (command). A useful intro to some concepts and terminology for me, it went deeper into the code than I generally need, and in a language and libraries that I'm unfamiliar with. I found Martin Fowler's higher-level description from 2011 more consumable.

I do like listening to practitioners talk about topics they care about, though. Particularly enjoyable here was the shared relief in the room when it became apparent that many of the attendees, despite being advocates of CQRS, found that they all violate a core principle (commands don't return data) in favour of practicality from time to time (e.g. when creating users and returning a unique ID).
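Not being a .NET person, I find it easier to see the shape of CQRS in a few lines of code than in a slide of architecture boxes. Here's a minimal sketch of the idea, in Python rather than Joseph's C# (the class and method names are mine, purely illustrative):

```python
# A minimal CQRS sketch: commands change state and return no data;
# queries return data and never change state. Names are illustrative.

class UserStore:
    """Shared state that commands write to and queries read from."""
    def __init__(self):
        self.users = {}

class CreateUserCommand:
    """Command: mutates the store, returns nothing."""
    def __init__(self, store):
        self.store = store
    def execute(self, user_id, name):
        self.store.users[user_id] = name  # write only, no return value

class GetUserQuery:
    """Query: reads from the store, never mutates it."""
    def __init__(self, store):
        self.store = store
    def execute(self, user_id):
        return self.store.users.get(user_id)

store = UserStore()
CreateUserCommand(store).execute(1, "alice")
print(GetUserQuery(store).execute(1))  # alice
```

The room's shared confession maps directly onto this sketch: the pragmatic violation is making `execute` on the create command return the new user's ID, which is convenient but breaks the "commands don't return data" principle.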

Client-side web performance for back-end developers (Bart Read): Chrome's developer tools got a massive big-upping, particularly the memory monitoring and task manager. Bart provided an interesting list of heuristics for improving user experience on the client side which included: load only enough to show the first thing that the user needs to see, do the rest lazily; inline what you can for that first load; if you can fit it into a single packet even better because that reduces the cost of latency; It's the latency, stupid; load all adverts last, really last, totally the last thing that you do on a page, honestly never do anything with adverts until you've done every other thing you have to do.

Visual note-taking workshop (Ian Johnson): I've thought a lot about my own note-taking over the years and I know that it's heavy on text. I'm very comfortable with that, but I like drawing and I'm interested in trying sketchnoting to see whether going out of my comfort zone can give me another perspective or perhaps technique to roll into my usual approach.

This talk was a primer: some basic iconography, some suggestions for placement (corners for metadata such as dates, speaker name, conference); thoughts on prominence (bold, colours, boxes, underlines, ...); reminders that sketch notes are not about realism; exhortations to just go for it and not worry if it doesn't work out; and this rule of thumb for ordering noting activity: content then boxes then colours. (Related material from Ian is here.)

Testing Demystified (Karo Stoltzenburg): Karo's talk is the reason I was at the conference but she's written about it already in Three Ways to get Started with (Exploratory) Testing as a non-Tester so I won't say more. I will, however, mention that I took the opportunity to practise my new-found sketchnoting skills in her talk. As expected, I found it hard to resist writing a lot of text.

Monitoring-First Development (Benji Weber): Unruly are an XP shop applying XP development practices in a wider context. In the product they'll write a failing test then code to make it pass, and in production they'll write a failing monitor (such as checking for equivalence between two data items using a tool such as Nagios) and then implement whatever functionality provides the data for the monitor. A neat idea, and it works in their context. (Similar content here in an earlier lightning talk by Benji.)
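As I understood it, the rhythm is the test-first cycle transplanted into production: write a monitor that fails, then build until it passes. A toy sketch of that loop, with an equivalence check between two data items standing in for a real monitoring tool like Nagios (the check and the names are mine, not Unruly's):

```python
# Monitoring-first, sketched: define a monitor (a check on production
# data) that fails, then implement the functionality that feeds it.

def monitor_totals_match(orders, ledger):
    """Monitor: the order totals and the ledger totals should agree."""
    return sum(orders) == sum(ledger)

orders = [10, 20, 5]
ledger = []  # functionality not implemented yet, so the monitor fails

assert not monitor_totals_match(orders, ledger)  # red: monitor failing

def post_to_ledger(orders):
    """The 'feature': whatever work provides the data the monitor needs."""
    return list(orders)

ledger = post_to_ledger(orders)
assert monitor_totals_match(orders, ledger)  # green: monitor passing
print("monitor passing")
```

The appeal, as with test-first, is that you've defined what "working in production" means before you've written the thing that's supposed to work there.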

I was really impressed with DDD: 300 attendees, friendly atmosphere, just enough organisation, free, and good spread of talks.
Image: DDD!

Monday, September 11, 2017

Forget It

Now where was I?

Oh yes, The Organized Mind by erm ... hold on a minute ... it's on the tip of my tongue ... err ... ummm ... there you go: The Organized Mind by Daniel Levitin, a self-subtitled guide to thinking straight in the age of information overload. And surely we all need a bit of that, eh?

One of the most productive ways to get your mind organised, according to Levitin, is to stop trying to organise your mind (p. 35):
The most fundamental principle of the organized mind, the one most critical to keeping us from forgetting or losing things, is to shift the burden of organizing from our brains to the external world ... This is not because of the limited capacity of our brains — rather it's because of the nature of memory storage and retrieval in our brains.
Essentially, memory is unreliable. There are numerous reasons for this, including: novelty being prioritised over familiarity, successful recall being reliant on having a suitable cue, and — somewhat scarily — that the act of remembering can itself cause memories to change.

To get around this, Levitin favours lodging information you'll need later somewhere outside of your head, in the place that you need it, in a form that'll help you to use it straight away. He likens this to affordances as described by Don Norman in his book The Design of Everyday Things which I blogged about in Can You Afford Me?  From there:
an affordance is the possibility of an action that something provides, and that is perceived by the user of that thing. An affordance of a chair is that you can stand on it. The chair affords (in some sense is for) supporting, and standing on it utilises that support.
Failing to externalise can lead to competition for mental resources when a new primary task comes along but in the background your mind is rehashing earlier ones (p. 68):
... those thoughts will churn around in your brain until you deal with them somehow. Writing them down gets them out of your head, clearing your brain of the clutter that is interfering with being able to focus on what you want to focus on.
Perhaps you're worried that too much organisation inhibits creativity? Quite the opposite, Levitin claims (p. 87):
Finding things without rummaging saves mental energy for more important and creative tasks. 
Which brings me to testing and a conversation I was having with one of my team a couple of weeks ago. In it, we agreed that experience has taught us to prefer earlier rather than later organisation of our test notes, the data we're collecting, and our ideas for other possible tests.

I've also written at some length about how I deposit thoughts in files, arranged in folders for particular purposes such as 1-1 meetings, testing experiments, or essay ideas where they may later be of value (e.g. Taking Note and A Field of My Stone).

Even posts here on Hiccupps serve a related purpose. I find that I don't easily remember stuff, but curating material that I find interesting and writing about it, and adding tags, and cross-referencing to other thoughts helps me to retain and reinforce and later recall it.

Try to keep everything in my head? Forget it.
Image: Amazon