Archive | Websites

The London Zine Symposium

30 Mar

12pm–6pm, Sunday 17th April 2011

at The Rag Factory,
Heneage Street, London E1 5LJ

Entry free/ donation


About the Symposium

The 7th annual London Zine Symposium will be a day celebrating zines, small press, comix, and radical and DIY culture.


First stalls confirmed for 2011



With just under six weeks to go before the big day we’re happy to announce the first confirmations of stalls for the 2011 London Zine Symposium. 2011 will once again see over sixty distros, collectives and individual zine makers selling their wares on the day. This year we have people coming from Finland, Germany, Holland, Belgium, Italy and across the UK. It’s going to be an amazing spread of different zines to pick up. You can see full details of the first confirmations at

We’ve also announced the early details of our workshops, walks and talks. The Anarchist Teapot are returning to feed us all, and Past Tense are once again going to be running their popular radical history walk around the East End. You can see full details at:



A case of never letting the source spoil a good story

23 Mar


By Ben Goldacre

Perhaps it’s too embarrassing for some writers to risk linking to primary sources that readers can check for themselves
Wind farms have been blamed for the stranding of whales, according to a distorted story in the Daily Telegraph which was later retracted. Photograph: Christopher Thomond

Why don’t journalists link to primary sources? Whether it’s a press release, an academic journal article, a formal report or perhaps (if everyone’s feeling brave) the full transcript of an interview, the primary source contains more information for interested readers, it shows your working, and it allows people to check whether what you wrote was true. Perhaps linking to primary sources would just be too embarrassing. Here are three short stories.

This week the Telegraph ran the headline “Wind farms blamed for stranding of whales”. It continued: “Offshore wind farms are one of the main reasons why whales strand themselves on beaches, according to scientists studying the problem.” Lady Warsi even cited this as fact on the BBC’s Question Time this week, while arguing against wind farms.

But anyone who read the open-access academic paper in PLoS One, titled “Beaked whales respond to simulated and actual navy sonar”, would see that the study looked at sonar and didn’t mention wind farms at all. At our most generous, the Telegraph story was a spectacular and bizarre exaggeration of a brief contextual aside about general levels of manmade sound in the ocean by one author at the end of the press release (titled “Whales ‘scared’ by sonars”). Now, I have higher expectations of academic institutions than media ones, but this release didn’t mention wind farms, certainly didn’t say they were “one of the main reasons why whales strand themselves on beaches”, and anyone reading the press release could see that the study was about naval sonar.

The Telegraph article was a distortion (now retracted), perhaps driven by its odder editorial lines on the environment, but my point is this: if we had a culture of linking to primary sources, if they were a click away, then any sensible journalist would be too embarrassed to see this article go online. Distortions like this are only possible, or plausible, or worth risking, in an environment where the reader is actively deprived of information.

Sometimes the examples are sillier. Professor Anna Ahn published a paper recently showing that people with shorter heels have larger calves. For the Telegraph this became “Why stilettos are the secret to shapely legs”, for the Mail “Stilettos give women shapelier legs than flats”, for the Express “Stilettos tone up your legs”.

Yet anybody who read even just the press release would immediately see that this study had nothing whatsoever to do with shoes. It didn’t look at shoe heel height; it looked at anatomical heel length, the distance from the back of your ankle joint to the insertion of the Achilles tendon. It was just an interesting, nerdy insight into how the human body is engineered: if you have a shorter lever at the back of your foot, you need a bigger muscle in your calf. The participants were barefoot.

Once more this story was a concoction by journalists, but no journalist would have risked writing that the study was about stilettos if they’d had to link to the press release – they’d have looked like idiots, and fantasists, to anyone who bothered to click.

Lastly, on Wednesday the Daily Mail ran with the scare headline “Swimming too often in chlorinated water ‘could increase risk of developing bladder cancer’, claim scientists”. There’s little point in documenting the shortcomings of Mail health stories any more, but suffice to say, while the story purported to describe a study in the journal Environmental Health, anyone who read the original paper, or even the press release, would see immediately that bladder cancer wasn’t measured, and the Mail’s story was a simple distortion.

Of course, this is a problem that generalises well beyond science. Over and again, you read comment pieces that purport to be responding to an earlier piece, but distort the earlier arguments, or miss out the most important ones: they count on it being inconvenient for you to check. There’s also an interesting difference between different media: most bloggers have no institutional credibility, so they must build it by linking transparently and allowing you to double-check their work easily.

But more than anything, because linking to sources is such an easy thing to do and the motivations for avoiding links are so dubious, I’ve detected myself using a new rule of thumb: if you don’t link to primary sources, I just don’t trust you.

Next up, what you see is what you get.

16 Mar


This post continues the discussion about the tool we developed for Split Second. Once you get past stressing and (possibly) scrolling in the timed trial, the tool asks you to slow down and consider a work in various ways prior to rating it. What you may not know is that different people are randomly assigned to groups and asked different things during these stages, so your own experience is often different from other participants’.

Section two of the tool is designed to get you thinking in various capacities about a work prior to rating it. Participants are split into six groups and each group is given a question or activity about ten works. Either you are asked one of five questions (shown below) or you are just given the rating scale alone.

Which activity did you get? Participants are split into groups and each group is assigned a task to complete prior to rating. (Image: Intoxicated Woman at a Window, Northern India, 79.285)
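The random split described above can be sketched in a few lines. This is a hypothetical illustration, not the actual code behind Split Second; the group labels and the `assign_group` helper are invented for the example.

```python
import random

# Hypothetical labels for the six groups: five question/activity groups
# plus one group that sees the rating scale alone.
GROUPS = ["question_1", "question_2", "question_3",
          "question_4", "question_5", "rating_only"]

def assign_group(participant_id: str) -> str:
    """Randomly assign a participant to one of the six groups.

    Seeding the generator with the participant's id keeps the split
    random across participants but stable across repeat visits.
    """
    rng = random.Random(participant_id)
    return rng.choice(GROUPS)
```

With a split like this, each participant consistently lands in one group, so comparisons between groups measure the effect of the activity rather than of shifting assignments.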

In terms of the data coming back to us, we’ll be looking at a lot of different aspects. Do any of the activities have an effect on the eventual rating? How widely do answers vary to these questions? Do participants bail from the tool and, if so, which question/activity triggers this? How long do participants spend with works prior to answering and rating?

The third section of the tool is a bit of an information showdown.  Unlike the first section where we are looking for gut reactions or the second which gauges whether thinking/participation has an impact on rating, this final section looks at how given information may change things.  This time, we are specifically looking at the information that the institution produces to see how effective it is (or isn’t).

Participants are randomly split into one of three groups and presented with ten objects. Some people only see the object’s caption, others are given tags to consider, and the final group gets what we think of as the museum “gold,” the interpretive label. I’ve spoken to more than a few participants who were really disappointed when they were randomly selected to review just tags or captions; don’t you just love it? Folks being disappointed that they can’t dig into label copy is a bit of a trip.

Participants are split into groups, with some reviewing typical caption information, tags or label copy. (Image: Intoxicated Woman at a Window, Northern India, 79.285)

In this activity we are measuring a few things. Most obviously, which type of information changes ratings. Less obviously, we’ll be looking at the length and tone of label copy to see if ratings are affected simply by how we compose these materials. We’re also thinking about how long people linger with these materials prior to rating. We’ve got a chance to look at tagging in a new light, too. We know tagging has a positive effect on our collection’s searchability, but do tags as information sitting on a page help or hurt a participant’s rating of an object?

The rating scale used in both of these sections is also worth noting because it’s a notch above what we used for Click!. In both Click! and Split Second, we recognized participants were rating art and, given its many complexities, wanted to stay away from simple 1–10 or five-star scales. In both cases, we implemented a slider with some general direction, but otherwise wanted to give folks as much granularity as possible. Split Second’s slider differs from Click!’s in that there’s no fixed starting position for the slider mark itself. With Click!, the slider was fixed in the center and then moved by the participant.

Click! rating tool: the slider used in Click! was fixed at the center position until moved by the participant.

In Split Second, the slider isn’t fixed until a participant hovers, which encourages participants to move from center and use the breadth of the scale.

Split Second rating tool: the slider is unfixed until hover, and participants are encouraged to use the breadth of the scale.
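The difference between the two sliders can be modeled as a tiny state machine. This is a minimal sketch under assumed details (a 0–100 scale, and class and method names invented for illustration), not the tool’s real implementation:

```python
class UnfixedSlider:
    """Split Second-style slider: the handle has no position until the
    participant hovers, so there is no midpoint to anchor ratings on."""

    def __init__(self, minimum: int = 0, maximum: int = 100):
        self.minimum = minimum
        self.maximum = maximum
        self.value = None  # unfixed until the participant interacts

    def _clamp(self, position: int) -> int:
        # Keep the handle inside the scale's bounds.
        return max(self.minimum, min(self.maximum, position))

    def hover(self, position: int) -> None:
        # The first hover places the handle wherever the pointer is.
        if self.value is None:
            self.value = self._clamp(position)

    def drag(self, position: int) -> None:
        self.value = self._clamp(position)
```

A Click!-style slider would instead initialize `self.value` to the midpoint, which is exactly the anchoring effect the unfixed design tries to avoid.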

This is a subtle change that will likely have a big impact, and many thanks go to Beau for the idea and implementation. As simple as the tool appears, there’s a lot of complexity behind the scenes, and Beau and Paul have done incredible work as the team behind it.

The preliminary data is incredibly rich and the questions and ideas that I’ve talked about here barely scratch the surface of what we are seeing, so stay tuned for more.  If you’ve not taken part in Split Second yet, you’ve got until April 14—go for it!

What Designers Should Know About Visual Perception and Memory

15 Mar

Closeup of an eye

A very interesting post by Steven Bradley, published Monday, March 7th, 2011, at:

Here are some excerpts:

When a visitor lands on your web page and begins to look around, you hope your message is communicated clearly and understood. On the surface this may seem like a simple one-way transmission of ideas from your design to the viewer’s eye. The reality is more complex.

Visual perception is the result of complex interactions between external visual stimulus and prior knowledge, goals, and expectations. Understanding how we all perceive things visually will help designers communicate better.

This post will focus on the theory and science of visual perception and memory. Much of the information comes from the book Visual Language for Designers by Connie Malamed, which I recently read and recommend.

Visual Processing

Perception is the process of obtaining awareness and understanding of sensory data. We take in something visually and then need to process what we see in order to derive some meaning from it.

Our brains need to find meaningful patterns in our visual environment in order to make decisions about what to do and how to respond.

Human beings process sensory data in parallel as we interact with the world. Different regions of the brain are simultaneously activated through networks of neurons. This parallel processing allows visual perception to be both fast and efficient and it’s why designs can communicate quickly and efficiently as well.

Visual perception is a two-way street. We see small details in the environment and take them all in to see the whole. We also bring to our environment knowledge and specific goals that determine where we look and influence our interpretation of sensory data.

How we perceive things is a combination of both bottom-up and top-down processes.

Schematic of the eye

Bottom-Up Processing

The bottom-up process is driven by external stimuli.

The human fovea can only focus on a very small area at one time and we see through a series of saccadic movements of the eye.

We fixate on one location for a moment and then move on to the next fixation. We take in little at each fixation and it’s through a pattern of saccades that we take in our visual environment.

This all occurs quickly and early in the visual process, without any conscious effort or attention on our part. It happens so quickly we’re not even aware it’s happening.

At a glance we detect a number of visual attributes without conscious awareness.

Each of these can therefore be used to attract attention to something in your design.

This all occurs rapidly, helping us to recognize and identify objects on the page. This information is quickly passed to other areas of the brain and influences where we place our attention next.

'Everything is a matter of perception' standing out as red words in a sea of white letters

Top-Down Processing

The top-down process is driven by prior knowledge and expectations as well as our specific goals of the moment. What we know shapes our interpretation of the things we see. The task at hand influences where we look next.

We tend to disregard anything that isn’t meaningful or useful at the moment. In the image above the red letters spelling out “Everything is a matter of perception” clearly stand out due to the contrast in color.

Your mind is looking for words in a sea of letters as we generally expect a pattern of letters to form words and sentences, etc.

Most of the other letters fade into the background as you read the sentence in red.

Suppose, though, I asked you to find all the occurrences of the letter “P” in the image. Now as you scan the image the letter “P” should start to stand out a bit more, and it’s possible that even the highly visible red letters start to fade into the background. At the very least you likely aren’t noticing the words they spell out.

The task at hand is affecting your visual perception. You see more of what you’re looking for and less of what you aren’t. This top-down process so affects our visual perception that some suggest we see more with our minds than with our eyes.

What we know, what we expect, and what we want to do influence what we see.

Abstract illustration representing freedom of thought


Memory

We hold information in different kinds of memory. Sensory memory records fleeting impressions that last a few hundred milliseconds. This is long enough to retain the prominent features of what we see while we process them further.

When sensory information is auditory we call this echoic memory and when the information is visual we call it iconic memory.

Please read the post in full and write a comment about how this might affect your work/blog.

What is Design Thinking Anyway?

16 Feb

Roger Martin, from The Design of Business

Design thinking, as a concept, has been slowly evolving and coalescing over the past decade. One popular definition is that design thinking means thinking as a designer would, which is about as circular as a definition can be. More concretely, Tim Brown of IDEO has written that design thinking is “a discipline that uses the designer’s sensibility and methods to match people’s needs with what is technologically feasible and what a viable business strategy can convert into customer value and market opportunity.” [1] A person or organization instilled with that discipline is constantly seeking a fruitful balance between reliability and validity, between art and science, between intuition and analytics, and between exploration and exploitation. The design-thinking organization applies the designer’s most crucial tool to the problems of business. That tool is abductive reasoning.  

Don’t feel bad if you’re not familiar with the term. Formal logic isn’t systematically taught in our North American educational system, except to students of philosophy or the history of science. The vast majority of students are exposed to formal logic only by inference and then only to the two dominant forms of logic — deductive reasoning and inductive reasoning. Those two modes, grounded in the scientific tradition, allow the speaker to declare at the end of the reasoning process that a statement is true or false.

Deductive logic — the logic of what must be — reasons from the general to the specific. If the general rule is that all crows are black, and I see a brown bird, I can declare deductively that this bird is not a crow.

Inductive logic — the logic of what is operative — reasons from the specific to the general. If I study sales per square foot across a thousand stores and find a pattern that suggests stores in small towns generate significantly higher sales per square foot than stores in cities, I can inductively declare that small towns are my more valuable market.
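The inductive example above amounts to aggregating observations and generalizing from them. A sketch with entirely made-up figures (the store records and the `sales_per_sqft` helper are invented for illustration):

```python
# Hypothetical store records: (location_type, annual_sales, square_feet).
stores = [
    ("small_town", 480_000, 1_200),
    ("small_town", 390_000, 1_000),
    ("city",       610_000, 2_500),
    ("city",       540_000, 2_000),
]

def sales_per_sqft(records):
    """Aggregate sales and floor area by location type, then divide."""
    totals = {}
    for location, sales, sqft in records:
        s, a = totals.get(location, (0, 0))
        totals[location] = (s + sales, a + sqft)
    return {loc: s / a for loc, (s, a) in totals.items()}

rates = sales_per_sqft(stores)
# In this sample, small-town stores earn more per square foot than city
# stores, so we inductively infer (without proof) that small towns are
# the more valuable market.
```

Note the character of the conclusion: it generalizes from the observed sample to all stores, which is exactly what makes induction powerful and, unlike deduction, fallible.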

Deduction and induction are reasoning tools of immense power. As knowledge has advanced, our civilization has accumulated more deductive rules from which to reason. In field after field, we stand on the shoulders of the giants who have come before us. And advances in statistical methods have furnished us with ever more powerful tools for reasoning inductively. Thirty years ago, few in a boardroom would have dared to cite the R² of a regression analysis, but now the statistical tools behind this form of induction are relatively common in business settings. So it is no wonder that deduction and induction hold privileged places in the classroom and, inevitably, the boardroom as the preeminent tools for making an argument and proving a case.

Yet a reasoning toolbox that holds only deduction and induction is incomplete. Toward the end of the nineteenth century, American philosophers such as William James and John Dewey began to explore the limits of formal declarative logic — that is, inductive and deductive reasoning. They were less interested in how one declares a statement true or false than in the process by which we come to know and understand. To them, the acquisition of knowledge was not an abstract, purely conceptual exercise, but one involving interaction with and inquiry into the world around them. Understanding did not entail progress toward an absolute truth but rather an evolving interaction with a context or environment.  

James, Dewey, and their circle became known as the American pragmatist philosophers, so called because they argued that one could gain understanding only through one’s own experiences. Among these early pragmatists, perhaps the greatest of them and certainly the most intriguing was Charles Sanders Peirce. Peirce (rhymes with “terse”) was fascinated by the origins of new ideas and came to believe that they did not emerge from the conventional forms of declarative logic. In fact, he argued that no new idea could be proved deductively or inductively using past data. Moreover, if new ideas were not the product of the two accepted forms of logic, he reasoned, there must be a third fundamental logical mode. New ideas came into being, Peirce posited, by way of “logical leaps of the mind.” New ideas arose when a thinker observed data (or even a single data point) that didn’t fit with the existing model or models. The thinker sought to make sense of the observation by making what Peirce called an “inference to the best explanation.” The true first step of reasoning, he concluded, was not observation but wondering. Peirce named his form of reasoning abductive logic. It is not declarative reasoning; its goal is not to declare a conclusion to be true or false. It is modal reasoning; its goal is to posit what could possibly be true. (For further information, see “Why You’ve Never Heard of Charles Sanders Peirce.”)

Whether they realize it or not, designers live in Peirce’s world of abduction; they actively look for new data points, challenge accepted explanations, and infer possible new worlds. By doing so, they scare the hell out of a lot of businesspeople. For a middle manager forced to deal with flighty, exuberant “creative types,” who seem to regard prevailing wisdom as a mere trifle and deadlines as an inconvenience, the admonition to “be like a designer” is tantamount to saying “be less productive, less efficient, more subversive, and more flaky” — not an attractive proposition. And it is a fair critique that abduction can lead to poor results; unproved inferences might lead to success in time, but then again, they might not.

Some abductive thinkers fail to heed Brown’s requirement that the design must be matched to what is technologically feasible, launching products that do not yet have supporting technology. Consider the software designers who inferred from the growth of the Internet that consumers would want to do all their shopping online, from pet supplies to toys to groceries. Online security and back-end infrastructure had not yet caught up to their ideas, dooming them to failure.

Other abductive thinkers fail to address Brown’s second requirement: that the innovation must make business sense. Looking back on the dot-com crash, Michael Dell, founder of Dell, argues that little has changed. “Still today in our industry, if you go to a trade show, you walk around and you will find a lot of technology for which there is no problem that exists,” he says. “It’s like, ‘Hey, look at this, we’ve got a great solution and there is no problem to solve here.’ ” [2] Think of the Apple Newton, the world’s first personal digital assistant. Launched in 1993, it utterly flopped. According to RIM’s Mike Lazaridis, it was a failure of abduction. “It had no future,” he argues. “What problem did it solve? What value did it create? It was a research project. What could you do with it that you couldn’t do with a laptop? Nothing. And everything you could do with it, you could do better with a laptop.” Apple Computer (as it was known then) wasn’t wrong when it inferred that customers would value a small, portable, digital assistant, but it didn’t ultimately deliver a solution that matched the insight.

So the prescription is not to embrace abduction to the exclusion of deduction and induction, nor is it to bet the farm on loose abductive inferences. Rather, it is to strive for balance. Proponents of design thinking in business recognize that abduction is almost entirely marginalized in the modern corporation and take it upon themselves to make their companies hospitable to it. They choose to embrace a form of logic that doesn’t generate proof and operates in the realm of what might be — a realm beyond the reach of data from the past.

That’s a risk many leaders won’t take. Making Peirce’s logical leaps is not consistent or reliable; nor does it faithfully adhere to predetermined budgets. But the far greater risk is to maintain an environment hostile to abductive reasoning, the proverbial lifeblood of design thinkers and the design of business. Without the logic of what might be, a corporation can only refine its current heuristic or algorithm, leaving it at the mercy of competitors that look upstream to find a more powerful route out of the mystery or a clever new way to drive the prevailing heuristic to algorithm. Embracing abduction as the coequal of deduction and induction is in the interest of every corporation that wants to prosper from design thinking, and every person who wants to be a design thinker.

“What is Design Thinking” is an excerpt from Roger Martin’s new book The Design of Business: Why Design Thinking is the Next Competitive Advantage (Harvard Business Press, 2009).

[1] Tim Brown, “Design Thinking,” Harvard Business Review, June 2008, p. 86.

[2] Michael Dell, in conversation with the author as part of the Rotman School of Management’s Integrative Thinking Experts Speaker Series, September 21, 2004.

Learning to Design Without Losing Your Soul

2 Feb


By Francisco Inchauste on January 27, 2011

Aspiring designers are failing. They are being let down by their schools and sometimes by our design community. In America, creativity is on the decline. The resources available online are massive, but quality content is hard to find.

“I’m eager to hire the next great class of designers, but to my dismay–and the dismay of many young hopefuls who’ve often spent many years and thousands of dollars preparing to enter the industry–I’m finding that the impressive academic credentials of most students don’t add up to the basic skills I require in a junior designer.” — Gadi Amit

The design community has a new challenge. It’s not how we push design to the next level. It’s not how we best design publications for the 80 tablets coming out this year. It is something I see as much more critical: Guiding the next generation of designers.

More at:


Smashing Magazine (@smashingmag)
01/02/2011 16:40
Learning to Design Without Losing Your Soul: good conversation about design education