5 reasons you can’t trust community surveys

When it comes to great placemaking, analysis is always the first step. But are we getting the right insights from traditional methods?

Traditionally, citymakers compile volumes of information about land use, socio-economic data, property sales, and traffic. We visit the site and observe it (or maybe just rely on Google street view). We wade through the planning codes, and make a list.

But when it comes to understanding people and behaviour, we are often left guessing.

Why do people come here? What attracts them? How long do they stay? What do they value? We must answer these questions if we are to create successful places that allow people to thrive.

So, how can we best understand people’s behaviour?

Urban development revolves around planning codes and balcony depths, rather than people and social life. Principally this is because physical criteria are easier to measure and are more tangible than social and cultural values. After all, “You can’t manage what you can’t measure”.

Community Surveys - the traditional approach

Our traditional tools for understanding the human side of neighbourhoods are limited - and they usually involve surveys. Now surveys can be useful, if done well; and meaningful, if done for the right purpose.

But when it comes to wider community surveys, particularly when trying to understand places, there are a number of challenges and limitations.

Let’s take a look at five of them.

1. Sample size

A lot of community surveys involve a few dozen people, or a few hundred at best. That might be fine for feedback about a single facility or venue, but for a neighbourhood of hundreds of thousands of people, the sample is far too small to yield reliable data.
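As a rough illustration (a sketch using the standard textbook formula, not part of any survey provider's methodology), the margin of error for a simple random sample shows how wide the uncertainty is at these sizes:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n.

    p=0.5 is the worst case (widest interval); z=1.96 is the 95% z-score.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A 100-person survey: roughly +/- 10 percentage points.
print(f"n=100: +/- {margin_of_error(100):.1%}")
# Quadrupling the sample only halves the error.
print(f"n=400: +/- {margin_of_error(400):.1%}")
```

Note that halving the error requires quadrupling the sample, and these figures are a best case: the formula assumes a perfectly random sample, which the self-selection problem below undermines.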


2. Self-selection

Most community surveys are opt-in. There’s an email in your inbox inviting you to have a say, a letter in the mail, or someone with a clipboard in the street inviting you to stop. This creates a huge self-selection bias towards people who have time, are more educated, are more politically engaged or have better comprehension skills. A lot of surveys are also only in English.


3. Accuracy and Truthfulness

Most people don’t answer surveys honestly. What people say and what they do are often very different. They reply with what they think they should say, or what they think the surveyor wants to hear, rather than what they actually think. If you ask me (in a survey) how many times I go to the gym, I’m likely to answer with my ‘best case’ scenario: 4-5 times a week, rather than my actual scenario: 2-3 times a week. Scale that error up across everyone in the neighbourhood and you have to question the reliability of the information.
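To see how that error compounds, here is a back-of-envelope sketch. The resident count and visit rates below are illustrative assumptions based on the gym example, not real data:

```python
# Hypothetical figures for illustration only.
residents = 100_000          # assumed neighbourhood population
reported_visits = 4.5        # self-reported gym visits per week (midpoint of 4-5)
actual_visits = 2.5          # actual visits per week (midpoint of 2-3)

# If everyone overstates in the same way, the survey invents this many
# phantom gym visits across the neighbourhood every week:
phantom_per_week = residents * (reported_visits - actual_visits)
print(f"{phantom_per_week:,.0f} phantom visits per week")  # 200,000
```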

What people say and what they do are often very different.

4. Replicability

Surveys are static: a ‘point in time’ approach, which can be difficult to replicate. This point-in-time method also makes it hard to understand cause and effect relationships, which need time-series data. MIT’s Human Dynamics Laboratory has done a range of studies comparing the accuracy of mobile phone metadata and traditional social science. This example from Dr Sandy Pentland compares survey results to phone data, arguing that the limited number of data points from surveys makes it difficult to claim any cause and effect relationships from the results.

Surveys are ‘point in time’ which makes it difficult to claim any cause and effect relationships from the results.

5. Scalability and Comparison

Assuming we can get around the sample size and self-selection bias, and have well-worded surveys to improve accuracy - like some well-planned random selection surveys do - there are still barriers around cost and scale. There is enormous cost and time involved in each survey commission that’s undertaken, which makes gathering comparative data difficult.


The Opinion - Perception - Behaviour Spectrum by Neighbourlytics

Today we have the capability to capture behavioural data at a scale and cost efficiency unimaginable just a few years ago. Every time you tweet, like, check-in or post, you’re leaving behind digital footprints and contributing to data about places. Neighbourlytics harnesses this social data to create game-changing insights for neighbourhoods.

We’re not saying that we should throw out surveys altogether, rather that we should be careful to use them for the right purposes. Surveys are useful to understand the opinion of communities. They’re not useful for understanding perception and behaviour.

At Neighbourlytics we talk about the spectrum of human experience from opinion to perception to behaviour.


To understand human behaviour, the first step is to look at other data approaches: data that reflects people’s lifestyle choices, is unsolicited, and is available in large volumes.

What we think or believe is not the best determinant of our behaviour; where we spend our time is. Where we go, and where we move next, predicts our behaviour, as MIT researchers have shown in the approach coined “social physics”.

We understand our behaviour by understanding our social context - mostly we learn from each other.

The best determinants of our behavior are not what we think or believe...but where we spend time.

Next time you’re planning a placemaking or urban development project, think about your social analysis and where you’re getting your information from. Do you trust it? Is it accurate and replicable? Then… give us a call instead, at Neighbourlytics!

Lucinda Hartley

I’m an urban designer and social entrepreneur who has spent the past decade pioneering innovative methods to improve the social sustainability of cities, now being implemented around the world. 

As a co-founder at Neighbourlytics, I bring my social innovation and entrepreneurship strengths to harness big data, and deliver community insights that inform evidence-based urban development decisions. Neighbourlytics is backed by BlueChilli Group’s highly competitive She Starts program.