The Barcelona Principles were updated to version 3.0 on July 9, and the revised principles take in a wider scope than any previous version.
The original Barcelona Principles, released in 2010, and the Barcelona Principles 2.0, in 2015, were primarily focused on goal-setting and evaluation for business public relations. This year, the principles have been retooled to apply to “the larger communication function of any organization, government, company or brand globally.”
What are the Barcelona Principles?
In 2010, the International Association for the Measurement and Evaluation of Communication (AMEC) held a summit and constructed a set of seven principles for evaluating the effectiveness of communications. They were the first framework of their kind and were particularly important as the field adapted to increasing internet usage.
Their recommendation that social media be measured is now commonplace but was quite new at the time.
As a content analytics provider, React & Share works exclusively with communications teams. Our CEO, Timo Virtanen, said customers often tell him that “comms people often aren't encouraged to think about top-level goals in daily decisions.”
The principles, then, serve as the glue that binds executive decision-making to the day-to-day work of a busy communications office.
This year’s update stresses the importance of holistic goals and the potential impact of each communication. Let’s jump into a description of each principle, with some insight as to how they can be applied. You can follow along with AMEC’s release slides here.
Principle 1: “Setting goals is an absolute prerequisite to communications planning, measurement, and evaluation.”
We come out of the gate with the theme established: set goals. More specifically, set measurable goals.
Right now, ‘measurable’ remains a little unclear, but the rest of the principles will cover that. Principle 1 is about understanding why goals are important for long-term success in communications.
Goals, AMEC argues, represent the change you want to make. This isn't a set of KPIs you want to hit; it's a change you want to effect. They suggest sticking with the fundamental “who, what, how much, and when.”
- Who are you trying to reach?
- What are you trying to say to them?
- How much change are you trying to make?
- When do you want all this to happen?
When it comes time to evaluate your performance, measure and evaluate only against the goals you set before the campaign.
Why? Progress is measured in miles, not acres. Side effects aren't a measure of success if you've set your goals right. Imagine seeing the view count on an article skyrocket while its engagement remains low. If your goal was to increase engagement, your strategy failed, no matter how impressive the vanity metrics.
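To make that concrete, here is a minimal sketch (in Python, with hypothetical names and numbers) of what evaluating only against a pre-set goal can look like. Nothing about it is prescribed by AMEC; it just captures the goal fields -- who, what, how much, and when -- and an evaluation step that reads nothing else.

```python
# A minimal sketch of a pre-set, measurable goal -- who, what, how much, and when --
# and an evaluation that looks only at that goal. All names and numbers are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class CampaignGoal:
    who: str        # the audience you are trying to reach
    what: str       # the change you want to effect
    metric: str     # the single measure committed to before the campaign
    target: float   # how much change you are aiming for
    deadline: date  # when you want it to happen

goal = CampaignGoal(
    who="first-time visitors to the careers pages",
    what="more readers engaging with application guides",
    metric="engagement_rate",
    target=0.15,
    deadline=date(2020, 12, 31),
)

# Observed results after the campaign (made-up numbers).
results = {"views": 120_000, "engagement_rate": 0.06}

# Evaluate only against the goal set beforehand: a skyrocketing view count
# does not rescue a missed engagement target.
goal_met = results[goal.metric] >= goal.target
print(f"Goal met: {goal_met}")  # Goal met: False
```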
Principle 2: “Measurement and Evaluation Should Identify Outputs, Outcomes, and Potential Impact.”
Barcelona Principles 2.0 placed outputs and outcomes on even footing, but 3.0 adds potential impact as a third, equally important metric.
That communications have impact isn't new; identifying ‘impact’ as distinct from ‘outcome’ is. The shift may reflect a generational trend, identified in a 2008 study, showing that millennials care deeply about the wider impacts of their actions. The report says that beyond making spontaneous point-of-sale donations or participating in charity races, they value the message and authenticity of the organizations they support.
Back then, the cliché was that millennials were the communicators of tomorrow. Now, they're the communicators of today -- and the principles reflect that.
Impact is also a holistic measurement. “There is no one right way to measure the societal and organizational impacts you want to measure,” as AMEC says. But they do give some suggestions.
Principle 3: “Outcomes and Impact Should Be Identified For Stakeholders, Society, & the Organization.”
Outcomes for stakeholders are business fundamentals: revenue, reach, reputation, and so on. Outcomes for the organization are more complex, and impacts on society go far beyond the other two.
Approach these measurements scientifically. Develop a hypothesis on how your work will have broader impacts. What will happen to X due to our campaign? How will Y react?
When evaluating, look back to your hypothesis and consider your results. Just like your goals, you want to evaluate only against the hypothesis you came up with. “This should include thinking beyond services and sales provided,” AMEC suggests. Changes in behavior within the organization and within society are relevant outcomes.
Several other factors can get in the way of accurate measurement and evaluation. Other departments in your organization can also influence these outcomes. Consider an example: say you're measuring a change in brand reputation. If a large-scale marketing campaign was underway at the same time as your communications, it's highly unlikely that you'll be able to isolate your work as the cause of any change. To adjust, you'd want narrower measurements to test your hypothesis against, like asking pointed questions about specific communications in an interview.
Up to now, though, ‘measurements’ has remained a vague term. The next principle distinguishes types of measurements and helps tie them into your process.
Principle 4: “Communication Measurement and Evaluation Should Include Both Qualitative and Quantitative Analysis.”
The difference between qualitative and quantitative is something we've talked about on the blog before. The internet has so far been approached quantitatively, with views, bounce rates, reach, and likes highly regarded and influential in data-driven decisions. Qualitative insights remain hard to extract from big data.
But they are both important.
You’re trying to answer three questions:
- How did our target audience access our communications?
- Was it through the intended strategy or channel?
- What did that audience conclude?
Breaking these down, we see that each requires some degree of both qualitative and quantitative measurement. I recommend reviewing AMEC's suggestions, reproduced here in full from their presentation:
Quantitative
For cross-channel research:
- Impressions or reach among target audiences
- Competitive or sector share of voice
- Engagement with earned/owned/paid content across channels
- Sharing of earned/owned/paid content across channels
For audience survey-based research:
- Awareness
- Recall
- Message/content relevance
- Perception/attitude change
- Expected behavior change
Qualitative
For cross-channel research:
- Sentiment and/or emotional response from target audiences
- Credibility and relevance
- Message delivery
- Calls to action
- Third-party endorsements
- Inclusion of company spokespeople
- Prominence as relevant to the channel
For audience survey/interview/bulletin board-based research:
- Ethnographic insights
- Underlying motivations
- Rationale
- Perceptual context
- Style/language impact
They give some friendly reminders before closing out this important principle: measure results and progress, not just successes; measure consistently to keep an eye on trends; and use a healthy mix of qualitative and quantitative measures to inform evaluation.
Principle 5: “AVEs are Not the Value of Communication.”
AVE isn't the common term it once was. It stands for ‘Advertising Value Equivalent’: PR folks would measure the space a communication took up in print media, or its broadcast length on radio/TV, then assign an AVE based on how much it would have cost to run an advertisement of that size or length.
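For a rough sense of the mechanics, here is a hypothetical back-of-the-envelope AVE calculation; the column-inch count and ad rate are invented purely for illustration.

```python
# Hypothetical AVE calculation, shown only to illustrate the (discredited) mechanics.
column_inches = 12         # space the article occupied in a newspaper (made up)
ad_rate_per_inch = 250.0   # cost to buy an ad of the same size (made up)

ave = column_inches * ad_rate_per_inch
print(f"AVE: ${ave:,.2f}")  # AVE: $3,000.00
```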
So why isn’t it common anymore, and why has AMEC strongly opposed its use since the original set of principles? The argument is implied in the four principles we’ve already covered.
Communications can't be reduced to a single metric. There are too many factors at play, too many audiences to understand, and too many potential impacts to consider. No simple financial value will truly account for the effectiveness of communications.
But don’t take my word for it. Check out 22 reasons why AVEs are invalid, from AMEC.
Principle 6: “Holistic communication measurement and evaluation includes all relevant online and offline channels.”
Principle 6 has an interesting timeline.
In 2010, the principle read, “Social media can and should be measured.” In 2015, it was qualified slightly to read, “Social media can and should be measured consistently with other media channels.”
Now it seems like 3.0 is reminding 2020 communicators that social media is not the only platform, even though it may feel like it. Radio and television are still viable platforms, and organic search remains the largest driver of traffic for most informational content. Email is on the rise, with platforms like Substack driving a renaissance of the newsletter.
With all that in mind, each channel has its own quirks. Social media has likely been overused in measurement because it's so easy to measure -- most platforms come with built-in metrics and tools for viewing them. People also interact with social media very differently than with, say, email or a website.
Your goals, hypotheses, measurements, and eventual evaluations should account for platform uniqueness.
Principle 7: “Communication Measurement and Evaluation Are Rooted in Integrity and Transparency to Drive Learning and Insights.”
In other words, learning and insights should not come at the expense of integrity and transparency.
Data privacy is important. Follow GDPR regulations when determining analytics methods. The people receiving your communications deserve to have their privacy respected, and you can gather useful data without crossing any lines.
Your evaluations should be honest internally as well. Biases of all types, whether conflicting campaigns that confound data or human biases that creep into assessment, should be rooted out. Be honest about failures; otherwise, any insights will be built on half-truths.
What’s next for communications?
Holistic thinking is bound to our expanding use of the internet. As our technology grows, so too will the number of connections between people, things, organizations, and societies. The Barcelona Principles have once again adapted to the times, more aware of the vast system that we launch our communications into.
Set goals, measure according to audience, and evaluate. It’s a simple process made difficult in the details, but these principles will help sort out the mess.