As we think about the glorious future of Upwell, we're talking to lots of people who have helped shape our path over the past two years. It's inspiring to see how abstract concepts have turned into daily practices, some measurable in graphs and some measured in far less 2D ways.
We're thinking a lot about sharing this week.
One of the posters we look at every day in the Upwell office.
Though we think about sharing and open access all the time, we haven't gathered those thoughts in one post. Here's a first draft of the Upwell Manifesto of Sharing.
- We're on the big Team Ocean. Ray, who has several national medals in rowing, confirms that our position on the team is most like the coxswain's. We're happy to serve as the boat servant (the literal meaning of "coxswain"): guiding navigation and steering, and encouraging our teammates to work together and pull hard. This is a sports metaphor I get.
- Our work should be accessible to anyone in the sector. Though we can't fly around to everyone's office and personally do campaign consulting, we do it several times a week by phone or Skype. To date, we do it for free. When we learn something that feels like an emergent best practice, we share it on our blog and at every speaking opportunity. We also know that busy shark evangelists and social media managers don't have the luxury of digging through big reports and digesting graphs based on newly evolved methodologies. We try to make access both available and *possible* by quickly packaging the gems.
- This isn't our sandbox. We surface and test models to share. Awesome ocean communications were underway long before we arrived on the scene. We're grateful for that. Lots of people play in this sandbox.
- We curate best practices, too, and bring them back from our hunts in the wild, tidily packaged, to big Team Ocean. Upworthy has done some amazing work in the past year. We didn't invent image macros. There are lots of good tactics out there, but it's hard to know which are best.
- Whenever we can, we teach for free. In the last few months this has included workshops at the Greenpeace Digital Mobilisation Skill Share, SXSW Interactive (on Nonprofit Times, Beth Kanter's blog), NTC (upcoming), Personal Democracy Forum (upcoming), Web of Change, a recent meeting convened by Omidyar's Democracy Fund, and many more. Budgets are tight, so having travel covered really helps. We'll give up our weekends to teach for free. We believe the sector needs this knowledge that much.
- The ocean is our client. Part of what this means to us is that we should help support all of the orgs and evangelists who are helping make our client famous. We can't give money, but we can share skills and send attention.
- Sharing models makes them stronger. We can only do so much testing ourselves. Releasing models into the wild means we can get better, faster, if others test the models with us.
- We model an architecture of transparency. It doesn't help anyone for us to dump our files in the street. What does help is a considered, well-designed openness.
- Resources are abundant. We have everything we need to achieve our conservation goals; we just need to get it into the right basket. Competition, silos and secrecy slow progress.
- We've shared our work from the beginning. For a while we posted a soul-baring critique of every single campaign we did, every single week. Ultimately, we dropped that practice because writing narrative reports every week didn't scale. We graph it now.
- The intern thinks we're exactly like a cleaner wrasse. Details to follow, upon further marine biology research.
- We choose appropriate licenses, and build sharing platforms. Getting the structure of sharing right matters. Hooray for foundation reports with Creative Commons licenses. And funders who respect sharing. And staff who break down a 160-page PDF into digestible blog posts - which we tweet about and go to lengths to cross-post.
I'm sure there are elements of our sharing that I've failed to record here. We'll be reflecting more on this and revising as we gain clarity. Speaking of which, why and how do you share?
Have you ever wondered how an Upwell campaign works? What is the process by which we identify, clarify and amplify an issue? How do we devise a campaign plan, whether it’s a campaign that lasts only an hour or one that is spread over several days? There’s more to it than posting a tweet with a link and then moving on. Here’s a behind-the-scenes look at our creative process from beginning to end:
Awesome illustrations by our super rad intern, Christine!
Big Listening is the art of gaining insight by tracking topical online conversations over time. Big Listening is distinguished from traditional social media monitoring by its scale, fluidity, focus on issue or cause monitoring, and expanded access to historical data. Using monitoring and measurement tools such as Radian6, Topsy Pro, Google Alerts and Tweetdeck, Upwell builds a meteorology of ocean conversations, pinpointing opportunities for intervention. Matt Fitzgerald has detailed some of our Big Listening insights on our blog.
We identify opportunities for our distributed online campaigning network through our daily Big Listening. We find out what is spiking, and join the conversation. We choose opportunities based on the ever-changing tides of the internet. We find hooks in mainstream news and cherrypick the most shareable content.
We look for news and content that we think has been egregiously under-amplified. Sometimes a hot piece of news just wasn't packaged in the right way. We mine our network and find the awesome stuff that few have seen, and we repackage it to go farther. We write the great tweet to go with that video. We pair actions with news. We make tweetable summaries for wonky reports. Saray Dugas, our designer extraordinaire, gives boring content flair.
Upwell’s network is key to our success. Our attention campaigns are primarily amplified not by our organization alone, or by a dedicated base of supporters we’ve built over many years, but rather by the network of ocean communicators that we regularly contact through the Tide Report, our social media channels, and our blog. It’s more of a syndication model than a direct-to-consumer model. We call these fellow conservation comrades our “distributed network.” They take the curated content we share with them and translate it out to their audiences through the communications channels they maintain.
We measure the impact of our campaigns by counting social mentions. Social mentions are online acts of self-expression in which individuals, organizations and other entities invest (at least) a small amount of social capital. We do that with our Big Listening tools (Radian6, Topsy Pro, etc.), and we also hand count social mentions that don’t have our monitored keywords (like shares of images). Sharedcount is one of our favorite tools for doing this. We don’t just count social mentions, we also look for trends to try to understand what types of content generate conversation about the ocean.
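For the curious, the counting step above can be sketched in a few lines of code. This is a minimal illustration only, not our actual Radian6 or Topsy Pro pipeline; the keyword list and sample posts are made up.

```python
from collections import Counter

# Hypothetical monitored keyword group (illustrative, not a real Upwell group).
MONITORED_KEYWORDS = ["overfishing", "sustainable seafood"]

def count_mentions(posts):
    """Tally one social mention per post that contains a monitored keyword,
    grouped by day."""
    daily_counts = Counter()
    for date, text in posts:
        lowered = text.lower()
        if any(kw in lowered for kw in MONITORED_KEYWORDS):
            daily_counts[date] += 1
    return daily_counts

# Invented example posts exported from a monitoring tool as (date, text) pairs.
posts = [
    ("2012-10-01", "Overfishing is emptying our oceans"),
    ("2012-10-01", "Try this sustainable seafood recipe"),
    ("2012-10-02", "Look at this cute otter"),  # no monitored keyword: not a mention
]
print(count_mentions(posts))  # Counter({'2012-10-01': 2})
```

Real monitoring tools handle far messier matching (plurals, hashtags, multiple platforms), which is why we lean on them for scale and hand count only the mentions they can't see, like image shares.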
We strive not just to collect data on the online ocean conservation conversation, but also to improve our practices and share what we’ve learned with the sector. On a monthly, quarterly or to-order basis, we export and graph Big Listening data based on the most current keyword groups. With this data, we create reports, blog posts and other types of synthesis for external audiences. We gather feedback and process what we’ve discovered in order to improve our methodology. (For more on our sharing practices, read Rachel Weidinger's post about how we were born to share.)
We do this all in the course of 1-2 days, adapting our methods and incorporating insights into the next campaign we run.
Our primary metric for understanding the conversations we analyze, monitor and campaign in is what we refer to as a “social mention” (or “social item”). Social mentions are what we count when we do Big Listening to understand the volume of conversations, and they're what we count when we evaluate the success of our attention campaigns.
Upwell defines a social mention as the text inclusion of a monitored keyword in a post on a social media platform like Twitter, Facebook, a blog, mainstream news with an RSS feed, a forum/board, YouTube or Pinterest. Social mentions are online acts of self-expression in which individuals, organizations and other entities invest (at least) a small amount of social capital.
Social mentions have more in common with the metric of media hits than they do with the more common, older PR and marketing metric of impressions. Upwell focuses on counting and analyzing social mentions (rather than impressions or online mentions) because we believe that the number of people who choose to take an action to create or share content is a better indicator of engagement than the number of people who have simply seen (or could have seen) that content.
It is worth noting that, while it is theoretically possible to accurately count every single social mention on a topic, Upwell’s Big Listening methodology focuses on characterizing conversations just thoroughly enough to campaign successfully within them.
Furthermore, Upwell believes that social mentions are a better leading indicator of willingness to take action for the oceans than other communications metrics. This is because social mentions represent actions, the choice of an individual to risk a small amount of social capital by associating their online identity with a piece of online content. In aggregate, the volume of social mentions not only represents the amount of attention being paid to a topic, but a forecast of potential campaign success. For this reason, generating social mentions is the primary goal of our attention campaigns.
The strength of a community, by our standards, is measured not by its size, but rather by its engagement level. For example, if one tweet has 12,000 impressions (the number of people who follow the account that posted the tweet), we count the tweet the same way that we would count a tweet with 200 impressions. If a person or organization is network-oriented, it would follow that their content would lead to more retweets, replies and/or mentions. If a tweet goes out to 12,000 followers but gets zero retweets, it is less of an indicator of willingness to take action than a tweet that goes out to 200 followers and gets 10 retweets.
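The comparison above can be made concrete with a tiny sketch. The numbers mirror the example in the paragraph; the function itself is illustrative, not a metric pulled from any particular platform's API.

```python
def engagement_rate(retweets, followers):
    """Retweets per follower -- a rough proxy for willingness to act,
    rather than raw reach."""
    return retweets / followers if followers else 0.0

# The example from the text: a big quiet account vs. a small active one.
big_quiet = engagement_rate(retweets=0, followers=12_000)
small_active = engagement_rate(retweets=10, followers=200)
print(small_active > big_quiet)  # True: 0.05 vs 0.0
```

The small account's tweet shows measurable engagement; the large account's silence shows none, no matter how many impressions it may have earned.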
What About “Likes”?
Likes, loves, and faves (different terminology for different social media platforms) are in a middle ground. While they are not social mentions (as people are not creating new content), they are also not as passive as views or impressions. While likes, loves, and faves are not counted by Radian6, Upwell does measure them, when possible. However, for the purposes of our campaign reporting, we omit these metrics since they constitute only minimal public engagement and can require laborious, resource-intensive manual calculation (since we often don't own the properties on which our content and campaigns are shared).
Upwell has spent the past few months analyzing and wordsmithing all the things we’ve done and learned in our first year campaigning with you. We may have been more or less radio silent while we’ve been doing this, but now we’re back to report what we’ve found. We are telling all on our blog, sharing what has worked and what hasn’t, with plenty of tips and tricks. There will be more to come, and we'll keep this blog post updated as we add more.
Our primary metric for understanding the conversations we analyze is what we refer to as a “social mention.” How does that differ from other more traditional online campaigning metrics like impressions? Why is this the most exciting thing ever? Find out.
We’ve compared and analyzed the conversational volume in the Sustainable Seafood and Overfishing conversations over the past year and more. Spoiler alert: both conversations have changed substantially since 2011, with significant increases in spike volume, spike frequency, and the ratio of average daily social mentions to the average baseline. There’s some hot data here.
What is that special sauce that makes an Upwell campaign? We’ve described the basic building blocks of an Upwell campaign and the importance of our distributed network (that’s all of you!) for the work that we do.
There are a lot of steps between Big Listening and measuring a campaign. We try to cram them all into one day. Check out this description of the full lifecycle of an Upwell campaign, complete with original illustrations!
A “spike” is a significant increase in online attention for a particular topic. When you graph social mentions, you can see that burst of attention ‘spike’ the graph -- hence the name. We’ve detailed how we differentiate between a normal bump and actual spike in this post.
Upwell informally defines a conversation’s “Baseline” as the point below which the daily volume doesn’t drop. It’s the number of social mentions that occur each day by the topic’s diehard conversationalists. If everyone else left the party, the Baseline would still be there, dancing by itself.
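Informally, that definition translates into a sketch like this one. The daily counts below are invented, not real Upwell data.

```python
# One hypothetical week of daily social mention counts for a topic.
daily_mentions = [430, 455, 512, 441, 980, 460, 438]

# The Baseline: the level below which daily volume doesn't drop --
# the diehard conversationalists who never leave the party.
baseline = min(daily_mentions)

# The ratio of average daily mentions to the baseline, the kind of
# figure our conversation reports track over time.
avg_daily = sum(daily_mentions) / len(daily_mentions)
ratio_to_baseline = avg_daily / baseline

print(baseline)  # 430
```

In practice we keep separate day-of-the-week baseline values for a topic and average them into a single "average baseline" when a calculation needs one number.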
The following post details before-and-after intervals in the two main conversations Upwell invested in: Sustainable Seafood and Overfishing. One finding of note is that both conversations have changed substantially since the founding of Upwell. The narrative details significant increases in spike volume, spike frequency, and the ratio of average daily social mentions to the average baseline. (For more on how we quantify a baseline, see our baseline methodology; to understand how we quantify spikes, see our spike quantification methodology.)
Primary Campaign Topics: Then and Now
Comparison for Winter 2011 (top) and Winter 2012 (bottom) showing social mentions by day for Upwell’s Sustainable Seafood keyword group, as compared to the baseline, spike threshold and high spike threshold (Winter 2011: 10/17/2011 - 1/31/12; Winter 2012: 10/1/2012 - 1/29/13)
In the Winter of 2011 (graph: above, top) when Upwell began Big Listening in Sustainable Seafood, social mention volume was an average of 423 mentions per day. By Winter of 2012 (graph: above, bottom), social mention volume had climbed to an average of 549 per day -- an increase of 29.9%. The ratio of average daily social mentions to the average baseline value also increased by 29.9% (as one would expect), going from 132.3% of the baseline in Winter 2011 to 171.8% of the baseline in Winter 2012. (Note: 'Average baseline' generalizes Upwell's day-of-the-week baseline values for a given topic into one mean value for the purpose of calculations, such as this one, which require a single value).
Spike frequency -- the average number of times per thirty days that social mention volume meets or exceeds Upwell’s spike threshold -- describes how often spikes occur in a particular conversation. Spike frequency in the Sustainable Seafood conversation increased from 2.2 spikes every thirty days in Winter 2011 to 8.2 spikes every thirty days in Winter 2012 -- an increase of 265%. (Note: you can learn more about the spike threshold in this post.)
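Normalizing spike counts to a thirty-day rate lets us compare observation windows of different lengths. A quick sketch, with illustrative numbers rather than the reported ones:

```python
def spikes_per_thirty_days(spike_count, days_in_window):
    """Normalize a raw spike count to a thirty-day rate so windows of
    different lengths can be compared fairly."""
    return 30 * spike_count / days_in_window

# Hypothetical example: 8 spikes observed over a 120-day window.
print(spikes_per_thirty_days(spike_count=8, days_in_window=120))  # 2.0
```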
Those spikes were not just occurring more often; they were also getting bigger. Upwell’s high spike threshold, set at two standard deviations above the average social mention volume for that day of the week, provides another indication of spike intensity. The more spikes reach the high threshold, the more the conversation is spiking at higher volumes. In Winter 2011 there were two high threshold spikes; the following year there were thirteen -- an average of 0.5 spikes per thirty days versus 3.2, a 475% increase.
Comparison for Winter 2011 (top) and Winter 2012 (bottom) showing social mentions by day for Upwell’s Overfishing keyword group, as compared to the baseline, spike threshold and high spike threshold (Winter 2011: 10/17/2011 - 1/31/12; Winter 2012: 10/1/2012 - 1/29/13)
In the Winter of 2011 (graph: above, top) when Upwell began Big Listening in Overfishing, social mention volume was an average of 1,979 mentions per day. By Winter of 2012 (graph: above, bottom), social mention volume had climbed to an average of 3,386 per day -- a 71% increase. The ratio of average daily social mentions to the average baseline value also rose, from 126.5% of the baseline in Winter 2011, to 216.3% of the baseline in Winter 2012. (Note: average baseline, as above).
Spike frequency in the Overfishing conversation increased from 0.8 spikes every thirty days in Winter 2011 to 7.4 spikes every thirty days in Winter 2012 -- a massive increase of 784%. The thirty-day rate of high threshold spikes also increased, from an average of 0.6 to an average of 3.2 -- a 475% increase.
(Note: Upwell defines high threshold spikes as occurring when social mention volume for a given day is greater than or equal to two standard deviations above the average social mention volume for that day of the week. You can learn more about the spike threshold in this post.)
We've spent the last six weeks reflecting on our pilot project, and want to share our results with you. This post is one in a series of pieces about what we've learned over the last 10 months.
If you like this post, check out: