
Blog Feed

By Ray Dearborn

What's a social mention?

on March 6, 2013 - 1:44pm

Our primary metric for understanding the conversations we analyze, monitor and campaign in is what we refer to as a “social mention” (or “social item”). Social mentions are what we count when we do Big Listening to understand the volume of conversations, and they're what we count when we evaluate the success of our attention campaigns.

Upwell defines a social mention as the text inclusion of a monitored keyword in a post on a social media platform like Twitter, Facebook, a blog, mainstream news with an RSS feed, a forum/board, YouTube or Pinterest. Social mentions are online acts of self-expression in which individuals, organizations and other entities invest (at least) a small amount of social capital.

Social mentions have more in common with the metric of media hits than they do with the more common, older PR and marketing metric of impressions. Upwell focuses on counting and analyzing social mentions (rather than impressions or online mentions) because we believe that the number of people who choose to take an action to create or share content is a better indicator of engagement than the number of people who have simply seen (or could have seen) that content.

It is worth noting that, while it is theoretically possible to accurately count every single social mention on a topic, Upwell’s Big Listening methodology focuses on characterizing conversations just thoroughly enough to campaign successfully within them.

Furthermore, Upwell believes that social mentions are a better leading indicator of willingness to take action for the oceans than other communications metrics. This is because social mentions represent actions: the choice of an individual to risk a small amount of social capital by associating their online identity with a piece of online content. In aggregate, the volume of social mentions represents not only the amount of attention being paid to a topic, but also a forecast of potential campaign success. For this reason, generating social mentions is the primary goal of our attention campaigns.

The strength of a community, by our standards, is measured not by its size, but rather by its engagement level. For example, if one tweet has 12,000 impressions (the number of people who follow the account that posted the tweet), we count the tweet the same way that we would count a tweet with 200 impressions. If a person or organization is network-oriented, it would follow that their content would lead to more retweets, replies and/or mentions. If a tweet goes out to 12,000 followers but gets zero retweets, it is less of an indicator of willingness to take action than a tweet that goes out to 200 followers and gets 10 retweets.
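The comparison above can be sketched in a few lines of Python. This is an illustrative toy calculation, not an Upwell tool; the numbers are the hypothetical ones from the paragraph (12,000 followers with zero retweets versus 200 followers with 10 retweets).

```python
# Toy sketch: engagement (actions taken) matters more than raw impressions.

def engagement_rate(actions, impressions):
    """Share of an audience that chose to act (retweet, reply, mention)."""
    return actions / impressions

big_quiet = engagement_rate(0, 12_000)    # 12,000 followers, zero retweets
small_active = engagement_rate(10, 200)   # 200 followers, 10 retweets

# The smaller account shows far more willingness to act.
print(f"{small_active:.1%} vs {big_quiet:.1%}")  # 5.0% vs 0.0%
```

By this measure, the 200-follower account is the stronger node in the network, even though the 12,000-follower account generated sixty times the impressions.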

What About “Likes”?

Likes, loves, and faves (different terminology for different social media platforms) are in a middle ground. While they are not social mentions (as people are not creating new content), they are also not as passive as views or impressions. While likes, loves, and faves are not counted by Radian6, Upwell does measure them when possible. However, for the purposes of our campaign reporting, we omit these metrics, since they constitute only minimal public engagement and can require laborious, resource-intensive manual calculation (since we often don't own the properties on which our content and campaigns are shared).

By Ray Dearborn

What the heck has Upwell been up to?

on March 6, 2013 - 10:39am

Upwell has spent the past few months analyzing and wordsmithing all the things we’ve done and learned in our first year campaigning with you. We may have been more or less radio silent while we’ve been doing this, but now we’re back to report what we’ve found. We are telling all on our blog, sharing what has worked and what hasn’t, with plenty of tips and tricks to share. There will be more to come, and we'll keep this blog post updated as we add more.

What's a "Social Mention"?

Our primary metric for understanding the conversations we analyze is what we refer to as a “social mention.” How does that differ from other more traditional online campaigning metrics like impressions? Why is this the most exciting thing ever? Find out. 

Conversation Metrics for Overfishing and Sustainable Seafood

We’ve compared and analyzed the conversational volume in the Sustainable Seafood and Overfishing conversations over the past year plus. Spoiler alert: both conversations have changed substantially since 2011, with significant increases in spike volume, spike frequency, and the ratio of average daily social mentions to the average baseline. There’s some hot data here.

Upwell's Distributed Network Campaigning Method

What is that special sauce that makes an Upwell campaign? We’ve described the basic building blocks of an Upwell campaign and the importance of our distributed network (that’s all of you!) for the work that we do.

The Lifecycle of an Upwell Campaign

There are a lot of steps between Big Listening and measuring a campaign. We try to cram them all into one day. Check out this description of the full lifecycle of an Upwell campaign, complete with original illustrations!

Upwell's Spike Quantification of the Ocean Conversation

A “spike” is a significant increase in online attention for a particular topic. When you graph social mentions, you can see that burst of attention ‘spike’ the graph -- hence the name. We’ve detailed how we differentiate between a normal bump and actual spike in this post.

Upwell's Ocean Conversation Baseline Methodology

Upwell informally defines a conversation’s “Baseline” as the point below which the daily volume doesn’t drop. It’s the number of social mentions that occur each day by the topic’s diehard conversationalists. If everyone else left the party, the Baseline would still be there, dancing by itself.

By Matt Fitzgerald

Conversation Metrics for Overfishing and Sustainable Seafood

on February 28, 2013 - 6:25pm

The following post details before-and-after intervals in the two main conversations Upwell invested in: Sustainable Seafood and Overfishing. One finding of note is that both conversations have changed substantially since the founding of Upwell. The narrative details significant increases in spike volume, spike frequency, and the ratio of average daily social mentions to the average baseline. (For more on how we quantify a baseline, see our baseline methodology; to understand how we quantify spikes, see our spike quantification methodology.)

Primary Campaign Topics: Then and Now

Sustainable Seafood

Comparison for Winter 2011 (top) and Winter 2012 (bottom) showing social mentions by day for Upwell’s Sustainable Seafood keyword group, as compared to the baseline, spike threshold and high spike threshold (Winter 2011: 10/17/2011 - 1/31/12; Winter 2012: 10/1/2012 - 1/29/13) 

In the Winter of 2011 (graph: above, top) when Upwell began Big Listening in Sustainable Seafood, social mention volume was an average of 423 mentions per day. By Winter of 2012 (graph: above, bottom), social mention volume had climbed to an average of 549 per day -- an increase of 29.9%. The ratio of average daily social mentions to the average baseline value also increased by 29.9% (as one would expect), going from 132.3% of the baseline in Winter 2011 to 171.8% of the baseline in Winter 2012. (Note: 'Average baseline' generalizes Upwell's day-of-the-week baseline values for a given topic into one mean value for the purpose of calculations, such as this one, which require a single value).
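The averages above can be reproduced from the total post volumes given in the winter graph captions later in this post (45,255 mentions over 107 days in Winter 2011; 66,456 mentions over 121 days in Winter 2012). A quick sketch in Python:

```python
# Recomputing the Sustainable Seafood per-day averages and the percent
# increase from the totals in the winter graph captions.

def pct_increase(before, after):
    return (after - before) / before * 100

avg_2011 = 45_255 / 107   # ~423 mentions/day (Winter 2011)
avg_2012 = 66_456 / 121   # ~549 mentions/day (Winter 2012)

print(round(avg_2011), round(avg_2012))            # 423 549
print(round(pct_increase(avg_2011, avg_2012), 1))  # 29.9
```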

Spike frequency -- how often social mention volume meets or exceeds Upwell’s spike threshold -- describes how often spikes occur, on average, in a particular conversation. Spike frequency in the Sustainable Seafood conversation increased from 2.2 spikes every thirty days in Winter 2011 to 8.2 spikes every thirty days in Winter 2012 -- an increase of 265%. (Note: you can learn more about the spike threshold in this post.)

Those spikes were not just occurring more often, they were also getting bigger. Upwell’s high spike threshold, set at two standard deviations above the average social mention volume for that day of the week, provides another indication of spike intensity. The more spikes reach the high threshold, the more the conversation is spiking at higher volumes. In Winter 2011, there were two high threshold spikes and the following year there were thirteen -- an average of 0.5 spikes per thirty days versus an average of 3.2 spikes per thirty days, a 475% increase. 
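The per-thirty-day rates above follow directly from the raw counts: two high-threshold spikes over the 107 days of Winter 2011 versus thirteen over the 121 days of Winter 2012. A quick check of the 475% figure using the unrounded rates:

```python
# Normalizing raw spike counts to a per-thirty-day rate, then computing
# the percent increase between the two winters.

def per_30_days(count, days):
    return count / days * 30

rate_2011 = per_30_days(2, 107)    # two high-threshold spikes in 107 days
rate_2012 = per_30_days(13, 121)   # thirteen in 121 days

increase = (rate_2012 - rate_2011) / rate_2011 * 100
print(round(increase))  # 475
```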

Overfishing

Comparison for Winter 2011 (top) and Winter 2012 (bottom) showing social mentions by day for Upwell’s Overfishing keyword group, as compared to the baseline, spike threshold and high spike threshold (Winter 2011: 10/17/2011 - 1/31/12; Winter 2012: 10/1/2012 - 1/29/13)

In the Winter of 2011 (graph: above, top) when Upwell began Big Listening in Overfishing, social mention volume was an average of 1,979 mentions per day. By Winter of 2012 (graph: above, bottom), social mention volume had climbed to an average of 3,386 per day -- a 71% increase. The ratio of average daily social mentions to the average baseline value also rose, from 126.5% of the baseline in Winter 2011, to 216.3% of the baseline in Winter 2012. (Note: average baseline, as above). 

Spike frequency in the Overfishing conversation increased from 0.8 spikes every thirty days in Winter 2011 to 7.4 spikes every thirty days in Winter 2012 -- a massive 784% increase. The 30-day rate of high threshold spikes also increased, from an average of 0.6 to an average of 3.2 -- a 475% increase.


(Note: Upwell defines high threshold spikes as occurring when social mention volume for a given day is greater than or equal to two standard deviations above the average social mention volume for that day of the week. You can learn more about the spike threshold in this post.)

***

We've spent the last six weeks reflecting on our pilot project, and want to share our results with you. This post is one in a series of pieces about what we've learned over the last 10 months.

If you like this post check out:

Upwell's Ocean Conversation Baseline Methodology

Upwell's Spike Quantification of the Ocean Conversation

Upwell's Distributed Network Campaigning Method

By Ray Dearborn

Upwell's Distributed Network Campaigning Method

on February 28, 2013 - 5:32pm

The mission of Upwell is to condition the climate for change in marine conservation, and to ready people to take action. In order to do this, our team sifts through the vast amount of real-time online content about the ocean, and amplifies the best of it. Upwell’s campaigning model capitalizes on the insights we glean from Big Listening and other curation efforts, and responds to the currents of online conversation. Through an iterative process of lots and lots of campaign testing, we find ways to create spikes of attention in conversations. Ultimately, we hope to raise the day-to-day, week-to-week, month-to-month, year-to-year baseline of those conversations.

What is an Upwell campaign?

Upwell’s campaigning model combines a few key elements. Our campaigns are attention campaigns, focused on raising attention to ocean issues. They are minimum viable campaigns, operating on short time-frames and focused on rapid delivery of content, continuous learning and iteration. They are run and amplified across a distributed network, rather than being housed on, and amplified by way of our own platforms. 

The Attention Campaign

The nonprofit community has deeply-held ideas of what constitutes a campaign. Often, organizations build campaigns with institutional goals (e.g.: awareness, list-building, advocacy and fundraising campaigns), and compete with other entities in the same sector/issue space. Upwell’s attention campaigns operate on a different plane, one in which success (greater attention) elevates the work of everyone in Team Ocean, and is tied to no particular institutional outcome other than generating conversation. 

With attention campaigns, we try to drive more attention to existing content and actions that are not on our properties and not associated with our brand. We use this loosely held connection, tying into the momentum of the news cycle and being strategically opportunistic in the pursuit of creating spikes in attention.

We focus on shareability, and measure our success by the same, simple attention metric we use to measure online conversations: social mentions. Social mentions are the currency of attention, and represent small bits of action. In contrast, awareness is a less meaningful measurement, representing what someone thinks they might do, not what they have done. 

Over time, we believe that increased attention to ocean issues will raise the daily baseline of conversation about ocean issues. We have been experimenting with trying to understand what makes baselines go above the expected, or historical level (i.e., what causes spikes in conversation), with an eye toward making these increases in attention sustainable.

The Minimum Viable Campaign

“You can't just ask customers what they want and then try to give that to them. By the time you get it built, they'll want something new.” - Steve Jobs

On the advice of Sean Power, Data Scientist at Cheezburger, Upwell has adapted an agile development principle from the lean startup movement: the minimum viable product. Our campaign lifecycle embodies the Build-Measure-Learn cycle that software developers use to quickly release products with the minimum set of functional features, gathering immediate insight that can inform later iterations.

The cycle of agile software development

Through our minimum viable campaigns, we employ ongoing, iterative, continuous delivery of content, resisting our urges toward perfection and providing irreverent, timely, contextual content to audiences immediately instead of strategizing for six months, or a year. We focus on the quickest, dirtiest thing we can get out the door that we think will have a measurable effect on a conversation. 

Our campaigns have short lifecycles -- anywhere from a couple of hours to a few days -- and they are inspired and informed by hot news that feels immediate to their audiences. We move very rapidly through a process of hatching an idea, finding or creating the campaign product(s), putting it out into the world and getting back data. We are constantly learning how to be more effective. In our first year running over 160 minimum viable attention campaigns, we have learned that even a tiny bit of effort can make a huge difference in how campaigns get picked up.

Combining These Models

The minimum viable campaign model could be applied to not just attention campaigns, but also fundraising, advocacy, or other types of campaigns. Likewise, an attention campaign could certainly be run at different time scales. For instance, there is no reason why Red Cross couldn’t start doing minimum viable campaigns. Keeping everything else the same, they could tighten up their campaign time cycles and run experimental campaigns to engage their base in different ways. Red Cross could also start running attention campaigns. If they believed that Ushahidi was doing really good work, they could run an attention campaign, pointing at Ushahidi’s work and amplifying attention to it. This could turn out to be a faster path to achieving their own mission.

By applying both these models, Upwell has crafted a new way of campaigning that is easily delivered and measured, and that adapts to the ever-changing sea of conversation. In summary, through our campaigns, Upwell:

  • Surfs existing conversations in order to increase and expand attention.
  • Measures social mentions (rather than policy outcomes, petition signatures, or public opinion) to evaluate the success of our campaign efforts.
  • Delivers, measures, and learns from campaigns on a short time cycle, embedding lessons and insights immediately. We sacrifice perfection.
  • Collaborates with a network of ocean stakeholders and curates a diverse set of existing ocean content, rather than building on our own brand, and creating our own content. Our campaigns are not aligned with Upwell program priorities or policy goals, but instead amplify attention to the priorities and goals of those in our network.
  • Runs our campaigns across a distributed network of ocean communicators, rather than relying on our own platforms as information hubs.

The Upwell Network

The key to our campaigns’ success is in our network. Our attention campaigns are amplified not by us, or by a dedicated base of supporters we’ve built over the years, but rather by the network of ocean communicators that we regularly contact through the Tide Report, our social media channels, and our blog. We call this “running a campaign across a distributed network.” It’s more of a syndication model than a direct-to-consumer model.

We built our network proactively to respond to several trends. With the rising cacophony of the internet, the rapidly increasing pace at which news spreads, and the shift toward people finding news through their friends on social media channels rather than getting it directly from “official” channels, we decided to approach network campaigns in a new light. It would have been cost prohibitive to buy the attention (through ads or purchasing email lists) or to build a world-class, unbranded media hub. Rather than collect a large set of official MOUs and partner logos to put up on our website, we built a loosely held, distributed network. We’ve reached out to nodes of people who control the communications channels that reach lots of people who are interested in ocean issues. We’ve been scrappy and ruthless about who we put into that distributed network, trying to make it diverse and ensure the reach is big.

Campaigning across a distributed network means that we have that golden ticket of communications - message redundancy - but those redundant messages are all tailored by the individual nodes in our network for their audiences. It’s the job of the individual people in our network to know their audience really well. They take our messages and content and they translate them out to their audiences through the communications channels they maintain. 

As a point of comparison, Upworthy, a similar effort that launched just after Upwell and that shares our goal of making social change content more shareable and “viral,” approached the problem of distribution from a different angle. Rather than build their own network through which they could distribute the content they curate, they built their own media hub, repackaging content under the Upworthy banner, and rapidly scaling up an audience and brand of their own. This model certainly brings eyes to worthy content, but doesn’t (yet) effectively pass on engagement to the organizations and individuals it supports - it retains that engagement for its own channels.

We wanted to build an issue-specific network, and through our networked campaigns, strengthen our network’s members’ and supporters’ potential for future action. 

Below are the values that guide Upwell in building and strengthening our distributed network:

  • Trust: we share only science-based content, ensuring that other science-based institutions know that the content we share is trustworthy.
  • Transparency: we share our campaign and big listening data with our network, so that they can apply our lessons in their own work.
  • Brand-agnostic: we work as willingly with Greenpeace as we do with Deep Sea News, as we do with the Facebook page “I Fucking Love Science.” We will share an organization or individual’s content or campaign, as long as it promotes ocean conservation goals and fits our curation criteria. Promoting content from an array of brands often means releasing control of the message.
  • Issue-agnostic: We aren’t working just on overfishing, or GMO salmon, or catch shares to cultivate the network. We amplify any ocean campaign or content as long as it fits our curation criteria.
  • Personal: We build relationships with humans, not organizations. The liveliest online conversations happen between people, not institutions. We model the authentic behavior of the internet.
  • Generous: We provide small bits of advice and feedback to help our network do better. If their work will get more people talking about the ocean online, it fits with our mission.

***

We've spent the last six weeks reflecting on our pilot project, and want to share our results with you. This post is one in a series of pieces about what we've learned over the last 10 months.

If you like this post check out:

By Matt Fitzgerald

Upwell's Spike Quantification of the Ocean Conversation

on February 28, 2013 - 4:10pm

What is a Spike?

A spike is a significant increase in online attention for a particular topic. When you graph those social mentions, you can actually see that burst of attention ‘spike’ the graph -- hence the name. 

We have been observing spikes in the wild, so to speak, since the beginning of Upwell. It’s a concept that is at least somewhat familiar to anyone who has ever described a video as “viral,” or checked out the list of the most shared articles on The New York Times' website. A lot of people, sharing one thing, over a short time, creates a spike. In the world of Big Listening, that one thing they share can actually be a large number of different things on the same topic, but the general point remains the same. Surges in attention create spikes. So how do you measure one?

Let’s look at a graph of the sustainable seafood conversation from Summer 2012:

Social mentions for Upwell’s Sustainable Seafood keyword group vs. Upwell’s Sustainable Seafood Baseline, June 1, 2012 - August 1, 2012.

It seems pretty clear that there are two spikes in this time period. One appears on June 8, the other on June 16. But what about the other days? How far above the Baseline does social mention volume have to be in order to qualify as a spike? We set out to find a way to compare spikes that would answer the question.

Before we dive in, it’s important to note that social mention volume for a given day is a construct. We decided to use a day as the operating unit of time both because the tools we have available to us use that temporal distinction, and because a day as a unit of measurement is widely understood. That is not to say that one couldn’t decide to measure spikes by the hour, by the minute, or by some other amount of time. We made a conscious decision to build our initial definition of a spike around the day, but infinite other options exist as well. 

A second caveat is that focusing on spikes may obscure what is actually making up the long tail of post volume. Upwell talks about, and quantifies, much of this activity as the Baseline, but there may be other small-to-medium bursts of attention that last more than a day and consequently don’t visually ‘spike’ a graph in the same way (think of a multi-day increase in attention as a hump or a mesa, rather than the taller, more angular spike). Spikes look good on charts, and they help push conversations into the wider internet, but they are not the whole story of an online topic. We long for a day when tools for Big Listening allow us to view topic volume graphs like geologists look at cross-sections of rocks -- that day is not here yet. 

With those caveats out of the way, we can return to our earlier question: what is a spike? Remember from our discussion of Baseline quantification that Upwell’s analysis is designed to inform a set of interventionist activities. We:

  • identify and target high-value items to campaign on;
  • compare the relative size of different ocean sub issues (e.g. sharks vs. whales); and
  • measure the impact of our campaigns.

Spike quantification informs our campaigning and provides one measure of results. We’re not interested in just contributing to the noise around a given ocean topic, we actually want to help a signal to emerge. Spikes are those signals. Evaluating opportunities to campaign becomes a much more concrete activity when you know exactly how many social mentions are needed to break through the regular volume of conversation.

After examining historical social mention volume for our Sustainable Seafood and Overfishing keyword groups, we calculated a variety of statistical thresholds for the exported data and compared the results to our measured campaign and spike data. As discussed earlier, Upwell’s Baseline calculations are derived from the insight that our primary ocean topics each demonstrate a weekly periodicity. Similarly, in calculating potential thresholds for what constitutes a spike, we started with that same insight and then calculated various multiples of standard deviation above the average (mean) value for that day of the week. Because standard deviation measures how spread out the values within a data set are, using it to measure a particular value’s variation from the “normal” value of that data set is a good way to test for a spike. Spikes are visible because they’re outliers, and that’s what the standard deviation threshold(s) tests. 
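The calculation described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not Upwell's actual tooling (their data came from Radian6 exports); the input format, function names, and toy numbers are all hypothetical.

```python
# Sketch of the day-of-week spike threshold: group daily mention counts by
# weekday, then set the threshold at the weekday mean plus k standard
# deviations. Data and names below are illustrative, not Upwell's.
from collections import defaultdict
from datetime import date
from statistics import mean, pstdev

def spike_thresholds(daily_counts, k=1.0):
    """daily_counts: {date: mention_count}. Returns {weekday: threshold}."""
    by_weekday = defaultdict(list)
    for day, count in daily_counts.items():
        by_weekday[day.weekday()].append(count)
    return {wd: mean(vals) + k * pstdev(vals) for wd, vals in by_weekday.items()}

def is_spike(day, count, thresholds):
    """A spike: daily volume meets or exceeds the threshold for that weekday."""
    return count >= thresholds[day.weekday()]

# Toy data: four Mondays of mention counts; the last one clearly breaks out.
counts = {date(2012, 6, 4): 400, date(2012, 6, 11): 420,
          date(2012, 6, 18): 380, date(2012, 6, 25): 700}
thresholds = spike_thresholds(counts, k=1.0)
print(is_spike(date(2012, 6, 25), 700, thresholds))  # True
print(is_spike(date(2012, 6, 11), 420, thresholds))  # False
```

Raising `k` from 1.0 to 2.0 gives the high spike threshold described in the comparison below.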

Day-of-the-week values for the Sustainable Seafood Baseline, along with the Sustainable Seafood mean, and mean +1x, +1.5x and +2x standard deviations (10/17/11 - 1/29/13).

As seen above, the standard deviation thresholds are higher than both the Baseline and the mean. Graphing those thresholds against our campaign and event records revealed that the one standard deviation threshold was the most accurate representation of what we were observing on a day-to-day basis.

Social mentions for Upwell’s Sustainable Seafood keyword group vs. Upwell’s Sustainable Seafood Baseline vs. ‘Mean + 1 Standard Deviation’ Spike Threshold (June 1, 2012 - August 1, 2012)

Upwell defines a spike as occurring when the social mention volume for a given day is at least one standard deviation above the mean of all recorded values for that same day of the week.

While a critic might accuse us of working backwards to find the threshold that gives the best fit, we would actually agree. Sustainable Seafood and Overfishing are the topics that we know the best -- because we’ve monitored them and campaigned on them with the most focus -- and we were looking for a metric that would have practical implications for attention campaigns. As mentioned before, we remain open to other spike quantification approaches, but this one is our preferred option, given what we know right now.

What Does Spike Quantification Tell Us?

Upwell’s spike quantification methodology is in alpha, so to speak, and going forward we will look to improve it. The possibilities for more comparative measures of success are numerous. One thing is certain, however: applying a spike quantification lens to our work is illuminating.

Spike comparison beta methodology?

The following graphs show our first Winter in 2011 and most recent Winter in 2012 working in the Overfishing and Sustainable Seafood conversations. Both one standard deviation and two standard deviation threshold lines are included for reference.

The comparison in time periods for both conversations is dramatic. There is a noticeable increase in spike frequency (the number of spikes), in spike volume or “spikiness” (the number of spikes exceeding two standard deviations), and in the overall volume of conversation in the time period as measured against the Baseline. To be blunt: this is what success looks like.

Sustainable Seafood: Winter 2011

Social mentions by day for Upwell’s Sustainable Seafood keyword group, as compared to the Sustainable Seafood Baseline, as well as to spike thresholds of one standard deviation and two standard deviations above the day-of-the-week mean (10/17/2011 - 1/31/12). Total post volume: 45,255 social mentions over 107 days. Average volume / day: 423 social mentions.

Sustainable Seafood: Winter 2012

Social mentions by day for Upwell’s Sustainable Seafood keyword group, as compared to the Sustainable Seafood Baseline, as well as to spike thresholds of one standard deviation and two standard deviations above the day-of-the-week mean (10/1/2012 - 1/29/2013). Total post volume: 66,456 social mentions over 121 days. Average volume / day: 549 social mentions.

Overfishing: Winter 2011

Social mentions by day for Upwell’s Overfishing keyword group, as compared to the Overfishing Baseline, as well as to spike thresholds of one standard deviation and two standard deviations above the day-of-the-week mean (10/17/2011 - 1/31/12). Total post volume: 211,799 social mentions over 107 days. Average volume / day: 1,979 social mentions.

Overfishing: Winter 2012


Social mentions by day for Upwell’s Overfishing keyword group, as compared to the Overfishing Baseline, as well as to spike thresholds of one standard deviation and two standard deviations above the day-of-the-week mean (10/1/2012 - 1/29/13). Total post volume: 409,692 social mentions over 121 days. Average volume / day: 3,386 social mentions.

***

We've spent the last six weeks reflecting on our pilot project, and want to share our results with you. This post is one in a series of pieces about what we've learned over the last 10 months.

If you like this post check out:

Upwell's Ocean Conversation Baseline Methodology

Upwell's Distributed Network Campaigning Method

Conversation Metrics for Overfishing and Sustainable Seafood
