Social media analytics
Social media analytics is the process of gathering and analyzing data from social networks such as Facebook, Instagram, LinkedIn and Twitter. It is commonly used by marketers to track online conversations about products and companies. One author defined it as "the art and science of extracting valuable hidden insights from vast amounts of semi-structured and unstructured social media data to enable informed and insightful decision making."
Process
There are three main steps in analyzing social media: data identification, data analysis, and information interpretation. To maximize the value derived at each point in the process, analysts may begin by defining a question to be answered. The key questions for data analysis are "Who? What? Where? When? Why? and How?" These questions help determine the proper data sources to evaluate, which in turn affects the type of analysis that can be performed.
Data identification
Data identification is the process of identifying the subsets of available data to focus on for analysis. Raw data is useful only once it is interpreted: after data has been analyzed, it can begin to convey a message, and any data that conveys a meaningful message becomes information. At a high level, unprocessed data passes through the following forms on its way to conveying an exact message: noisy data (relevant and irrelevant data mixed together); filtered data (only the relevant data); information (data that conveys a vague message); knowledge (data that conveys a precise message); and wisdom (data that conveys an exact message and the reason behind it). To derive wisdom from unprocessed data, we need to start processing it, refine the dataset by including only the data we want to focus on, and organize the data to identify information. In the context of social media analytics, data identification means determining "what" content is of interest. In addition to the text of the content, we want to know: Who wrote the text? Where was it found, or on which social media venue did it appear? Are we interested in information from a specific locale? When was it said?
The attributes of data that need to be considered are as follows:
- Structure: Structured data is data that has been organized into a formatted repository, typically a database, so that its elements can be made addressable for more effective processing and analysis. Unstructured data, by contrast, has little or no such formatting.
- Language: Language becomes significant if we want to know the sentiment of a post rather than the number of mentions.
- Region: It is important to ensure that the data included in the analysis comes only from the region of the world on which the analysis is focused. For example, if the goal is to identify clean water problems in India, we would want to make sure that the data collected is from India only.
- Type of Content: The content of the data could be text, photos, audio, or video.
- Venue: Social media content is generated in a variety of venues, such as news sites and social networking sites. Depending on the type of project for which the data is collected, the venue becomes very significant.
- Time: It is important to collect data posted in the time frame that is being analyzed.
- Ownership of Data: Is the data private or publicly available? Is the data copyrighted? These are important questions to address before collecting data.
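The attributes above can be applied as filters when narrowing a raw collection down to the subset of interest. The sketch below is a minimal, hypothetical illustration: the post fields (`lang`, `region`, `time`, `public`) and the sample records are assumptions, not any particular platform's API.

```python
from datetime import datetime

# Hypothetical raw posts; in practice these would come from a platform API.
posts = [
    {"text": "Clean water access is improving", "lang": "en",
     "region": "IN", "time": datetime(2023, 3, 5), "public": True},
    {"text": "Great coffee this morning", "lang": "en",
     "region": "US", "time": datetime(2023, 3, 6), "public": True},
    {"text": "Water quality report released", "lang": "en",
     "region": "IN", "time": datetime(2022, 1, 1), "public": True},
]

def identify(posts, region, start, end):
    """Keep only public posts from the target region and time frame."""
    return [p for p in posts
            if p["public"]
            and p["region"] == region
            and start <= p["time"] <= end]

# Focus on posts from India during 2023 only.
focused = identify(posts, "IN", datetime(2023, 1, 1), datetime(2023, 12, 31))
```

Applying the region and time filters together leaves only the first post, matching the clean-water-in-India example above.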
Data analysis
Data analysis is the set of activities that assist in transforming raw data into insight, which in turn leads to a new base of knowledge and business value. In other words, data analysis is the phase that takes filtered data as input and transforms it into information of value to the analysts. Many different types of analysis can be performed with social media data, including analysis of posts, sentiment, sentiment drivers, geography, demographics, and so on. The data analysis step begins once we know what problem we want to solve and know that we have sufficient data to generate a meaningful result. How can we know whether we have enough evidence to warrant a conclusion? The answer is: we don't, and we can't, until we start analyzing the data. If, while analyzing, we find that the data is insufficient, we return to the first phase and modify the question. If the data is believed to be sufficient for analysis, we need to build a data model.
Developing a data model is the process of organizing data elements and standardizing how the individual data elements relate to each other. This step is important because we want to run a computer program over the data; we need a way to tell the computer which words or themes are important and whether certain words relate to the topic we are exploring.
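A data model in this sense can be as simple as a mapping from themes to the words that signal them, which a program can then apply to each post. The theme names and keyword sets below are invented for illustration; a real model would be built from the project's own vocabulary.

```python
# A minimal data model: map hypothetical theme names to the keywords
# that signal them, then tag each post with the themes it matches.
THEMES = {
    "pricing": {"price", "cost", "expensive", "cheap"},
    "support": {"help", "support", "ticket", "response"},
}

def tag_post(text):
    """Return the sorted list of themes whose keywords appear in the text."""
    words = set(text.lower().split())
    return sorted(theme for theme, keywords in THEMES.items()
                  if words & keywords)

tags = tag_post("The price is fair but support response is slow")
```

The sample post mentions both a pricing keyword and two support keywords, so it is tagged with both themes.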
In analyzing our data, it is handy to have several tools at our disposal to gain different perspectives on the discussions taking place around the topic. The aim is to configure each tool to perform at its peak for a particular task. For example, if we take a large amount of data about computer professionals, say "IT architects", and build a word cloud from it, the largest word in the cloud will no doubt be "architect". This analysis is also about tool usage: some tools may do a good job of determining sentiment, whereas others may do a better job of breaking text down into a grammatical form that lets us better understand the meaning and use of various words or phrases. It is difficult to enumerate every step of an analytical journey; the approach is iterative, and there is no single prescribed way of doing things.
The taxonomy of analyses, and the insight derived from each, are as follows:
- Depth of Analysis: Simple descriptive statistics based on streaming data, ad hoc analysis on accumulated data, or deep analysis performed on accumulated data. This dimension is driven largely by the amount of time available to produce the results of a project, and can be considered a broad continuum ranging from a few hours at one end to several months at the other. This analysis can answer questions such as:
- * How many people mentioned Wikipedia in their tweets?
- * Which politician had the highest number of likes during the debate?
- * Which competitor is gathering the most mentions in the context of social business?
- Machine Capacity: The amount of CPU needed to process data sets in a reasonable time period. Capacity numbers need to address not only the CPU needs but also the network capacity needed to retrieve data. This analysis could be performed as real-time, near real-time, ad hoc exploration, or deep analysis. Real-time analysis in social media is an important tool when trying to understand the public's perception of a topic as it unfolds, allowing for a reaction or an immediate change in course. In near real-time analysis, we assume that data is ingested into the tool at a rate that is less than real-time. Ad hoc analysis is a process designed to answer a single specific question; its product is typically a report or data summary. A deep analysis implies an analysis that spans a long time and involves a large amount of data, which typically translates into a high CPU requirement.
- Domain of Analysis: The domain of the analysis is broadly classified into external social media and internal social media. Most of the time when people use the term social media, they mean external social media. This includes content generated from popular social media sites such as Twitter, Facebook and LinkedIn. Internal social media includes enterprise social network, which is a private social network used to assist communication within business.
- Velocity of Data: The velocity of data in social media can be divided into two categories: data at rest and data in motion. Analysis of data in motion can answer questions such as: How is the sentiment of the general population toward the players changing during the course of a match? Is the crowd conveying positive sentiment about a player who is actually losing the game? In these cases, the analysis is done as the data arrives, and the amount of detail produced is directly correlated with the complexity of the analytical tool or system: a highly complex tool produces more detail. The second type of analysis in the context of velocity is analysis of data at rest, performed once the data is fully collected. This analysis can provide insights such as: Which of your company's products has the most mentions compared to the others? What is the relative sentiment around your products compared to a competitor's product?
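The data-in-motion case can be sketched as a running tally that is updated as each post arrives, rather than after the full collection is complete. The sentiment word lists and sample stream below are toy assumptions standing in for a real streaming pipeline.

```python
# Sketch of data-in-motion analysis: update a running sentiment tally
# as each post "arrives", instead of waiting for the full collection.
POSITIVE = {"great", "win", "love"}
NEGATIVE = {"bad", "lose", "awful"}

def stream_sentiment(stream):
    """Yield a snapshot of the positive/negative tally after each post."""
    tally = {"pos": 0, "neg": 0}
    for post in stream:                     # each item processed on arrival
        words = set(post.lower().split())
        tally["pos"] += len(words & POSITIVE)
        tally["neg"] += len(words & NEGATIVE)
        yield dict(tally)                   # snapshot, not a live reference

snapshots = list(stream_sentiment([
    "love this player",
    "awful first half",
    "great comeback win",
]))
```

Each snapshot shows how crowd sentiment is trending mid-match; a data-at-rest analysis would instead look only at the final tally.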
Information interpretation
The best visualizations are the ones that expose something new about the underlying patterns and relationships contained in the data. Exposing the patterns and understanding them play a key role in the decision-making process. There are mainly three criteria to consider in visualizing data.
- Understand the audience: before building the visualization, set up a goal, which is to convey great quantities of information in a format that is easily assimilated by the consumer of the information. It is important to answer "Who is the audience?" and "Can the audience be assumed to know the terminology used?" An audience of experts will have different expectations than a general audience, so those expectations have to be considered.
- Set up a clear framework: the analyst needs to ensure that the visualization is syntactically and semantically correct. For example, when using an icon, the element should bear resemblance to the thing it represents, with size, color, and position all communicating meaning to the viewer.
- Tell a story: analytical information is complex and difficult to assimilate, thus, the goal of visualization is to understand and make sense of the information. Storytelling helps the viewer gain insight from the data. Visualization should package information into a structure that is presented as a narrative and easily remembered. This is important in many scenarios when the analyst is not the same person as a decision-maker.
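Even a minimal visualization can follow these criteria: scale values to a common frame so relative sizes are assimilated at a glance, and order the items so the chart tells a story (largest first). The product names and counts below are invented for illustration.

```python
# Minimal text "bar chart": scale mention counts to bar lengths so the
# relative volume of each topic can be read at a glance.
mentions = {"ProductA": 120, "ProductB": 45, "ProductC": 90}

def bar_chart(data, width=20):
    """Render counts as '#' bars, largest first, scaled to `width`."""
    peak = max(data.values())
    lines = []
    for name, value in sorted(data.items(), key=lambda kv: -kv[1]):
        bar = "#" * round(width * value / peak)
        lines.append(f"{name:<9} {bar} {value}")
    return "\n".join(lines)

chart = bar_chart(mentions)
```

A real deployment would use a plotting library, but the same framework applies: a clear scale, consistent encoding (bar length = mentions), and an ordering that carries the narrative.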
Role in Business Intelligence
Sentiment Analyser is a technology framework in the field of social BI that leverages Informatica products. It is designed to reflect the shift of business focus from transactional data to behavioral analytics models. Sentiment Analyser enables businesses to understand customer experience and to identify ways to enhance customer satisfaction.
| Common Use-Cases for Social Media Analytics | Required Business Insight | Enabling Social Media Analytics Techniques | Pertinent Social Media Performance Metrics |
|---|---|---|---|
| Social Media Audience Segmentation | Which segments to target for acquisition, growth, or retention? Who are the advocates and influencers for the brand or product? | Social network analysis | Active advocates; advocate influence |
| Social Media Information Discovery | What are the new or emerging business-relevant topics or themes? Are new communities of influence emerging? | Natural language processing; complex event processing | Topic trends; sentiment ratio |
| Social Media Exposure & Impact | What are the brand perceptions among constituents? How does the brand compare against competitors? Which social media channels are being used for discussion? | Social network analysis; natural language processing | Conversation reach; velocity; share of voice; audience engagement |
| Social Media Behavior Inferences | What is the relationship among business-relevant topics and issues? What are the causes of expressed intent? | Natural language processing; clustering; data mining | Interests or preference correlations; topic affinity matrices |
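Two of the metrics in the table, share of voice and sentiment ratio, are simple quotients over mention counts. The sketch below uses invented counts; the formulas (brand mentions over total market mentions, and positive over negative mentions) are common definitions, though vendors vary in the exact variants they report.

```python
def share_of_voice(brand_mentions, all_mentions):
    """Fraction of the total market conversation that mentions the brand."""
    return brand_mentions / all_mentions

def sentiment_ratio(positive, negative):
    """Positive mentions per negative mention."""
    return positive / negative

# Hypothetical counts for one reporting period.
sov = share_of_voice(300, 1200)    # brand holds a quarter of the conversation
ratio = sentiment_ratio(90, 30)    # three positive mentions per negative one
```

Tracked over time, movements in these quotients answer the table's "exposure and impact" questions, such as how the brand compares against competitors.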
Impacts on Business Intelligence
Recent research on social media analytics has emphasized the need to adopt a business-intelligence-based approach to collecting, analyzing, and interpreting social media data. Social media presents a promising, albeit challenging, source of data for business intelligence: customers voluntarily discuss products and companies, giving a real-time pulse of brand sentiment and adoption. Social media is one of the most important tools for marketers in the rapidly evolving media landscape, and firms have created specialized positions to handle their social media marketing. These arguments are in line with the literature on social media marketing, which suggests that social media activities are interrelated and influence each other.
Role in International Politics
The dangers of social media analytics and social media mining in the political arena were revealed in the late 2010s. In particular, the involvement of the data mining company Cambridge Analytica in the 2016 United States presidential election and in Brexit has been a representative case of the dangers of linking social media mining and politics. This has raised questions about data privacy for individuals and the legal boundaries to be set for data science companies operating in politics in the future. Both of the examples listed above demonstrate a future in which big data can change the game of international politics, and it is likely that politics and technology will evolve together throughout the next century. In the Cambridge Analytica cases, the effects of social media analytics resonated across the globe through two major world powers, the United States and the United Kingdom.
2016 United States Presidential Election
The scandal that followed the 2016 American presidential election involved a three-way relationship between Cambridge Analytica, the Trump campaign, and Facebook. Cambridge Analytica acquired the data of over 87 million Facebook users without their knowledge and analyzed it for the benefit of the Trump campaign. By creating thousands of data points on 230 million U.S. adults, the data mining company had the potential to identify which individuals could be swayed toward the Trump campaign, and then to send messages or advertisements to those targets to influence their thinking. Specific target voters could thus be exposed to pro-Trump messages without even being aware of the political influence being exerted on them. This form of targeting, in which select individuals are shown an above-average amount of campaign advertising, is referred to as "micro-targeting". There remains great controversy over measuring the amount of influence micro-targeting had in the 2016 elections; as of the late 2010s, the impact of micro-targeted ads and social media data analytics on politics, a newly arising field of technology, remained unclear. Beyond the breach of user privacy, such data mining and targeted marketing undermined the public accountability to which social media entities had been subject, distorting the democratic election system and allowing it to be dominated by platforms of user-generated content that polarized the media's message.