Short Report | Open Access
An extended analysis of factors contributing to opinion formation in a bipartite society of mavens and laypeople
Computational Cognitive Science volume 2, Article number: 2 (2016)
Abstract
Background
Communication and sharing of opinions play a crucial role in shaping the views of a person in a society. Interactions with other people enable a person to interpret their views and articulate his own opinion. Ordinarily, people tend to change their opinions in compliance with those having significantly higher expertise, thereby leading to a bipartite society of two intellectual groups: mavens (highly knowledgeable and confident people) and laypeople (diffident people with little or no experience and knowledge). However, the sharing of information in a group is influenced by the weight of advice with which people consider the opinions of others and by several control factors such as the interaction procedure adopted, the possibility of mutual exchange of information, and the time at which information is updated. Moreover, the effects of these factors are observable in both physical and digital societies during opinion formation. This study is built upon the prior work of Moussaïd et al. (PLoS ONE 8:78433, 2013).
Findings
In this study, we use agent-based modeling to analyze five types of interaction (including ideal cases) using an integrated selection process to empirically investigate the influence of the above-mentioned control factors in such a society. Through the simulations, we identify the minimum number of iterations required to reach an agreement in such a group of people and the critical proportion of each group required for its effect to become observable in opinion formation under different scenarios.
Conclusions
We observe that increasing the weight of advice has a positive effect on the quality of the consensus reached as well as on the speed of convergence of the crowd towards an opinion. Furthermore, the interaction procedure adopted plays a dominant role in demarcating the critical proportions at which the groups dominate the consensus.
Findings
Collective decision making and opinion formation have long been observed among humans as well as in animal groups, and the environment plays an important role in them (Conradt and Roper 2003; 2005; Dyer et al. 2008; Fisher et al. 2009). An opinion can be a quantification of an abstract notion such as a belief, norm, value, or behavior that people share with each other in a group and that is relevant to the question at hand. During social interactions, people tend to change their opinions because of uncertainty in their judgments. The impact of social influence on opinion formation has been examined through different models and interdisciplinary theories that have been extensively investigated by researchers from diverse domains (Altman 1973; Glomb and Liao 2003; Lewis et al. 2011; Mercken et al. 2007).
In this study, we examine different types of interaction procedures adopted in a bipartite society of mavens (experts) and laypeople (people completely unfamiliar with the subject) to analyze their impact on collective opinion formation. Furthermore, people tend to change their views while interacting with others having significantly higher credence, which is one of the reasons a group reaches an agreement (consensus) (King et al. 2011).
This study is inspired by the bounded confidence (BC) model (Deffuant et al. 2000; Hegselmann and Krause 2002; Weisbuch 2004) and expounds the previous work of Moussaïd et al. (2013). While analyzing the effect of social influence through simulations, the information about people (including their opinion and credence levels) is stored in a repository. Moussaïd et al. recorded the opinion and credence of participants in their first experiment in such a repository. They distributed this recorded data in the successive iterations of their second experimental study to analyze the effect of social influence. The two experiments involved different numbers and sets of people. Their results were based on random interactions of people, and the recorded data in the repository remained the same throughout the study.
In our investigations, however, there is only a single experimental simulation in which the participants (agents) remain the same throughout, and the inclusion of control factors (Section “Control factors”) makes the opinion formation dynamically adaptive. Moreover, in real world interactions, people form new opinions while interacting with each other and share these newly formed opinions within an iteration. Thus, to incorporate this phenomenon and to bring the model closer to real life scenarios, we execute the update process on the repository itself.
In the Hegselmann and Krause simulation model, an agent i takes multiple agents j into account, satisfying a certain threshold, \(\varepsilon_i\), on the opinion difference, while making a decision. Thus, each agent has a set of agents with which it interacts, given by Eq. 1, in which x represents the opinion of an agent and N gives the total number of agents. In our model, however, an agent i takes a single agent j into account while making a decision, subject to two thresholds: the first is given by the difference in opinion, \(\alpha_i\), and the second by the difference in credence level, \(\beta_i\). This single agent is selected, based on the type of interaction (TOI; Section “Control factors”), from the set given by Eq. 2, in which x and y represent the opinion and credence respectively of an agent, z specifies the type of interaction, and N gives the total number of agents.
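Equations 1 and 2 are referenced above but not reproduced in this extract; based on the definitions just given, a plausible reconstruction (the set names \(I\) and \(J_z\) are ours) is:

$$ I(i) = \left\{\, j \in \{1, \dots, N\} \;:\; |x_i - x_j| \le \varepsilon_i \,\right\} \tag{1} $$

$$ J_z(i) = \left\{\, j \in \{1, \dots, N\} \;:\; |x_i - x_j| \le \alpha_i \ \text{and}\ |y_i - y_j| \le \beta_i \,\right\} \tag{2} $$

A single target is then selected from the set in Eq. 2 according to the type of interaction z.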
The data analyzed in this report are generated from controlled computerized simulations of social interactions with the help of a model developed in the NetLogo simulation environment (Wilensky 2014). The results obtained show a clear demarcation of the minimum critical proportion of the two intellectual groups required for their dominating effect to become observable in collective opinion formation, as well as the number of iterations generally required to reach an agreement.
Materials and methods
Model development
We created an agent-based model (ABM) to empirically investigate the impact of the control factors (Section “Control factors”) during social interactions. An ABM uses agents whose interactions give rise to an emergent, evolving effect in the system as a whole (Bonabeau 2002).
In this model, an agent acts as a person whose properties and their use are given in Table 1. The opinion (O) of a person is a real number. The credence (C) takes integer values from 1 to 6; higher values of C correspond to greater confidence of the individual. People with the lower credence levels of 1, 2, and 3 act as laypeople, and those with credence 6 act as mavens. People with credence levels 4 and 5 are not present at the start of the interactions because they belong to neither group. The correct answer is given by a key (K) in the simulation. All mavens share one initial opinion and all laypeople share another, based on their respective levels of knowledge, so that the effect of each group during the interactions can be quantified.
The total number of people in each simulation is given by N. The simulations start with the laypeople in complete majority. In a simulation, consecutive experiments are executed with the proportion of mavens increasing by a constant factor. Each experiment involves iterations in which selected agents share their opinion and credence. The experiment continues until it reaches an upper limit set on the number of iterations or a stationary state. A stationary state is said to be reached if all the people retain their opinion and no change in their respective credence level is observed for 15 consecutive iterations. An upper limit on the number of iterations is used in this study because in some cases people tend to adjust their opinion indefinitely.
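As an illustration of this stopping rule, the following minimal Python sketch (not the authors' NetLogo code; the agent attributes and helper names are hypothetical) checks whether neither opinions nor credence levels have changed for 15 consecutive iterations:

```python
# Hypothetical sketch of the stationary-state test described above.
STABLE_ITERATIONS = 15  # consecutive unchanged iterations required

def snapshot(agents):
    """Record every agent's (opinion, credence) pair after an iteration."""
    return [(a.opinion, a.credence) for a in agents]

def is_stationary(history):
    """True if the last 15 recorded snapshots are all identical, i.e. no
    agent changed opinion or credence for 15 consecutive iterations."""
    if len(history) < STABLE_ITERATIONS:
        return False
    recent = history[-STABLE_ITERATIONS:]
    return all(s == recent[0] for s in recent)
```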
An iteration runs in steps and continues until the step count reaches the step-limit given in Section “Control factors”. In each step, two persons are selected to act as Source (S) and Target (T). The source is the person who receives the information and makes the decision; the target is the person whose information the source receives. A person revises his opinion in three possible ways (Lorenz et al. 2011; Yaniv 2004):
1. Retain: Totally discards the received opinion and thus retains his initial opinion, i.e. the one held prior to receiving the new information.

2. Adjust: Adjusts his opinion to lie between his original opinion and that of T, based on the weight of advice \(\omega \in (0, \frac {1}{2}]\) (Hirscher 2014). The changed opinion is given by Eq. 3:

$$ changedOpinion(S) = O_{S} + \omega(O_{T} - O_{S}) \tag{3} $$

3. Inherit: Completely ignores his personal opinion and inherits the opinion he receives from T.
Thus, the weight of advice, ω, depends upon the opinions themselves and the model becomes nonlinear (Hegselmann and Krause 2002). Similar to the model developed by Hegselmann and Krause, an agent is influenced by another agent, i.e. he either adjusts his opinion or inherits the other agent’s opinion, only if the difference between their credence levels satisfies a certain threshold given in Table 2. This behavior of a person after receiving an opinion and credence value is adapted from the published study by Moussaïd et al. (2013) and customised to initialize and implement the modeling parameters. It is further explained below.
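A minimal sketch of these three revision modes, assuming hypothetical agent objects with `opinion` and `credence` attributes and treating the Table 2 threshold (not reproduced here) as a plain parameter, could look as follows:

```python
# Hypothetical sketch of the retain / adjust / inherit behavior (Eq. 3).
def revise_opinion(source, target, mode, omega, credence_threshold):
    """Return the source's new opinion after receiving the target's
    opinion and credence. `omega` is the weight of advice in (0, 0.5]."""
    # Influence only occurs if the credence difference satisfies the
    # threshold (the exact form of this test comes from Table 2; an
    # absolute-difference check is assumed here for illustration).
    if abs(source.credence - target.credence) > credence_threshold:
        return source.opinion                    # no influence: retain
    if mode == "retain":
        return source.opinion                    # discard received opinion
    if mode == "adjust":                         # compromise, Eq. 3
        return source.opinion + omega * (target.opinion - source.opinion)
    if mode == "inherit":
        return target.opinion                    # adopt the target's opinion
    raise ValueError(f"unknown revision mode: {mode}")
```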
The source (S) receives the information of the target (T) and changes his credence (changedCredence(S)) based on the normalized difference between opinions, \(\Delta N(O_{ST}) = |O_{S} - O_{T}|/O_{S}\), and the difference between credence levels, \(\Delta C_{ST} = C_{S} - C_{T}\), as follows (a minimal code sketch of these rules is given after the list):
- Near: If \(\Delta C_{ST} \le -4\), credence increases by one level. If \(0 \ge \Delta C_{ST} > -4\), credence increases by one level, but with a probability of 0.5.
- Intermediate: Credence increases by one level only if \(\Delta C_{ST} \le -3\), with a probability of 0.5.
- Far: Credence decreases by one level if \(\Delta C_{ST} \ge 4\).
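A minimal Python sketch of these credence-change rules (not the authors' code; the opinion-distance category is assumed to be computed elsewhere from \(\Delta N(O_{ST})\), and clamping the credence to the range 1–6 is our assumption):

```python
import random

def changed_credence(c_source, c_target, category):
    """Return the source's new credence level after an interaction.
    `category` is "near", "intermediate" or "far" (from the normalized
    opinion difference); delta_c is C_S - C_T as in the text."""
    delta_c = c_source - c_target
    if category == "near":
        if delta_c <= -4:
            return min(c_source + 1, 6)                 # always increase
        if -4 < delta_c <= 0 and random.random() < 0.5:
            return min(c_source + 1, 6)                 # increase, p = 0.5
    elif category == "intermediate":
        if delta_c <= -3 and random.random() < 0.5:
            return min(c_source + 1, 6)                 # increase, p = 0.5
    elif category == "far":
        if delta_c >= 4:
            return max(c_source - 1, 1)                 # decrease one level
    return c_source                                      # otherwise unchanged
```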
Control factors
1. Mutual Exchange (ME) Property: This refers to the mutual exchange of information between S and T. If mutual exchange occurs, the two persons share their opinion and credence with each other simultaneously and revise their opinions. The persons who act as S and T when mutual exchange occurs do not participate again in that iteration; thus, the step-limit is given by \(\frac {N}{2}\) in this case. On the other hand, if there is no mutual exchange of information, only the source alters his views and the step-limit is set to N. If there is no possible target for a source, the source receives his own opinion and credence. Figure 1 illustrates this scenario in an interaction with a population of six.

2. Time of Update (TU): This decides the time at which the information is updated in the repository, i.e. when changedOpinion and changedCredence are set as the opinion and credence respectively of the agent(s). This is demonstrated in Fig. 1. It can be classified into two types:

(a) Concurrent (CON): at the end of an iteration.

(b) Sequential (SEQ): after each step.

3. Type of Interaction (TOI); a minimal selection sketch follows this list:

(a) Nearest: This selects the target that holds the opinion closest to that of the source (Algorithm 1).

(b) Random: This randomly selects the target from the group (Algorithm 2).

(c) Neighbor: This identifies a target from the Moore neighborhood of the source (Algorithm 3).

(d) Optimization: This finds the source and target such that the difference between the opinion of the source and the key becomes minimum. It is the ideal case for social interactions (Algorithm 5).

(e) De-optimization: This finds the source and target such that the difference between the opinion of the source and the key becomes maximum. It is the worst case for social interactions (Algorithm 5).
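The original Algorithms 1–5 are not reproduced here; the following Python sketch only approximates what each type of interaction selects for a given source (Optimization and De-optimization are crudely reduced to picking the candidate whose opinion is closest to, or farthest from, the key, whereas the paper optimizes over source-target pairs; all names are illustrative):

```python
import random

def select_target(source, agents, toi, key=None, moore_neighbors=None):
    """Pick a target for `source` according to the type of interaction."""
    candidates = [a for a in agents if a is not source]
    if not candidates:
        return source                      # no target: source hears himself
    if toi == "nearest":                   # closest opinion to the source
        return min(candidates, key=lambda a: abs(a.opinion - source.opinion))
    if toi == "random":                    # any member of the group
        return random.choice(candidates)
    if toi == "neighbor":                  # restricted to the Moore neighborhood
        local = [a for a in candidates if a in moore_neighbors]
        return random.choice(local) if local else source
    if toi == "optimization":              # pulls the source towards the key
        return min(candidates, key=lambda a: abs(a.opinion - key))
    if toi == "de-optimization":           # pushes the source away from the key
        return max(candidates, key=lambda a: abs(a.opinion - key))
    raise ValueError(f"unknown type of interaction: {toi}")
```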
The underlying mechanism and heuristic used in our model are depicted in Fig. 2. The system has a repository that contains all the data about participants and a process called the Target Selector that selects a target for the source in each step, using the algorithms under the set configuration. After this selection, all decisions are made based on the control factors. At the end of an iteration, the experiment either terminates or a new source and target are selected to interact. If the experiment terminates, the proportion of mavens is incremented and a new experiment starts.
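Under the stated assumptions (hypothetical helper names, an illustrative iteration cap and maven increment), this overall flow can be sketched as:

```python
def run_simulation(initialise_population, run_one_step, n_agents=100,
                   mutual_exchange=False, max_iterations=500,
                   maven_step=0.05, stable_window=15):
    """Sweep the maven proportion; each experiment runs until either the
    iteration limit or a stationary state (no change for 15 iterations)."""
    step_limit = n_agents // 2 if mutual_exchange else n_agents   # ME property
    proportion = 0.0
    while proportion <= 1.0:
        agents = initialise_population(n_agents, proportion)
        history = []
        for _ in range(max_iterations):
            for _ in range(step_limit):        # Target Selector + control factors
                run_one_step(agents)
            history.append([(a.opinion, a.credence) for a in agents])
            recent = history[-stable_window:]
            if len(recent) == stable_window and all(s == recent[0] for s in recent):
                break                          # stationary state reached
        # (record the stationary state number and collective opinion here)
        proportion = round(proportion + maven_step, 2)
```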
Results and discussion
We determined the number of iterations to be an effective metric of the consensus convergence of the system. A stationary state was achieved in most of the experiments, and the system was therefore considered stable. The instability in the others can be attributed to the observation that a very few people continued to adjust their opinion indefinitely between the two poles created by the maven and laypeople opinions. In all configurations, either mavens or laypeople must be present above a critical proportion to dominate the opinion formation process, which engenders two critical points. Between these two points, a transition phase occurs in which the collective opinion of the crowd shifts from the laypeople towards the mavens or vice versa, but lies between the initial opinions of the mavens and the laypeople. The collective opinion in Figs. 4, 6 and 8 is given by the average of the opinions of all agents in the system.
The results shown here belong to a group of 100 people. We use real numbers as opinion values to mathematically formulate the opinion formation process; such values have also been used in previously published studies (Hirscher 2014). Mavens and laypeople had an initial opinion of 600 and 50 respectively, and the value of the key was fixed at 550. These values are randomly chosen but follow the constraints defined in (Moussaïd et al. 2013). The observations are made at two weights of advice, viz. 0.3 and 0.5, which conform to the valid range (Deffuant et al. 2000; Hirscher 2014). The results discussed here are for the weight of advice fixed at 0.3. Figures 3, 4, 5, 6, 7 and 8 and Table 3, for the different control factor configurations, are generated from an average of 30 simulations because the collective opinion and stationary states vary owing to the interactions of people with different credence levels.
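For reference, the reported setup can be collected in one place (values taken from the text above; the container itself is merely illustrative):

```python
from dataclasses import dataclass

@dataclass
class ReportedSetup:
    n_agents: int = 100              # group size
    maven_opinion: float = 600.0     # initial opinion of every maven
    layperson_opinion: float = 50.0  # initial opinion of every layperson
    key: float = 550.0               # correct answer
    weight_of_advice: float = 0.3    # also examined at 0.5
    repetitions: int = 30            # simulations averaged per configuration
```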
The observations pertaining to these scenarios generated under different control factor configurations are shown below.
Without mutual exchange and concurrent TU
Figures 3 and 4 show the graphs for stationary state numbers and opinion formations respectively under this configuration.
In the Nearest type of interaction, the stationary state number decreases very slowly while the proportion of mavens is between 0 and 0.8. It decreases more quickly between 0.8 and 0.95, and then falls steeply, which indicates that mavens have a major impact on the stationary state above a proportion of 0.8. In Random and Neighbor, the introduction of mavens creates a great disturbance in the group. The stationary state number grows rapidly until the proportion of mavens becomes 0.15 in both cases. However, the stationary state number decreases at a faster rate than it increased in Random, with two points of major slope change at 0.25 and 0.35, whereas it remains high in Neighbor until the proportion of mavens is 0.3 and then begins to drop slowly until the proportion of mavens is 0.75. These interactions indicate that people may form a consensus quickly if they are allowed to interact freely with no spatial limitations. The collective opinions formed under both interactions show that the critical proportion of mavens required for a consensus closer to the key is at least 0.25 and 0.45 for Random and Neighbor respectively. The stationary state plots for Optimization and De-Optimization reveal that an agreement can be achieved in a much shorter time in either the best-case or the worst-case scenario under this configuration. By increasing the weight of advice from 0.3 to 0.5, the stationary state number for De-Optimization becomes closer to that observed for Optimization, which remains unaffected by the change. Moreover, the observed collective opinion under Optimization shows that the consensus shifts towards the mavens as soon as mavens enter the group. On the contrary, De-Optimization shows that the consensus may shift towards the laypeople as soon as they are introduced into the group.
Without mutual exchange and sequential TU
Figure 5 shows the stationary state numbers for the different types of interaction under this configuration; only minor changes are observed for each type of interaction. In Nearest, the stationary state number decreases slowly until the proportion of mavens becomes 0.7, then decreases at a faster rate until the proportion becomes 0.95, and then falls steeply. Similar to the concurrent revision, the introduction of mavens engenders disturbance in the group, and the stationary state number increases drastically while the proportion of mavens goes from 0 to 0.15 in Random and from 0 to 0.2 in Neighbor. The stationary state number remains high between 0.15 and 0.2 under Random and then decreases rapidly. In Neighbor, the stationary state number becomes stable when the proportion of mavens is at least 0.6.
If it is assumed that the decisions of people take them only closer to the key, the proportion of possible targets with opinions farther from the key decreases after each step in an iteration, which results in quick convergence. This ideal scenario can be realized under Optimization, in which the stationary state number is reduced by 1. This shows that sequential updates allow slightly faster convergence of the crowd if people are strictly led closer to the key during interaction. Moreover, the Optimization and De-Optimization algorithms take comparatively much more time than the other types of interaction (observable in the NetLogo model), and this time increases exponentially with the number of people. The stationary state numbers observed under both times of update are very low, which conveys that each iteration requires a longer time to complete. Thus, reducing the stationary state number by 1 reduces the total time for the interaction to complete by a significant amount. However, incorrect opinions may also travel faster during sequential updates. This effect is not present under the concurrent update of opinions, but concurrent updates have a tantamount drawback: even if the newly formed opinion of an agent could bring the opinions of other agents closer to the key within an iteration, this does not happen, since it is the initial opinion and credence that are shared with other agents in this scenario. This is demonstrated by the plots for Nearest, Neighbor, and Random (Figs. 5 and 6), which appear similar to those for the concurrent time of update (Figs. 3 and 4). The decision of a person can also move his opinion farther from the key. In the worst case, i.e. De-Optimization, the stationary state numbers are observed to increase linearly with the proportion of mavens, in contrast to concurrent updates where the stationary states remain constant throughout, and a steep fall is obtained when the proportion of mavens is between 0.95 and 1 in both cases. Thus, concurrent updates are found to be better in terms of the time needed for convergence under the worst-case scenario.
The collective opinions observed under this configuration are shown in Fig. 6. The critical proportions of mavens and laypeople required for their respective effects to be observable are not affected by changing the time of update. However, for Random, the transition region shifts to the right by 0.05, conveying that more mavens are needed under this configuration for a good quality consensus.
With mutual exchange and concurrent TU
By allowing mutual exchange of information in the system, new critical points were observed for Random and Neighbor (3). The rate at which the stationary state number increased was significantly higher than without mutual exchange in Neighbor. However, it was much slower in Random, in which it rose to a maximum of 65 iterations, whereas it was as high as 203 without mutual exchange (3). Thus, the time needed for the crowd to converge to a concerted opinion is found to be significantly lower under this configuration for the Random type of interaction. The reason for the decreased stationary state numbers can be attributed to the following observation. There are three categories that a person can belong to during an interaction: mavens (credence 6), laypeople (credence 1), or others (credence 2 to 5). Thus, there is a probability associated with a person of interacting with another person from any of these categories. Now, as the opinions of mavens lie closest to the key and they have the highest credence, it is most beneficial to interact with the mavens. When there is mutual exchange of information, the probability of interaction with a maven increases with the number of steps in an iteration, since people once selected as source and target cannot interact again and initially the probability of interaction with a layperson is highest. However, if there is no mutual exchange and the time of update is concurrent, the probability of interaction with a maven remains constant within an iteration because the credence and opinion values of people are updated only at the end of the iteration. On the other hand, if the time of update is sequential and there is no mutual exchange, then the proportion of mavens can either increase or decrease within an iteration because the changes in opinion and credence are reflected within the iteration; thus, the probability of interaction with a maven is flexible in this case. In both Random and Neighbor, the stationary state number increased until the proportion of mavens became 0.05, remained similar until 0.1, and then decreased. The stationary state number became stable when the proportion of mavens was at least 0.4 and 0.65 in Random and Neighbor respectively. Overall, with the introduction of mavens, the group was able to reach a consensus in a much smaller number of iterations (compare Figs. 3, 5, and 7 under this configuration). Also, the transition region started earlier, at a maven proportion of 0.05, and extended to 0.3 and 0.4 for Random and Neighbor respectively (Fig. 8). Optimization and De-Optimization in this configuration were found to be computationally intractable and were therefore not considered.
Overall, averaging the opinions during decisions tends to result in faster convergence of the crowd, with opinion formations nearer to the key (Figs. 3, 5, 7). Under Nearest, the linear curve of the collective opinion across all configurations suggests that people with similar opinions form clusters, gain full credence, and stick with their opinion until the end, however erroneous it might be. The transition region is spread over the entire possible range of maven proportions; thus, this type of interaction is the worst case for either mavens or laypeople to influence the crowd. Table 3 shows the average data for 30 simulations. Through the analysis of the different control factors under the Random case, it can be inferred that a consensus can be reached much more quickly by exchanging information mutually. Fewer mavens are required with concurrent TU, but fewer iterations are needed with sequential TU to reach a consensus. The plots for Optimization and De-Optimization (Figs. 3, 4, 5 and 6) indicate that it is possible for either mavens or laypeople to attract the consensus at any proportion, since under Optimization the collective opinion is completely biased towards the mavens, whereas under De-Optimization it is completely biased towards the laypeople. However, no simulation under the Random or Neighbor type of interaction imitated this behavior, which indicates that the probability of such a scenario is very slim. Therefore, in general, the mavens or laypeople must exist above a critical proportion to dominate the collective opinion if the interaction is Random or Neighbor (Table 3).
Conclusion
Social influence is prevalent in the formation of public consensus on various issues and quotidian activities at both microscopic and macroscopic levels, and a massive surge has been observed in studies related to this area from the varied perspectives of philosophy and technology. This study and the model accompanying it can be used to estimate collective opinion formation in a crowd. Results obtained through the simulations reveal that the stationary state numbers in all types of interaction decrease when the weight of advice is increased from 0.3 to 0.5. Moreover, mutual exchange of information is beneficial during opinion formation under Random and Neighbor since it leads to agreement more quickly. In Random, if mutual exchange of information is not possible, then the time of update should be concurrent if the proportion of mavens is low, and sequential otherwise, to reach an agreement quickly. However, in Neighbor, if mutual exchange of information is not possible, then the time of update should be sequential if the proportion of mavens is low, and concurrent otherwise, to reach an agreement quickly. We do not consider negative influences (\(\omega < 0\) or \(\omega > 1\)) that could lead to a highly unpredictable consensus. Also, some opinions might be randomly scattered within the system, which can have an impact on the consensus reached. The effect of these determinants on opinion formation requires further research.
Abbreviations
CON, Concurrent; SEQ, Sequential; ME, Mutual Exchange; TOI, Type of Interaction; TU, Time of Update; S, Source; K, Key; T, Target; C, Credence; O, Opinion
References
Altman, I (1973). Reciprocity of Interpersonal Exchange. Journal for the Theory of Social Behaviour, 3(2), 249–261. doi:10.1111/j.1468-5914.1973.tb00325.x.
Bonabeau, E (2002). Agent-based modeling: Methods and techniques for simulating human systems. Proceedings of the National Academy of Sciences, 99(Supplement 3), 7280–7287. doi:10.1073/pnas.082080899.
Conradt, L, & Roper, TJ (2003). Group decision-making in animals. Nature, 421(6919), 155–158. doi:10.1038/nature01294.
Conradt, L, & Roper, TJ (2005). Consensus decision making in animals. Trends in Ecology & Evolution, 20(8), 449–456. doi:10.1016/j.tree.2005.05.008.
Deffuant, G, Neau, D, Amblard, F, Weisbuch, G (2000). Mixing beliefs among interacting agents. Advances in Complex Systems, 03(01n04), 87–98. doi:10.1142/S0219525900000078. http://www.worldscientific.com/doi/pdf/10.1142/S0219525900000078.
Dyer, JRG, Ioannou, CC, Morrell, LJ, Croft, DP, Couzin, ID, Waters, DA, Krause, J (2008). Consensus decision making in human crowds. Animal Behaviour, 75(2), 461–470. doi:10.1016/j.anbehav.2007.05.010.
Fisher, B, Turner, RK, Morling, P (2009). Defining and classifying ecosystem services for decision making. Ecological Economics, 68(3), 643–653. doi:10.1016/j.ecolecon.2008.09.014.
Glomb, TM, & Liao, H (2003). Interpersonal aggression in work groups: Social influence reciprocal and individual effects. Academy of Management Journal, 46(4), 486–496. doi:10.2307/30040640.
Hegselmann, R, & Krause, U (2002). Opinion dynamics and bounded confidence models, analysis, and simulation. Journal of Artificial Societies and Social Simulation, 5(3). http://jasss.soc.surrey.ac.uk/5/3/2.html.
Hirscher, T (2014). Consensus formation in the Deffuant model. Göteborg: Institutionen för matematiska vetenskaper, Chalmers tekniska högskola. http://publications.lib.chalmers.se/publication/195686.
King, AJ, Cheng, L, Starke, SD, Myatt, JP (2011). Is the true ‘wisdom of the crowd’ to copy successful individuals? Biology Letters, 8(2), 197–200. doi:10.1098/rsbl.2011.0795.
Lewis, K, Gonzalez, M, Kaufman, J (2011). Social selection and peer influence in an online social network. Proceedings of the National Academy of Sciences, 109(1), 68–72. doi:10.1073/pnas.1109739109.
Lorenz, J, Rauhut, H, Schweitzer, F, Helbing, D (2011). How social influence can undermine the wisdom of crowd effect. Proceedings of the National Academy of Sciences, 108(22), 9020–9025. doi:10.1073/pnas.1008636108.
Mercken, L, Candel, M, Willems, P, de Vries, H (2007). Disentangling social selection and social influence effects on adolescent smoking: the importance of reciprocity in friendships. Addiction, 102(9), 1483–1492. doi:10.1111/j.1360-0443.2007.01905.x.
Moussaïd, M, Kämmer, JE, Analytis, PP, Neth, H (2013). Social Influence and the Collective Dynamics of Opinion Formation. PLoS ONE, 8(11), 78433. doi:10.1371/journal.pone.0078433.
Weisbuch, G (2004). Bounded confidence and social networks. The European Physical Journal B - Condensed Matter and Complex Systems, 38(2), 339–343. doi:10.1140/epjb/e2004-00126-9.
Wilensky, U (2014). NetLogo. Technical report, Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL (1999). http://ccl.northwestern.edu/netlogo/.
Yaniv, I (2004). Receiving other people’s advice: Influence and benefit. Organizational Behavior and Human Decision Processes, 93(1), 1–13.
Acknowledgements
We would like to thank the anonymous reviewers for their extremely insightful and valuable comments that enabled us to enhance the quality of our manuscript.
Authors’ contributions
All the authors contributed equally to the paper. All authors read and approved the final manuscript.
Authors’ information
Shubham Sharma : shubhamsharma67@gmail.com
Rinkaj Goyal: rinkajgoyal@gmail.com
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Verma, G., Sharma, S. & Goyal, R. An extended analysis of factors contributing to opinion formation in a bipartite society of mavens and laypeople. Comput Cogn Sci 2, 2 (2016). https://doi.org/10.1186/s40469-016-0009-1