What thirty years of research on organizational trust has to say about why some virtual communities feel safe and others feel dangerous — and how to build the kind that lasts.

The thing nobody tells you about trust
Here’s a thing you’ve probably experienced but never had a vocabulary for. You walk into a new online community — a Discord server, a game guild, a forum, a virtual world — and within about thirty seconds, before anyone has said a word to you, you have already made a judgment about whether you trust this place. Not whether you like it. Whether you trust it. Whether you are willing to put a small piece of yourself on the table and see what happens.
You can’t quite name what triggered the judgment. Something about the tone of the welcome message. Something about how organized the channels look. Something about whether the moderator names are visible or hidden. Something about whether the recent conversations feel warm or performative. You’re scanning for signals, dozens of them, faster than you can consciously process, and the aggregate of those signals produces a feeling that sits somewhere between “I could belong here” and “I should leave.”
What you just did is what trust researchers have been studying since the mid-1990s, and they’ve gotten surprisingly far. The most cited paper in the field — one of the most cited papers in all of organizational science, with thousands of citations across psychology, business, law, medicine, and computer science — was published in 1995 by three researchers at Notre Dame and Purdue: Roger Mayer, James Davis, and F. David Schoorman. Their paper, “An Integrative Model of Organizational Trust,” proposed that when a person decides whether to trust another person or institution, they are evaluating three things: ability (can they do what they say they can do?), benevolence (do they care about my wellbeing?), and integrity (do they follow a consistent set of principles?). If all three are present, trust forms. If any one is missing, something feels wrong — even if the person can’t explain what.
The model was originally designed for workplaces. A manager trusts an employee. A customer trusts a company. But the framework has since been applied to governments, medical relationships, educational institutions, and — increasingly — virtual communities. And when I looked at it next to the Four Pillars of Virtual Trust that form the core of this lesson, what struck me was how cleanly the two frameworks align, and how the places where they don’t align turn out to be the most interesting parts.
Let me walk you through it.
Pillar One: Contractual Trust — or, Integrity by Another Name
Do people keep their promises here? Are the rules clear? Are the rules enforced consistently? Can I depend on the people around me to do what they said they’d do?
This is the foundation of trust in any community, virtual or physical, and it maps directly onto what Mayer, Davis, and Schoorman called integrity. In their model, integrity is the perception that the trusted party adheres to a set of principles the trusting party finds acceptable. It doesn’t mean the two parties have the same principles. It means the trusted party has principles and follows them consistently. Consistency is the word that comes up again and again in the literature. A leader (or a moderator, or a system) who is fair on Monday and arbitrary on Tuesday destroys contractual trust faster than one who is harsh but predictable.
The research on this is deep and has been replicated across many contexts. Davis, Schoorman, Mayer, and Tan published a follow-up study in 2000 that tested the model empirically, and they found that perceived integrity was among the strongest predictors of actual trusting behavior. Consistency, a reputation for honesty, and fairness were the components that contributed most to the perception of integrity.
In virtual communities, this has a specific design implication that is easy to underestimate: the rules have to be visible and the enforcement has to be consistent. A community with unwritten rules that everyone “just knows” is a community that newcomers cannot trust, because the newcomer has no way to evaluate whether the community’s principles will be applied to them fairly. Written rules — clearly stated, easily found, consistently applied — are not bureaucracy. They are the architectural equivalent of walls. They tell the newcomer: this is where the boundaries are, and they are the same boundaries for everybody. That is the foundation on which everything else gets built.
Pillar Two: Communication Trust — the Pillar the Model Implies But Doesn’t Name
The original model treats communication mostly as a vehicle through which ability, benevolence, and integrity become visible. In a virtual community, though, communication isn’t just the vehicle — it’s the road itself. When the only way people can experience each other is through what they choose to say, how they say it, and what they choose to keep private, communication stops being a background condition and becomes its own category of trust. That’s why it earns its own pillar here.
Communication trust is a dual obligation: the obligation to share what needs sharing, and the obligation to protect what needs protecting. People need to know they’ll be told what they need to know when they need to know it. And they also need to know that what they share in confidence will stay in confidence.
The academic trust literature handles this mostly under the umbrella of integrity, but there’s a more recent body of work on trust in virtual communities specifically that carves out communication as a distinct mechanism. A study examining perceived interactivity in virtual communities found that connectedness and reciprocity — the quality and frequency of communication between members — were significant antecedents to interpersonal trust between members, separate from the trust members placed in the system itself. In other words, the research supports a distinction between trusting that the system works (competence trust, Pillar Three) and trusting that the people in the system communicate honestly and carefully (communication trust, Pillar Two). They are different psychological processes producing different outcomes.
The confidentiality side of communication trust has its own research tradition, mostly in the organizational psychology literature on psychological safety. Amy Edmondson at Harvard Business School has published extensively on the concept: a team has psychological safety when its members believe they can speak up, share mistakes, and be vulnerable without being punished. The research consistently shows that psychological safety is a prerequisite for the kind of honest communication that high-performing teams require. Virtual communities work the same way. A member who shares something vulnerable and sees it protected will share more. A member who sees their confidence betrayed — shared in a screenshot, laughed about in another channel, used as leverage in a disagreement — will not only stop sharing, they will leave, and they will warn others.
The design implication: build systems that protect privacy at least as carefully as you build systems that facilitate communication. Moderation logs that are visible to the person being moderated. DM systems that cannot be forwarded without the original sender’s knowledge. Confidential channels that are actually confidential. Every feature you build for communication is also, implicitly, a promise about confidentiality. Make sure you can keep the promise.
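To make "a promise about confidentiality" concrete, here is a minimal sketch of the forwarding rule described above. The names (`Message`, `can_forward`) and the structure are my own invention for illustration, not any real platform's API; the point is only the design choice of confidential-by-default.

```python
# Hypothetical sketch: a DM that cannot be forwarded without the
# original sender's knowledge and consent. Illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Message:
    sender: str
    body: str
    forwardable: bool = False  # confidential by default

def can_forward(msg: Message, consent_granted: bool) -> bool:
    """Allow forwarding only if the sender marked the message
    forwardable up front, or explicitly granted consent later."""
    return msg.forwardable or consent_granted

dm = Message(sender="ada", body="something vulnerable")
print(can_forward(dm, consent_granted=False))  # False: the default keeps the promise
print(can_forward(dm, consent_granted=True))   # True: the sender opted in
```

The design choice worth noticing is the default: the system refuses unless the sender has said yes, rather than permitting unless the sender has said no. That is the difference between a feature that keeps the confidentiality promise and one that merely gestures at it.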
Pillar Three: Competence Trust — or, Ability by Another Name
Do the people running this place know what they’re doing?
This is Mayer, Davis, and Schoorman’s ability dimension, and it is the most domain-specific of the three. Unlike integrity and benevolence, which are relatively stable across contexts (a person with integrity has integrity whether they’re managing a team or organizing a neighborhood barbecue), ability is context-dependent. A brilliant programmer who is a terrible moderator destroys competence trust in a community even though their technical ability is beyond question. The relevant ability is the ability to do this specific thing well.
In virtual communities, competence trust has two faces. There is system competence — does the platform work? Does it crash? Are bugs fixed promptly? Does the search function actually find things? A community built on a platform that is visibly broken leaks competence trust with every crash and every glitch. And there is leadership competence — are the moderators trained? Do they handle conflicts fairly? Do they make decisions that the community can understand even when the community disagrees? When a crisis hits, do the leaders know what to do?
The research on virtual community trust specifically has confirmed that system trust and interpersonal trust are distinct constructs that develop through different mechanisms. Trust in the system is built through responsiveness and active control — the perception that the platform reacts to user needs and gives users meaningful control over their own experience. Trust in the people is built through connectedness and reciprocity, as we discussed in Pillar Two. Both kinds of trust are necessary for a community to feel safe, and a failure in either one undermines the whole structure.
The design implication: invest in moderator training and platform reliability with the same seriousness you invest in features and content. Nobody notices a working server, the same way nobody notices the absence of a toothache. But everybody notices the crash, and the moderator who handled a conflict badly, and the bug that ate their saved work. Competence trust is asymmetric — it takes a hundred reliable interactions to build and one spectacular failure to destroy.
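The asymmetry can be made concrete with a toy model, which is entirely my own illustration rather than anything from the trust literature: suppose each reliable interaction deposits a small fixed amount of trust, while a visible failure wipes out a large share of whatever has accumulated.

```python
# Toy model of asymmetric competence trust (illustrative only, not
# from the research literature). Reliable interactions add a small
# deposit; a failure subtracts a large penalty. Balance is clamped
# to [0, 1].

def update_trust(balance: float, reliable: bool,
                 deposit: float = 0.01, failure_penalty: float = 0.5) -> float:
    """Return the new trust balance after one interaction."""
    if reliable:
        balance += deposit
    else:
        balance -= failure_penalty
    return max(0.0, min(1.0, balance))

# A hundred reliable interactions slowly build trust to the ceiling...
balance = 0.0
for _ in range(100):
    balance = update_trust(balance, reliable=True)
print(f"after 100 good interactions: {balance:.2f}")  # 1.00

# ...and a single spectacular failure erases half of it at a stroke.
balance = update_trust(balance, reliable=False)
print(f"after 1 failure: {balance:.2f}")  # 0.50
```

The specific numbers are arbitrary; what matters is the shape. Building is linear and slow, destruction is multiplicative and fast, which is exactly why reliability work that "nobody notices" is worth budgeting for.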
Pillar Four: Care Trust — or, Benevolence and the Thing That Holds It All Together
Do the people in this community care about me?
This is Mayer, Davis, and Schoorman’s benevolence dimension, and it is the one that, in their model, had the single largest effect on trust formation in close relationships. Benevolence is the perception that the trusted party wants to do good to the trusting party, apart from any self-interested motive. It is not transactional. It is not “I’ll help you because I need something from you.” It is “I’ll help you because I care about what happens to you.”
In virtual communities, care trust is the difference between a space people visit and a space people defend. A member who believes the community genuinely cares about them will tolerate a lot — server outages, moderator mistakes, rule changes, even the occasional interpersonal conflict — because the underlying belief is that the people involved are trying, in good faith, to look out for each other. A member who does not believe the community cares will leave at the first sign of friction, because there is nothing worth staying for.
The research supports this powerfully. The 2007 follow-up paper by Schoorman, Mayer, and Davis — revisiting their 1995 model in light of a decade of additional research — explicitly addressed the role of affect and emotion in trust formation, noting that trust is not a purely cognitive calculation. It has an emotional component, and the emotional component is most strongly influenced by perceived benevolence. People don’t just think their way into trusting a community. They feel their way in, and what they’re feeling for is care.
You cannot fake care trust. You cannot build it with features. You cannot automate it with bots. It grows from the accumulated weight of small acts — a moderator who checks in privately after a heated argument, a community member who remembers that someone was having a hard week and asks how they’re doing, a leader who admits a mistake publicly and takes responsibility for fixing it. Each of these acts deposits a tiny amount into the care-trust account, and the balance of that account is what determines whether people stay when things get hard.
The design implication: model care from the top. Community culture flows downhill. If the leadership is transactional, the community will be transactional. If the moderators are kind, the community will learn that kindness is expected. If the first thing a newcomer experiences is a real human being saying “welcome, glad you’re here,” the newcomer’s care-trust meter starts at a higher baseline than if the first thing they experience is an automated welcome message that nobody reads. Design for the human touch, even and especially when the system surrounding it is digital.
The thing that ties all four together: reciprocity
There is one finding in the trust literature that I want to end on, because it is the insight that turns the Four Pillars from a static framework into a living dynamic.
Trust is reciprocal. This is one of the most consistent findings in the field, confirmed across dozens of studies in both physical and virtual contexts. Someone has to go first. Someone has to extend a small act of trust — share a piece of themselves, do a favor without being asked, follow a rule even when nobody is watching — before the other party can reciprocate. And when the reciprocation happens, the original trust deepens, and the cycle begins again. Each round of the cycle builds a slightly stronger bond than the round before.
This has been specifically documented in virtual community research. A study of Spanish-speaking free software communities found that disposition to trust, familiarity, and a norm of reciprocity were the three most significant antecedents to trust in a virtual community. Trust didn’t arrive all at once. It grew in layers, each layer dependent on the one before it, each layer making the next one possible. Robert Putnam’s term “thin trust” is useful here for the initial, fragile, newcomer-level trust that forms when a person first enters a community — thin because it’s based on category membership and general expectations rather than direct experience. Over time, if the community’s four pillars are doing their work, thin trust thickens into something durable. But it starts thin. It always starts thin.
This is the most important design insight in the entire post: your community must be designed to tolerate thin trust. The newcomer who walks in the door has almost no trust yet, and they should not be expected to. They are going to lurk. They are going to watch. They are going to test the waters with the smallest possible investment, and they are going to evaluate the community’s response before they invest more. A community that demands thick trust from day one — that expects newcomers to introduce themselves, share personal information, commit to regular attendance, or demonstrate loyalty before they’ve had a chance to evaluate the space — is a community that will lose most of its newcomers at the threshold. The door needs to be wide enough for thin trust to walk through, and the interior needs to be warm enough for thin trust to thicken on its own schedule.
That’s the whole game. Build the four pillars. Make the door wide. Let the trust thicken.

What I want you to take with you
If you are building a virtual community — or any community — you are in the trust business. Every design decision you make, every moderation policy you write, every welcome message you craft, every crisis you handle is either building trust or spending it. The four pillars give you a vocabulary for knowing which kind of trust you’re working on at any given moment, and the research gives you confidence that the vocabulary is grounded in something real.
Contractual trust: are you keeping your promises? Communication trust: are you sharing what needs sharing and protecting what needs protecting? Competence trust: is the system reliable and is the leadership skilled? Care trust: does this community genuinely care about its members?
If all four are present, you have a community worth belonging to. If any one is cracked, the whole structure is weaker than it looks, and a bad enough day will bring it down. Build all four. Build them on purpose. And when you’re not sure which one needs attention, start with care — because care is the one that holds the other three in place.
Sources and further reading
On the foundational model of trust (ability, benevolence, integrity): Mayer, R. C., Davis, J. H., and Schoorman, F. D. (1995), “An Integrative Model of Organizational Trust,” Academy of Management Review, 20(3), 709-734. This is one of the most cited papers in organizational science. The 2007 follow-up: Schoorman, F. D., Mayer, R. C., and Davis, J. H. (2007), “An Integrative Model of Organizational Trust: Past, Present, and Future,” Academy of Management Review, 32(2), 344-354.
On the empirical validation of the model: Davis, J. H., Schoorman, F. D., Mayer, R. C., and Tan, H. H. (2000), “The trusted general manager and business unit performance: empirical evidence of a competitive advantage,” Strategic Management Journal, 21, 563-576.
On psychological safety and communication trust: Edmondson, A. C. (1999), “Psychological Safety and Learning Behavior in Work Teams,” Administrative Science Quarterly, 44(2), 350-383. Also: Edmondson, A. C. (2019), The Fearless Organization, Wiley — the popular synthesis of the research.
On trust in virtual communities specifically: The study finding connectedness and reciprocity as antecedents to member trust: published in Cognitive Computation (2013), examining perceived interactivity and trust in virtual community members. On trust antecedents in free software communities: Casaló, L. V., Flavián, C., and Guinalíu, M. (2008), “Fundaments of trust management in the development of virtual communities,” Management Research News, 31(5), 324-338. On “thin trust” and “swift trust” in online contexts: Putnam, R. D. (2000), Bowling Alone, Simon & Schuster; Jarvenpaa, S. L., and Leidner, D. E. (1999), “Communication and Trust in Global Virtual Teams,” Organization Science, 10(6), 791-815.
On trust reciprocity: The reciprocal nature of trust is a consistent finding across the trust literature. For a readable summary: Putnam, R. D. (2000), Bowling Alone. For the empirical evidence in organizational contexts: the 2007 Schoorman/Mayer/Davis follow-up paper explicitly addresses reciprocity as a mechanism.
Note to readers: verify the primary sources yourself before quoting. The trust literature is large and actively evolving, and the most important thing about a scholarly blog post is that it points you toward the research rather than replacing it.