Lately, I have been trying to set more boundaries on my social networking activities, but I have found it very difficult. I mostly limit my public posts to subjects about food and technology. If I post something that is going to be of a political nature or may push someone's buttons, it's set to friends-only. I err on the side of caution now.
In real life, we can detect the tone of someone's voice and read their facial expressions. These non-verbal cues help us decide whether to engage with someone or avoid them entirely. We also trust others to tell us which people are jerks, so we can choose not to associate with them. Personal experience and our other senses inform us, too.
Online, there are certainly ways of limiting our interactions with people, such as creating "groups" of friends and then sharing out those posts to only groups that should see them.
But sorting people into groups is something that requires manual effort and is prone to error. The short-lived Google Plus had "Circles," which never really caught on, but I liked it much more than the way Facebook implements friend groupings.
We really know nothing about other people on Facebook
Although Facebook itself gathers a tremendous amount of information about each user -- via the Facebook Graph API, which the company and its developer partners use to present advertisements and generate revenue -- the data you, as a Facebook user, have about another person is very limited.
You can glance at someone's profile and possibly glean their interests from the things they post, where they live, where they work, and what some of their likes are -- but even then, a Facebook profile is only as good as what its owner chooses to share upfront.
I am beginning to think we need to start treating social media services like Facebook, Instagram, and Twitter -- and yes, even LinkedIn -- as large-scale online dating services. We need to make the data work for us, rather than for the social media services themselves.
Consider how dating services traditionally work: if you are a cat person and your prospective match is a dog person, or if you are a ravenous meat eater and your match is a vegan, the software is supposed to determine which major personality traits matter most to you and line you up with that special someone accordingly. Those people shouldn't match if those traits are high-priority compatibility factors.
The same needs to be done for Facebook and other services in an age where everyone seems to be at each other's throats online.
Data-driven boundaries are the future of social networking
Our socio-political alignment seems to be the area in which, as electronic social networking denizens, we come to loggerheads the most. We need the "dating service" to provide that information to us as social network users, so we can gain a better understanding of the people around us.
Essentially, as users, we need to be able to separate the cat people from the dog people, at a glance.
How would this work? While I don't think it is exhaustive, I like the way The Political Compass test works, at least in terms of overall visualization.
Similar tests, such as iSideWith, have been built to determine which presidential candidate in the past election best represents your values, expressed as an overall percentage of political alignment, so that as a voter, you could assess accordingly.
How might this be implemented on a large-scale social network like Facebook? The way I envision it, there would be a basic questionnaire every user would need to fill out that would be similar to Political Compass or iSideWith -- and over time, more questions would be asked to refine the algorithm.
Instead of just seeing a user's name and avatar, you would see someone's compatibility with you expressed as either a numerical value or a color -- which could be on a spectrum from Infrared (red) to Ultraviolet (purple).
People who fall closer to your position on the spectrum would be more compatible. Other visualizations of someone's personality makeup or even life experience would also be possible, such as a Myers-Briggs Type Indicator (MBTI) result. These could be additional, optional questionnaires to enhance a user's overall experience.
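To make this concrete, here is a minimal sketch of how such a score and color band might be computed. It assumes the questionnaire reduces each user to coordinates on a Political Compass-style grid from -10 to +10 on two axes (economic and social); the axes, the band names, and the scoring formula are all my own illustrative assumptions, not anything Facebook actually exposes.

```python
import math

# Assumed color bands spanning the "infrared to ultraviolet" spectrum.
BANDS = ["infrared", "red", "orange", "yellow",
         "green", "blue", "violet", "ultraviolet"]

def compatibility(a, b):
    """Return 0.0 (opposite corners of the grid) to 1.0 (identical answers)."""
    max_dist = math.dist((-10, -10), (10, 10))  # worst-case separation
    return 1.0 - math.dist(a, b) / max_dist

def band(score):
    """Map a 0..1 compatibility score onto one of the color bands."""
    index = min(int(score * len(BANDS)), len(BANDS) - 1)
    return BANDS[index]

# Invented example coordinates for two users.
me = (-4.2, -3.1)
them = (6.5, 4.0)
score = compatibility(me, them)
print(round(score, 2), band(score))
```

Instead of just a name and avatar, the UI could then render that band color next to every participant.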
Given all this data, a dashboard for each discussion thread would allow users to see average compatibility of all participants, or whether a single participant was compatible with the group overall. Many ways to interpret the data would be possible, too, to give a participant a more informed view of their digital surroundings.
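A thread dashboard like the one described above might boil down to simple aggregation. This is a hypothetical sketch: the names, scores (0..1 compatibility with you, from whichever questionnaire the network uses), and the summary fields are all invented for illustration.

```python
from statistics import mean

def thread_dashboard(scores_with_me):
    """Summarize a discussion thread from one viewer's perspective."""
    return {
        "participants": len(scores_with_me),
        "average_compatibility": round(mean(scores_with_me.values()), 2),
        "most_compatible": max(scores_with_me, key=scores_with_me.get),
        "least_compatible": min(scores_with_me, key=scores_with_me.get),
    }

# Invented compatibility scores between you and each participant.
thread = {"alice": 0.91, "bob": 0.34, "carol": 0.72}
print(thread_dashboard(thread))
```

The same data could just as easily be sliced the other way -- how compatible one participant is with the group overall -- which is what gives a reader situational awareness before wading in.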
That means, when armed with this information, one can choose whether or not to engage. Why expend the effort, if it is understood from the get-go that someone on the other side of the screen isn't open to your ideas?
The most serious implication of all this would be the visibility of the personality data itself. If people are leery about populating Facebook profiles due to privacy issues now, I can't imagine them being happy about filling out these questionnaires.
Here is where things get far more interesting. Optionally, you could configure your Facebook settings in such a way that a barrier or boundary is created so that incompatible people never pass through it.
Essentially, the intelligent barrier would pre-populate a gigantic block list for you. But it would not be an impenetrable wall as such -- more like a permeable membrane.
A social membrane is another dimension, if you will.
To you, these incompatible people would not even exist. They would be invisible to you -- and you would be invisible to them if they have their settings set accordingly.
These social membranes could be tweaked based on your tolerance levels and changes in your overall mood over time. During peak election season, perhaps it makes sense to set a membrane with, say, a 20-percent tolerance. You might only want to socialize with people who fall into a certain shade of blue. But maybe not too blue.
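In code, a membrane could be little more than a tunable threshold applied to your feed. This sketch assumes the tolerance is a minimum compatibility score (0..1) that you tighten or relax over time; the feed structure, names, and scores are invented for illustration.

```python
def apply_membrane(feed, scores_with_me, tolerance):
    """Return only posts from people inside your membrane."""
    return [post for post in feed
            if scores_with_me.get(post["author"], 0.0) >= tolerance]

# Invented compatibility scores and feed items.
scores = {"alice": 0.91, "bob": 0.34, "carol": 0.72}
feed = [
    {"author": "alice", "text": "Best ramen in town?"},
    {"author": "bob", "text": "Hot political take..."},
    {"author": "carol", "text": "New laptop review"},
]

# Relaxed membrane: only bob falls outside it.
print([p["author"] for p in apply_membrane(feed, scores, 0.5)])
# Tightened, election-season membrane: only alice gets through.
print([p["author"] for p in apply_membrane(feed, scores, 0.8)])
```

Because it is just a filter rather than a block list, the membrane stays permeable: lower the tolerance and the hidden people reappear.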
In addition to removing incompatible people from your feed, I envision the membranes preventing friend attempts, or at least notifying the person attempting the friend request that you aren't compatible. Of course, you'd be informed as well and could act accordingly.
Naturally, I also see membranes being used to filter certain types of group discussion.
That red person and I may have difficulty discussing politics. But who knows -- we might both enjoy a medium-rare hamburger and agree that people who order them well-done are barbarians. As burger soul mates, we should be compatible, regardless of how we feel about who is in office.
So, perhaps, the food group should have a membrane override, in which tolerances are widened but membrane stats are presented to participants for situational awareness.
In addition to filters applied on an individual basis, a group might require a certain membrane quotient to join. And perhaps customized membranes could be created for how we feel about other subjects as well.
Do you like cream cheese in your sushi roll? Mayo on your pastrami sandwich? Ketchup on your hot dog? I don't want to know you, and I don't want you in my group. Sorry.
Obviously, this sort of technology has many potential downsides. There is a real danger of a mass echo chamber effect when large numbers of people participate in a threaded discussion that is only partially visible, because some participants reside behind invisible membranes.
There will be people who want to live their digital lives without this filter. There will also be people who need to, because their jobs require social networking, or because, as group admins, they need to see everything going on in their groups.
Because of the tremendous power to alter one's experience online, membranes would have to be optional, and also, the visibility of someone's full membrane decision matrix would have to be at the owner's discretion -- perhaps only friends, once accepted, should be able to view them.
Certainly, membrane data should not be visible to employers or to entire classes of businesses and public entities, such as banks and insurance companies -- and probably not to co-workers, either. Although, let's face it, we all know who at work is toxic, and we probably aren't making friends with those people anyway.
There is also tremendous potential for abuse here. This technology isn't even in place yet, and we have already seen what interested political parties and threat actors will do with memes and with exposed data like the Facebook Graph. We can only imagine what they would do with access to the raw data in membranes.
On the surface, creating these safe spaces for ourselves in the digital world, in which we are surrounded only by people with like-minded personalities and worldviews, sounds like a bad thing. But it may be the only alternative to a triggered life of anxiety, anger, and depression for a lot of mentally exhausted people who cannot disconnect from Facebook altogether.
Still, I truly believe we need technology like this if we are ever to make social networking a better, more conflict-free place to spend our digital lives. We set boundaries for ourselves in real life and make decisions about whom to include -- this should naturally extend to our digital existence as well.
Should we create "membranes" to govern our visibility of those who create conflict in our digital existence? Talk Back and Let Me Know.