Facebook’s recent changes (and their attendant controversies), coupled with a conversation I had with Dave today, reminded me of an excerpt from an IM conversation between Mark Zuckerberg and GQ contributor Alex French, detailed in this 2008 (!) article:
(12:25 p.m.) Mark: There’s this definite evolution happening. Where the first part of the social web was mapping out the social graph. And the second phase is now mapping out the stream of everything that everyone does. All of human consciousness and communication.
(12:29 p.m.) Alex: Imagine if you could broadcast people’s emotions into a feed?
(12:30 p.m.) Mark: I think we’ll get there.
(12:30 p.m.) Alex: So how are you going to map all of human consciousness and communication?
(12:30 p.m.) Mark: We don’t map it directly. We give people tools so they can share as much as they want, but increasingly people share more and more things, and there’s this trend toward sharing a greater number of smaller things like status updates, wall posts, mobile photos, etc. A status update can approach being a projection of an emotion.
The above exchange betrays the motivation behind Facebook’s Open Graph (or whatever we’re calling it) and the new Timeline, and it is pretty clear, in light of these comments, why the recent Facebook changes work the way they work and look the way they look. Of course, it’d be difficult to argue that tracking and mapping things like Spotify and Netflix usage represent “all of human consciousness and communication,” but they do track and map certain human activities (as do FB’s check-ins, “likes,” and link and media sharing functions). But what is important to keep in mind is what Zuckerberg says in that last line: “A status update can approach being a projection of an emotion.” Based on this line, it doesn’t take much to imagine the sorts of things Facebook thinks it can glean from its users’ activity, such as emotion from a status update (and I don’t even think this concept is all that controversial - I mean, some status updates do, indeed, project emotion).
What matters here is the totality of what Facebook seemingly thinks it can construct out of its data: that by “mapping out the stream of everything that everyone does” they can approach a map of all human consciousness and communication. Now, as is well understood, the aggregation and use of all of this information raises obvious issues of power, privacy, access and control, and context - topics that other writers and commentators will address much more eloquently than I could at this juncture. Some of these commenters will cry (or, more likely, already have cried) foul at Facebook’s entire enterprise when couched this way, and I don’t think they are necessarily wrong to do so. I, however, am not one of those people (at least not today).
I am more interested - especially in light of the recent changes - in the problems inherent in the construction of Facebook’s map of “all human consciousness and communication” and in the problems with maps generally, that is: maps lie.
Consider the opening paragraph to Mark Monmonier’s How to Lie with Maps:
“Not only is it easy to lie with maps, it’s essential. To portray meaningful relationships for a complex, three-dimensional world on a flat sheet of paper or a video screen, a map must distort reality. As a scale model, the map must use symbols that almost always are proportionally much bigger or thicker than the features they represent. To avoid hiding critical information in a fog of detail, the map must offer a selective, incomplete view of reality. There’s no escape from the cartographic paradox: to present a useful and truthful picture, an accurate map must tell white lies.”
The same goes for Facebook: to portray meaningful relationships for a complex, three-dimensional world on an online social networking site, Facebook must distort reality. And, at least according to the most recent changes, part of that distortion includes categorizing the automatic reporting of certain “light-weight” activities (“Tony Hoffmann is listening to Robyn on Spotify.”) as “sharing.” Some have seen this move as demonstrating an unusual or unorthodox conception of sharing on the part of Zuck and company (or even that Facebook is actively interested in reconceptualizing sharing altogether). As Farhad Manjoo writes over at Slate:
Sharing, in Zuckerberg’s view, has morphed from an affirmative act—that video was hilarious, I think I’ll Like it!—to something more like an unconscious state of being. I watched that video, and therefore it will be shared.
At an item-by-item level, this certainly seems to be the case. Manjoo further notes:
For as much as he’s invested in sharing, though, Zuckerberg seems clueless about the motivation behind the act. Why do you share a story, video, or photo? Because you want your friends to see it. And why do you want your friends to see it? Because you think they’ll get a kick out of it. I know this sounds obvious, but it’s somehow eluded Zuckerberg that sharing is fundamentally about choosing.
Again, at the level of individual updates, this seems to hold true. But, if we zoom out a bit, things start to look a little different. If, as Manjoo asserts, sharing is fundamentally about choosing, then it is hard to say that users don’t still have a choice - they can choose to enable or disable these new notifications and, subsequently, not “share” such light-weight updates. In this sense, Facebook hasn’t re-conceptualized the act of sharing - they’ve relocated it. They’ve essentially just moved the locus of sharing by bumping it up one level of abstraction. Instead of sharing song by song, or film by film, users are simply implored to share “music” or “film” generally (via particular services, of course), and once a user has agreed to do so, these light-weight updates take care of the rest. More than demonstrating any unusual conception of sharing, this move, I think, demonstrates the ways in which Facebook views and categorizes and manages those things that users share.
Or, to put it another way, it demonstrates the lines that Facebook draws on its map of the world. Instead of oceans, lakes, and rivers, it’s music, movies, and books; instead of continents and countries and cities, it’s brands and products and services. And just as the representation of the world is at the discretion of the cartographer, the representation of human behavior on Facebook is at the discretion of Zuckerberg and his employees. And this isn’t automatically a bad thing; it just is: a map must tell lies.
In saying this, I don’t mean to be a Facebook apologist, giving the company license to carve the world up however it pleases (though it undoubtedly does). Rather, by invoking Monmonier’s work on maps, I only mean to reorient our point of normative inquiry: instead of asking whether Facebook’s recent changes are good or bad or smart or dumb or whatever, we ought to ask what the world looks like according to Facebook’s map of it. And, further, in asking that question, we must be actively aware of Monmonier’s caveat that…
…a single map is but one of an indefinitely large number of maps that might be produced for the same situation or from the same data.
Any given map serves a given purpose, and
…map authors can freely experiment with features, measurements, area of coverage, and symbols and can pick the map that best presents their case or supports their unconscious bias.
With this caveat in mind, we ought to look at the world according to Facebook’s map and ask ourselves: does this look like a world we could live in?
[Cross-posted here at anthonyhoffmann.org]