24 and Zero Dark Thirty helped sell Americans on torture

A new Pew poll finds that Americans approve of CIA torture by a margin of 51% to 29%, so I’m bringing this out again:

Not just politicians, but also the media are responsible for selling Americans on torture. During the Bush years, there was the series 24, whose very premise – Jack Bauer only has 24 hours to stop the terrorists and save America – made the case that torture can be justified in an emergency. Last year, the film Zero Dark Thirty revived the argument by erroneously depicting torture as instrumental to finding (and killing) Osama Bin Laden. The order of events shown implies a connection between the torture of a detainee and what most Americans think is the most significant foreign policy achievement of the last decade. In other words, the plot exploits the jingoism of US audiences to convince them that while the CIA did some ugly things, it was all worth it in the end.

This kind of ideological manipulation is especially worrying in light of the filmmakers’ heavy collaboration with the CIA. The hacks behind Zero Dark Thirty got exclusive access to information about Bin Laden’s murder that was denied to the public. In return, the CIA got Oscar-nominated, chest-thumping propaganda. While some have claimed that the interrogation scenes are actually critical of torture, the camerawork and editing are careful to show us everything from the point of view of the CIA officers – not the detainee. For example, when he is stuffed into a box too small for his body, terrified and in pain, we don’t go in there with him. Our perspective stays outside. We’re invited to identify not with the tortured, but with the torturers.

As Glenn Greenwald pointed out on MSNBC, “Americans know that torture is brutal – that’s why they think it works. They have supported torture because they believe that the people that we’re doing it to are primitive, violent, horrible savages who need to be treated brutally, because that’s the only way we can get information, and that’s the way we stay safe.”

A poem for Chelsea Manning on her birthday

US Army whistleblower Chelsea Manning turns 27 in prison today, serving 35 years for leaking proof of torture and other US war crimes in Afghanistan and Iraq. On the eve of her show trial in summer 2013, I wrote this poem for Chelsea (then known as Bradley).

As I explained in that post, I’d been reading a lot of Adorno: The title is a reference to his infamous dictum that “to write a poem after Auschwitz would be barbaric”, while the epigraph is a quote from a poignant passage in Dialectic of Enlightenment about the knowing resignation with which Americans accept their powerlessness in the capitalist economy.

We’ve let Chelsea down, and we’ll let her down every day of our lives until we honor her actions by our own courage to seek justice.

“No poems after Auschwitz”

KS 6/1/2013

It’s a free country

But freedom has rules:

You can say what you want about the Market

but if you don’t play ball

you’re not one of us.

We don’t leak the wrong footage

of the wrong Apache helicopters

swarming over Baghdad

picking off civilians like flies.

And on a rainy Tuesday in November

every four years

we pick Dear Leader

like free people do.

You’ll never change the world.

But to those who will try –

The risk you bear is

unspeakable

it’s terrifying

 

I am a failure, sagt der Amerikaner. – And that is that.

“Blade Runner” deconstructed whiteness and masculinity before it was cool

It’s been a while since I’ve done any philosophy here, so I thought I’d pick up where I left off in January, on a sort of autobiographical note: In a three-part series, I drew on my own experiences and reflections, in dialogue with the critical race theory of Charles W. Mills, to give an account of what it’s like to be “Not Quite White” as an unapologetically Eurocentric Iranian American.

A lot of the memories and reflections had to do with the phenomenon of “passing”. Passing is usually understood as creating or maintaining the illusion that one belongs to a privileged group. But in my analysis, it becomes clear that people don’t just pass for what they are not – they also pass for what (they think) they really “are”. 

I basically conclude that while I and other Americans of Middle Eastern descent often “pass” as white, there’s also always a deeper and more unconscious form of passing at work, in which I actually pass to myself.

That is, I – for understandable and hardly unique reasons – convinced myself that my ethnic identity is stable and non-contradictory, a reliable and clearly defined reflection of myself that I can fall back on in moments of isolation and self-doubt. Over the years, I passed to myself as both “white enough” and as Iranian/Middle Eastern (which is to say, not white).

I think this points to something crucial about the very essence of identity – that all identity is constructed and unstable, even the dominant identities (white, male, heterosexual, etc.) we critique as being the “default”, “neutral”, or “universal”.

The only “default” identity is a painful lack, an inability to see and know for sure what it is you “really are”. The only universality is the impenetrable darkness at the core of our being.

I can’t think of a popular movie that illustrates this more vividly or powerfully than Blade Runner (specifically, the edits known as the “director’s cut” and “final cut”).

If you’re not familiar with it (warning: the rest of this post is one big spoiler), Ridley Scott’s loose adaptation of Philip K. Dick’s novel Do Androids Dream of Electric Sheep? follows bounty hunter Rick Deckard as he hunts down four renegade androids (“replicants”) in the Los Angeles of 2019, which – like the rest of Earth – is a dark, polluted, overcrowded wasteland.

The people who live there are those without the power and privilege to escape to off-world colonies, where the elite use replicants as slave labor. Replicants are identical to humans in every way, except for emotions – but as it turns out, they can develop those too. As a failsafe, the corporation that makes replicants designed them to have a four-year lifespan.

That precaution proves insufficient, though: These four replicants revolt, kill their masters, and travel to Earth to find their creator and make him change what he created. They don’t want to die. The immediate problem for them is that replicants are illegal on Earth, hence Deckard’s assignment. By the time their leader, Roy, learns that their genetic coding can’t be changed, they’ve been picked off one by one.

Throughout the film, the mutineers have to “pass” as human to avoid detection. There are plenty of examples that I won’t go into. Much more interesting, though, is the replicant Deckard meets when he visits the corporation: Rachael, assistant to the corporation’s head and chief designer, doesn’t know she’s a replicant. She’s so advanced that she almost fools the apparently foolproof Voight-Kampff test, which uses emotionally provocative questions to see if you’re human based on pupil dilations or something.

Unlike the other replicants in the film, Rachael has to come into consciousness, not of her essential humanity, but of her (supposed) lack of it. She has lived her life as though she were just like everyone else, when in reality her very presence on Earth is criminal.

What Rachael has to come to grips with is the same terrifying experience of identity-as-lack-of-identity that drove Roy and the other replicants to revolt in the first place: that they are not really what they “are”. Despite knowing that all they have in this world are implanted memories and a preset expiration date, they nonetheless have a subjectivity, a full personhood, that yearns for a freedom that will never come.

“Quite an experience to live in fear, isn’t it?” Roy asks Deckard during their final showdown. “That’s what it is to be a slave.” Rachael’s earth-shattering realization that she, too, is a “slave” is only possible through unconscious passing: Every day of her life, she’s passed for human to herself.

What’s the message here? The “director’s cut” and “final cut” both contain versions of a sequence, commonly referred to as “the unicorn dream”, that makes it clear. After Deckard and Rachael hook up – and after she asks him if he’s ever taken the Voight-Kampff test himself – the “final cut” intercuts a scene of Deckard, awake and staring ahead in his apartment, with a sun-bathed vision of a unicorn galloping in slow-motion.

These cuts, like the studio release, also show a mysterious minor character called Gaff (seemingly a sort of middle man between Deckard and his boss) making little origami figures at different points in the film. When Deckard returns to his apartment after his showdown with Roy, he finds Rachael there; they agree to run away together, but as he leaves his apartment for the last time, he notices a little origami figure on the ground. It’s a unicorn.

The inclusion of the “unicorn dream” unleashes this final scene’s explosive implication: that Deckard himself is a replicant, with implanted memories and dreams well known to Gaff, the man responsible for keeping an eye on him.

Deckard’s plotline actually charts a steady undermining of the basis of his self-identification as human: While searching the apartment of Leon, one of the replicants, Deckard finds some photos, which Leon has kept despite knowing they’re of someone else’s life.

Later, the film cuts from the unicorn sequence to a shot of Deckard’s own photographs in his apartment. In light of the final revelation, this cut acquires a new meaning that was already latent: Deckard’s pictures, too, are really someone else’s, no less a fantasy than the unicorn implanted in his mind. He, like Rachael, learns that he’s been passing for human, not to his boss or his handler, but to himself.

It’s crucial to note that by the end of the film, there are no major characters who have not been “outed” as replicants, whether to themselves, to other characters, or to the viewer. Why does Leon keep his photos even though he knows he’s a replicant? Because the knowledge that he’s not “really” human doesn’t negate his essentially human desire for identity and connection.

That’s the radical lesson of Blade Runner: We are never fully who we imagine ourselves to be, and it’s precisely that gap that makes us who we are. That gap makes it possible for us to pass even for what we think we already are.

It’s a nice philosophical point, but as always we have to ask – why does it matter?

If we read the film as a racial or gender allegory, Deckard and Rachael aren’t white people/men who learn they aren’t really white/male. They’re white people/men who come into consciousness of whiteness and masculinity as constructs, ideals that restrict them and that they inevitably fail to embody.

Their identities aren’t simply given as a default; they’re socially determined. Their memories aren’t really their own; they’re reflections of cultural memory. What this means for political struggle is that no identity, not even the dominant identity, is free from the oppression of the social construction of difference.

Our identities are most constructed and fragile in those cases where we think ourselves certain of belonging to a privileged class.

In other words, the struggle against patriarchy can’t claim as its goal a society in which everyone is treated like a man, and the struggle against white supremacy can’t envision one in which we are all treated like white people. It’s precisely masculinity and whiteness that are the biggest fakes of all – so our project has to include convincing those who identify with those constructs that their behavior is actually restricted, their sense of self mutilated, by the systems of domination that seem to benefit them.

Liberation depends, in part, on the realization of the oppressors that they too are not free.

Is technology capitalist? (Part II of II)

Last week, I asked if advances in technology – when they render human labor obsolete – will always be bad for workers in modern economies. I shared with you some of what Marx has to say concerning technology: Basically, for Marx, while capital profits by replacing workers with robots, there’s nothing about technology as such that requires it to benefit capital.

The real agent of production in advanced capitalism isn’t the worker, but the automated machine – to be exact, it’s the “general intellect” (the collective mind of society) embodied in the machine. The general intellect applies its knowledge/skill to the development of more efficient (and therefore more profitable) technologies.

As more and more of society’s productive forces go towards producing the means of production (as opposed to the “immediate” production of products themselves), the very notion of labor shifts from one centered on material production to one in which the production of ideas plays an ever-greater role. The development of the general intellect introduces a new notion of collective – not individual – production, production by society as a collective agent.

By itself, the tendency of the general intellect is to reduce necessary labor to a minimum, which means that it takes less work to produce a certain amount of something. This has the potential to afford workers unprecedented free time, time in which they could pursue their interests, live rich and stable lives, and contribute to heretofore unrealized ways of organizing social life and exercising the collective will of the people. 

But capital has another tendency: namely, to take that potentially “free” time and, instead of being satisfied with the same products in less time, demand more products in the same (or more) time. “The most advanced machinery,” Marx observes, “forces the worker to work longer today than the savage does or he himself did with the simplest, crudest tools.” Of course, it’s not really the machinery that’s doing the forcing, but rather, capitalist property relations and capital’s insatiable need to circulate.

In capitalism, Marx writes, there’s a chain of measurement linking wealth to the market value of a product, and that market value (“exchange-value”) to the time it takes a human worker to make it.
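
To make that chain explicit (a schematic formalization of my own, not notation from Marx): suppose a product’s exchange-value $v$ is proportional to the labor-time $t$ a worker needs to produce it,

$$v = k \cdot t .$$

Wealth, as an aggregate of exchange-values, is then ultimately an aggregate of labor-times; the breakdown described below is what happens when automation drives $t$ per unit toward zero even as output keeps growing.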

But in the course of the historical movement Marx describes, as more and more of the production process requires little or no input from workers, wealth loses its basis in labor-time. The rich get richer because they own the means of production, while workers’ minimal contribution to the production process serves as an excuse for permanent job security and downward pressure on wages.

Marx identifies this as a contradiction of capitalism, and suggests that it will ultimately lead to capitalism’s breakdown. In socialism, the reduction of necessary labor afforded by technological advancement would, unhampered by capital’s demand to convert free time into surplus labor, actually let us keep our “free” time free, to do with as we please.

But technology does nothing on its own – technology will cease to be a weapon of the bourgeoisie only if we make it so through active class struggle in the sphere of social relations (in other words, the “who-owns-what” of the economy). 

So was Marx wrong? Some contemporary theorists, among them Jodi Dean and Slavoj Žižek, have argued that in the neoliberal era, the means of production most suitable from capital’s perspective is information and communication technology (ICT, i.e. computers, telecommunications).

Marx saw in the general intellect the promise of capitalism’s breakdown and a new collectivity, but as these theorists claim, in a world of global finance and social media, it’s precisely the general intellect that serves capital more than ever.

The “force of knowledge” that Marx saw would increasingly replace the exploitation of immediate labor as the primary generator of wealth is now the site of a radically different, basically unrecognizable form of exploitation, whose effects are as real and devastating as they are virtual and intangible.

“How did Bill Gates become the richest man in America?” Žižek asks. Certainly not through exploitation of labor in the traditional sense: Microsoft can hardly be said to have bested its competitors by offering the lowest prices, and its programmers and software engineers are paid relatively well.

Rather, Gates made his fortune from monopoly patents, copyrights, and trademarks, charging rent for software that practically everyone uses: “Gates effectively privatised part of the general intellect and became rich from appropriating the rent that followed. The possibility of the privatisation of the general intellect was something Marx never envisioned in his writings about capitalism”.

Remember: Marx is aware that in the process he’s analyzing, “the accumulation of knowledge and skill, the general productive forces of the mind of society, is absorbed into capital”. But Marx specifies that the general intellect is absorbed into fixed capital, that is, into the making of more automated and efficient machinery.

If we are to make sense of Žižek’s claim that Marx failed to anticipate the general intellect’s privatization, we have to look at the other side of the “difference within capital itself”, namely at circulating capital. In the so-called information economy that’s come to define “post-industrial” capitalism, information is absorbed into capital not only as fixed capital, but also as circulating capital.

Of course, the term “post-industrial” is potentially misleading: Human beings will, for the foreseeable future, continue to rely on industrial production of some kind or another for the goods we need (or don’t need, as the case may be).

But Žižek, following Michael Hardt and Antonio Negri, emphasizes that contemporary capitalism’s “post-industrial” character isn’t due to the absence of industry. It’s because industrial production has been subordinated to “immaterial production”, the production of things other than physical objects. More and more, it’s not just the means of production but products themselves that are part of the general intellect, from commodity futures markets to dating websites.

The hegemony of immaterial production finds its concrete expression in the so-called financialization of the economy, a process that has driven capitalism’s globalization under the auspices of neoliberal financial institutions like the World Bank and International Monetary Fund.

The global financial system affects virtually every sphere of economic activity in every country on earth: Globalization of agriculture, for example, has meant that food prices are now due less to “supply and demand” than to speculation in commodity futures markets – investors profit when prices go up and millions in the global South can no longer afford food.

Is there a more chilling confirmation that the breakdown of wealth’s relation to immediate production has meant everything but collapse for capitalism?

Dean mostly concurs with Žižek’s analysis, arguing that what we call “post-industrial” capitalism (marked by shrinking welfare states and growing wealth inequality) is best described as “communicative capitalism”. In The Communist Horizon, she develops a theory of communicative capitalism as the ideal form of capitalism from the point of view of neoliberal class domination (e.g., austerity, “free trade”, privatization, deregulation, financialization, and so on).

Dean defines communicative capitalism as

an ideological formation wherein capitalism and democracy converge in networked communication technologies. Ideals of access, inclusion, discussion, and participation take material form in the expansion and intensification of global telecommunications. Changes in information networks associated with digitalization, speed, storage capacity accelerate and intensify elements of capitalism and democracy, consolidating the two into a single ideological formation.

True to Marx’s rejection of technological determinism, Dean acknowledges that nothing about ICT necessarily entails neoliberal economic policies: The ideology of communicative capitalism could, in theory, have accompanied Keynesian or social-democratic policies as well.

But as economist Richard Wolff points out, it’s precisely the development of computer systems and modern telecommunications that empowered capitalists in the industrialized West to relocate their production.

Beginning in the early 1970s, when the Nixon administration kicked off the present era of deregulated capital flows, ICT made it easier and easier for Western corporations to become “multinational”: to get up and leave those industrialized countries where labor had achieved modest welfare state protections (unionization, full employment, corporate taxes, safety regulations, etc.).

Recall Paul Krugman’s graph showing the stagnation of real wages starting in 1970: What has produced communicative capitalism, then, is the latest stage of the relationship described by Marx, between the most advanced means of production—ICT—and the existing relations of production—the accumulation of private property (and increasingly, of the privatized general intellect) in the hands of an elite intent on rolling back the already meager concessions made by capital.

Dean is very clear that communicative capitalism is the ideological convergence of capitalism and democracy. I claim that democracy’s essential role in justifying/perpetuating capitalism can help explain why the general intellect’s emancipatory potential has so far failed to come to fruition.

Dean observes that, reflecting the traces of anti-establishment protest in the cultural politics of neoliberalism, ICT in communicative capitalism is “celebrated for making work fun, inspiring creativity, and opening up entrepreneurial opportunities,” contributing “to the production of new knowledge-based enterprises.”

Taken at face value, this techno-utopianism, which presents capitalism as capitalism without exploitation, conformity, or drudgery, appears to be the ideology of communicative capitalism.

But if we take ideology to refer not only to what we consciously think, but also to what we do (and do “without thinking”), then the function of such techno-utopian rhetoric is to mask the ideological truth of communicative capitalism. “Its more pronounced legacy,” Dean notes, “has been widespread de-skilling, surveillance, and the acceleration and intensification of work and culture: the freedom of ‘telecommuting’ quickly morphed into the tether of 24/7 availability, permanent work.”

The notion of democracy works to suppress this second level of meaning, because our engagement with the internet and other communication technologies comes to define what it means to participate in the general intellect.

It’s hard to find a discussion of the impact of social media on our lives that doesn’t make some reference to the new sense of connection/collectivity/connectivity made possible by this virtual social space.

In mainstream media, Facebook and Twitter are credited with sparking uprisings that toppled dictators, struggles that involved masses of people marching and sometimes dying in the streets. Our communities have, according to this narrative, been displaced into an online realm, a displacement that enriches us all by connecting us to people, products, and ideas we may never have even known existed. The general intellect finds its most literal expression in the wealth of information available “at the click of a button”.

In an era of permanent job insecurity, if we as individuals want to be competitive professionally and connected socially, we have little choice but to integrate into the virtual collective made possible by ICT. Communicative capitalism casts this forced integration as the very essence of democratic participation.

This concept finds its logical extreme in the almost comical demand of the short-lived Pirate Parties of Sweden and Germany for “liquid democracy”, a politically neutral model in which citizens vote directly on referendums over the internet.

The general intellect has indeed been privatized, and it appears – for the moment – that capitalism has put off the contradiction between the free time potentially created by advanced technology and the role of labor-time in measuring wealth.

When Dean refers to communicative capitalism’s “tether of 24/7 availability, permanent work”, she’s pointing to the resolution of the problem of free time. When work is play and “free” time is treated as an extension of the workday, there is no free time. It doesn’t exist.

Marx argues that class struggle in the sphere of social relations can claim and reshape society’s productive forces in the service of a radically egalitarian society; yet today our participation in that sphere is confined to the work of posting on Facebook, circulating petitions, planning actions and events, reading and replying to emails, and so on.

This, too, is work – work for which we aren’t paid, but which turns a tiny elite of capitalists like Mark Zuckerberg into mega-billionaires. We (“the rest of us”) are the new proletariat, because we’re all subject to an exploitation that is as undeniable as it is unrecognizable.

Our social space has itself been displaced from physical reality onto a collective consciousness that, by allowing us to pursue our desire for community and professional success while sitting at home, appears to rule out the very possibility – to say nothing of the necessity – of genuinely revolutionary, collective struggle.

Is technology capitalist? (Part I of II)

Today, you don’t have to be a Marxist to notice that the increasing automation of labor has radical implications for workers in advanced capitalist economies. In his December 8, 2012 New York Times column, economist Paul Krugman explains the phenomenon of “reshoring” – the return of manufacturing to the industrialized nations it once fled.

As robots replace workers, Krugman argues, the labor costs of manufacturing in the United States go down relative to labor costs in, say, Indonesia or Bangladesh. The US also provides the added benefit of bigger markets and better infrastructure. But the jobs that left with the manufacturing industry aren’t coming back with it.

The phenomenon of reshoring is made possible by something Krugman calls “capital-biased technological change”: technological development driven by and in the interest of capital. He illustrates this point by presenting data showing the downward trajectory of US labor’s share of income between 1970 and 2010.

As automated technology becomes more and more sophisticated and efficient, it’s more and more profitable for so-called middle class manufacturing jobs to be performed by robots.

Those jobs have, over the last several decades, been steadily replaced by low-wage service sector jobs that make up a growing trend towards chronic underemployment, permanent job insecurity, deskilling, and de-unionization of the US workforce. All of those factors contribute to downward pressure on wages, producing the stagnation of real wages shown in the data.

Krugman is quick to shoot down the standard liberal line about the need for better education: “Better education,” he writes, “won’t do much to reduce inequality if the rewards simply go to those with the most assets.” What he’s pointing to is a contradiction of capitalism—between the interests of capital and the interests of labor.

He himself isn’t quite sure what to do with this analysis, and admits as much, concluding that, “the capital/labor dimension of inequality…has echoes of old-fashioned Marxism—which shouldn’t be a reason to avoid facts, but too often is. And it has really uncomfortable implications.”

If politics in the liberal-democratic frame has proven incapable of resolving this contradiction—as Krugman acknowledges—then maybe that “old-fashioned Marxism” is worth revisiting and updating. You don’t have to be a Marxist to see that there’s a problem with who owns and benefits from the means of production—but it doesn’t hurt.

Marx himself doesn’t see anything inherently capitalist about technology or automation. In the passage from his Grundrisse known as the “Fragment on Machines”, Marx lays out a pretty intricate argument that the driver of technological change isn’t technology itself, or even the natural sciences—what really points technological development in this or that direction is society’s relations of production, the “who-owns-what” of the economy.

Marx begins the story of “capital-biased technological change” with a schema of the production process capitalism inherited from mercantilism. Its elements are raw material (what is worked on), living labor (who works on it), and the means of production (what they use to work on it).

Each of these has its own specific use-value, the value that is created by its use: Raw material is consumed insofar as it’s made into a product; living labor is consumed in the work process itself, requiring rest and sustenance between work days; and the means of production, which can be as simple as a hand tool, is consumed far more slowly, wearing out over many cycles of use.

Together, these three elements have to produce more value for the enterprise than the capitalist invested in them. The capitalist takes this excess value (which Marx calls the surplus value) for themselves, and the cycle begins anew. The business must create surplus value in order to grow; the more efficient production becomes, the more surplus value it creates.
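
A toy illustration of that cycle (the numbers are invented for arithmetic’s sake, not taken from Marx): suppose a capitalist advances $100 in one cycle of production,

$$\underbrace{\$50}_{\text{raw material}} + \underbrace{\$20}_{\text{wear on means of production}} + \underbrace{\$30}_{\text{wages for living labor}} = \$100,$$

and the finished product sells for $120. The $20 difference is the surplus value; reinvesting the $120 restarts the cycle at a slightly larger scale, which is what it means, concretely, for the business to grow.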

But human skill and energy are limited, and capital cannot abide that limit—so it’s the means of production that must constantly be improved and replaced. In the process, the distinction between living labor and the means of production is eroded—for the first time in economic history.

The initial, pre-capitalist schema gives way to a new schema: Instead of raw material, living labor, and means of production, we have circulating capital (not just raw material, but also product) and fixed capital (means of production).

“The relationship of the factors to each other,” Marx writes, now appears as a “difference within capital itself”, so that the elements are just two different kinds of capital. This transformation sounds abstract, but bear with me. What’s Marx really saying?

The three elements are based on the notion of labor inherited from mercantilism, where each particular element is indispensable. You can no sooner have production without living labor than production without, well, the means of production. But over time, this relation of the elements to each other is absorbed by capital, and changed to fit capital’s insatiable demand to circulate.

Workers, no longer a distinct element from the means of production, are instead subsumed under the category of fixed capital: Living labor is now just another means of production. The “difference within capital itself” is the difference between circulating capital and fixed capital, between what the enterprise circulates and what it keeps.

Capital has an incentive to rely less and less on the strength and skill of human workers in favor of more efficient means of production. This process culminates in the machine. In the form of the machine, and above all the automated machine, “the means of production is transformed into a form of existence most suitable to fixed capital and capital as a whole”.

What’s notable here is that when Marx talks about capital, he describes it not as a passive object, but as an active subject, independent of any individual human’s consciousness. Capital turns the means of production into a part of itself, imbued with capital’s drive to circulate—capitalism’s development is due to an inner logic of capital that’s at work whether we’re aware of it or not.

The most “primitive” means of production are hand tools, which allow/require their wielder to do work requiring skill or strength. By the time Marx wrote the fragment in 1857, technology had already developed so far in the direction of efficiency and automated self-sufficiency that an individual worker’s contribution to the process was often minimal.

The machine, by contrast, is never just a tool, but part of an entire automated system, the monstrous organ of some unfathomable beast. The workings of the system are too intricate and opaque for the ordinary worker to exercise any agency in its use: Rather, the machine dictates what and how to produce, leaving the worker no choice but to observe, maybe press a button or two, mostly just keep the machine running smoothly—in the only manner it was built to run.

No longer skilled creators of the product, workers are instead made increasingly redundant. Some lose their jobs, while others keep working but are relegated to the status of an observer or operator, a status that serves to justify permanent job insecurity and downward pressure on wages.

We can now begin to recognize what Marx’s analysis can tell us about the problem of “capital-biased technological change”: Anxieties about our powerlessness in the face of technological change are really anxieties about capital as a subject, as something not entirely within our control as humans. After all, capital is the agent that drives “technology” to make possible ever greater levels of productivity.

In order to do that—to continually revolutionize the means of production—capital needs something from humans that can’t be replaced by automated machines: Knowledge. “The accumulation of knowledge and skill, the general productive force of the mind of society, is absorbed into capital,” Marx says, “and thus appears as a characteristic of capital, more precisely as a characteristic of fixed capital.”

The knowledge/skill in question isn’t that of the individual worker, whose job it is to supervise the machines. It’s the knowledge/skill accumulated by society as a whole, a sort of collective societal consciousness Marx calls the “general intellect”.

That means the physical and chemical laws we glean from science are applied to the improvement of machines, resulting in more automation and efficiency. Conversely, it also means that “invention becomes a business”, so that the application of scientific knowledge to production actually becomes a driving force of scientific research itself.

Since this kind of knowledge is manifest in the workings of the machine, and not in the worker, the worker is no longer the primary agent of labor. The primary agent is, in fact, the machine—to be precise, it’s the general intellect embodied in the machine.

What’s important for Marx is that the machine, as a means of production, creates value for the capitalist by increasing the ratio of “surplus labor” to “necessary labor”. Necessary labor is the amount of labor the capitalist needs to recoup the cost of buying labor to begin with; surplus labor is that little (or not so little) bit extra beyond the “cost of doing business”.

So the greater the ratio of surplus labor to necessary labor, the more surplus value the capitalist takes for themselves. In effect, technological advancement lets an enterprise produce a lot, quickly, without much input from workers. “[C]apital here—completely unintentionally—reduces human labor, the expenditure of energy, to a minimum,” Marx writes.
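
A schematic example of that ratio (my numbers, purely for illustration): take a 10-hour working day in which 5 hours of labor are enough to cover the worker’s wage, so necessary and surplus labor are 5 hours each. If new machinery doubles productivity, necessary labor falls to 2.5 hours; if the working day stays at 10 hours, surplus labor swells to 7.5 hours:

$$\frac{\text{surplus labor}}{\text{necessary labor}}: \qquad \frac{5}{5} = 100\% \;\longrightarrow\; \frac{7.5}{2.5} = 300\%$$

The same productivity gain could instead have delivered the old output in a 5-hour day; under capital, it triples the rate of exploitation.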

But capital also has a tendency to convert potential “free time” into surplus labor. It’s not enough to produce the same amount of products in less time—on the contrary, capital demands growth, which means it demands more products in the same amount of time.

Since the primary source of wealth is no longer labor “in its immediate form” (labor performed by a worker, using the means of production, on raw material), the relationship between wealth and human labor loses all proportion.

This is because, according to the so-called labor theory of value, a product’s exchange-value (its market value) is measured in terms of the time it takes the worker to produce it. Exchange-value, in turn, is the measure of use-value; particular use-values, taken together, constitute wealth.

On one hand, the foundation of value in capitalism remains labor-time; on the other hand, the direction of technological change undermines that very foundation. When this chain of measurement connecting wealth with labor-time breaks down completely, Marx says, the chains of wage slavery will be broken with it.

So at the same time as technology in the hands of the bourgeoisie is a weapon in the class war, it also contains, within its very idea, the seeds of capitalism’s collapse. A socialist use of technology would have, as its goal,

The free development of individualities and, hence, not the reduction of necessary labor in order to get surplus labor, but rather, the reduction of the necessary labor of society to a minimum, which contains the promise of the artistic, scientific, etc. development of individuals through the time and resources that have been made free to all.

Not only does technology itself have no agency in Marx’s story, but capital, too, doesn’t have a monopoly on subjectivity. Society’s productive forces don’t exist in a vacuum—they are connected with the social relations that determine how they’re used, what they’re used for, and who benefits from their use.

The distinction between the “reduction of necessary labor” in the interest of extracting surplus labor and the “reduction of the necessary labor of society” is key, because it points to not only the possibility, but in fact the necessity of continuing the development of the means of production in socialism.

It would be wrong to construe this argument as some kind of techno-utopianism: The potential for society’s productive forces to create more “free time” for everyone is just that, a potential. Since subjectivity resides not in the means of production but in capital, the time saved by technology will never be “free” until people liberate society from capital’s tendency to maximize the ratio of surplus labor to necessary labor.

Today, 150 years after Marx wrote the Grundrisse, capital has proven itself to be nothing if not an adaptable subject. In its current neoliberal form, capitalism’s hegemony is global. Does that necessarily mean Marx’s prognosis was wrong? Check back next week for a couple of the answers I find most convincing.

Learning to be Not Quite White (Part III of III)

This post is the last in a three-part series about my experience of growing up as an Iranian American in the United States. Here, I tie up some loose ends from parts I and II, so if you haven’t read them, I encourage you to do so before reading this conclusion.

– KS

Last week, I wrote about how my parents, immigrants from Iran who met in Berkeley, California, refused in some very important ways to assimilate into white Americanness. But that’s not the whole story.

They also, as far back as I can remember, refused to integrate fully into the Iranian expatriate community in the US. In particular, my mother’s disdain for the so-called Persians of Tehrangeles played a role in my father’s decision to turn down a job offer from the University of Southern California in favor of a post at the University of Illinois.

There are, of course, “LA Persians” everywhere, including my hometown: Even there, my parents have relatively few close friends in Champaign-Urbana’s insular, compact circle of Iranian expats.

Over the years, my mother’s disdain became my own. I learned—first from her, but later from my own interactions—that Persians were cliquish, superficial, conservative, apolitical, materialistic, and racist. Not all Iranians and Iranian Americans in the United States are, by this definition, Persians; but in my experience, a great many of them are.

Persians call themselves Persians because of the different associations white Americans have with the terms “Iranian” and “Persian”: “Iranian” conjures up images of the Islamic Republic, scary men with beards, the hostage crisis, oppressed women covered in bedsheets, terrorism, religious fanaticism, the threat of nuclear holocaust, the famed Axis of Evil. “Persian”, on the other hand, makes you think of cats and rugs.

The choice, for wealthy Iranians eager to accommodate the ignorance and prejudice of their gracious white hosts, is clear. But the intensity of my disgust at this attitude is, at least partially, a projection onto others of something I find deeply unsettling about myself: I use “Persian” as a derogatory term to describe a tendency of Iranians in the United States—a tendency I’m inclined to condemn as cowardice. But is the fear of those Iranians any less grounded, their masking of a potentially dangerous difference any less justified than my own anxieties?

In his Beyond Good and Evil, Friedrich Nietzsche writes, “He who battles monsters should take care, lest he, in so doing, become a monster himself. And when you gaze long into the abyss, the abyss gazes also into you.” What do people do when they look in the mirror? Do they fix their hair? Do they judge what they see? Do they confront the phantasmagorical image of shame and self-loathing gazing back into them?

What do they do? Do they cower? Do they lash out at the monstrous excess, that part of themselves that they can’t control, the unconscious, the abyss that lies at the center of their own subjectivity?

I originally wrote this essay for a seminar about representations of difference – mostly focused on issues of race and ethnicity, but also intersecting with issues of gender, sexuality, class, and so on. At the first meeting, I began a comment with the words, “I feel pretty white….” In the days and weeks that followed, I was haunted by those words. They rang hollow, and the sound of their emptiness was deafening.

This is a cathartic sort of writing for me, because as I write, I excavate realities of my experience, realities long since buried. When I was 10, the same year as the terrorist attacks of 9/11, I stopped the Persian reading and writing lessons I had been taking from my grandmother.

A decade later, I decided that my academic passion lay in the cultural traditions of Germany and Scandinavia. To call my professional interests Eurocentric is, in that sense, a severe understatement. In fact, I’ve gotten used to regarding my lack of interest in Iranian culture as a sort of virtue, an act of personal agency that subverts the expectation that an Iranian American will be concerned, first and foremost, with Iranian things.

Who can look into the murky void, the dark night of my soul, and say for sure that I only quit my Persian lessons out of laziness? Who can say for sure that it’s only my taste, temperament, and political orientation that drew me towards Brecht and von Trier, and not Rumi and Kiarostami?

Who can say? Perhaps not even I. But every day I discover more and more clues that point to the hidden flip-side of my supposed personal agency—a ravenous hunger, long suppressed, to “reconnect with my roots”.

On my most recent trip to Iran, in 2011, I resolved to relearn the Persian script. I’m still as fluent in Persian as I ever was (it was, after all, my first language, the language my parents spoke at home), but the fact that I don’t work with Persian texts has kept me at what must be, at best, a first-grade reading level—to say nothing of writing. That impulse to relearn the alphabet only hints at the true extent of my desire to recover an Iranian identity that seems never to have been fully, consciously mine to begin with.

Since I began this work of dredging up my past, I’ve started watching the Bravo reality show Shahs of Sunset, which follows the drama and hijinks of young, wealthy, attractive LA Persians. I’d been aware of this show for a couple of years, but had remained resolute in my refusal to watch it. It’s an offensive, exploitative reinforcement, I thought, of already pervasive stereotypes about Iranian Americans—and besides, I wasn’t interested in how these people lived their lives anyway.

But it only took a couple of episodes until I was hooked; to see young Iranians so comfortable with a clearly defined, explicitly non-white ethnic identity had an allure I couldn’t get anywhere else.

In one episode, the group rents a pimped-out party bus, complete with neon lights and stripper poles. “It’s so Persian that it’s actually Saudi,” one character remarks. “So Persian that it’s actually Saudi”? I grew up thinking that, in the world I wanted to be a part of, no one knew or gave a damn what the difference between “Persian” and “Saudi” even was.

These Persians might be the opposite of everything I stand for as someone who considers themselves a radical leftist intellectual – but they have something I don’t: A sense of security and community that seems to flow organically from their ethnic identification.

What box do the Shahs of Sunset check on application forms? I only check “white” when I have the option of specifying that I am not “European” but “Middle Eastern”—if not, I decline to answer. This isn’t a principled position so much as it is a reflection of my own uncertainty about the role that the source of my nonwhiteness, my Iranianness, actually plays in my life.

This year, on the day after Thanksgiving, I went to the Metropolitan Museum of Art with two old white friends from Illinois and a new friend I met in New York. I spent most of my time there in the gallery displaying Turkish, Arab, Iranian, and other Middle Eastern “artworks” (artifacts). It was unprecedented for me to actually enjoy walking through an art gallery, but I was enthralled. I wandered from display case to display case, immediately moving on when I discerned that the object in question wasn’t from historical Iran.

That’s all I wanted to see: just Iranian things, no matter how old, no matter how inscrutable the calligraphy, no matter how little I may have in common with the people who made them.

I don’t even remember learning any new facts: I just looked, thought back to my grandmother’s stories from the Shahnameh of Ferdowsi, and explained to one of my friends with glee that my mom is from the capital of Iran, and my dad is from the capital of Iran a thousand years ago; that legend has it chess and backgammon came into being in a territorial dispute between Iran and India; that written Persian looked nothing like Arabic script until the Arabs invaded and conquered Iran in the 7th century.

Some rich white man paid good money to bring these fragments from my prehistory here, to me, in New York City, in 2013. Perhaps some of those pieces were buried underground: perhaps deeper than my own desire not just to be whole, but also to be proud of it. And me? I’ve only just started digging.

Learning to be Not Quite White (Part II of III)

In my last post, I sketched a phenomenological account of my experience of being “Not Quite White” as an Iranian American in the United States.

Recall that in The Racial Contract, Charles W. Mills stresses that no possible division within the category of “nonwhite” even approaches the radical nature of the white-nonwhite split. But the categories, he insists, must be “fuzzified” to allow for not only shades of nonwhiteness (including blackness), but also the historical instability of whiteness itself.

It has not always been the case that all people of European descent, including Jews, were treated as “fully” white (i.e., Anglo-Saxon). The “fuzzy” space occupied by Irish and Italian Americans into the early 20th century was a liminal position, an in-between place; I have lived my life in an even more liminal position, just on the other side of the white-nonwhite split.

It’s precisely because the racial and ethnic identity of Middle Easterners in the United States is so “fuzzy”, so shrouded in ambiguity, that my account has the potential to reveal something very fundamental about the experience of difference. So I’d like to pick up where I left off in my previous post, to develop further the phenomenology of Not Quite White, again drawing primarily on my own memory and experience.

Growing up in Illinois as a child of Iranian immigrants, the Persian and English languages demarcated not just my vocabulary, but social space itself: Persian marked the space of home, of family, and of comfort, while English was the medium through which I ventured out into the uncertain world of school, sports, television, films, and friends. Most things had a Persian name and an English name, but sometimes things were so tied to their cultural context—the dynamics of friend groups at school, or my parents’ strict ban on sleepovers—that they were difficult to translate into the other.

For most of my childhood, I couldn’t or didn’t want to bridge the gap between the world of my family and the world of my peers. I feel I’m only now excavating those memories and that history, and it’s only now that I have the language to speak the otherwise incommunicable, even the repressed.

As I mentioned previously, as a child, I had no way of linking this tension or incommensurability with questions of “white” or “black”—I only knew that I was neither of these, and that I was embarrassed to bring my friends home. As I delve into my memories from this time, it’s quite clear that what I feared most (and tried hardest to prevent) was for my family’s Iranianness to be revealed for the outside world (most importantly, my friends) to see.

The prospect of “bridging the gap” was, for me, the prospect of being “outed” as not being American enough: It offered nothing but humiliation. I was afraid my friends would sniff and sneer at my Iranian lunch—dishes like dolme, stuffed grape leaves, and adas polow, lentil rice. I now find this food scrumptious, but for a good chunk of elementary school, more often than not, I’d discreetly throw it away.

I was afraid my friends would come over to our house and see that we had no video games, no Nerf guns, no sugary cereal, and no soda. I was afraid my mom would bring us a spread of fruit and nuts (she always did), and that my friends wouldn’t eat any (they almost never did). I was afraid they would discover, to their horror, that I wasn’t allowed to watch television (except for PBS, of course), a fact I hid somewhat effectively by catching up on cartoons every Saturday morning at the house of my ever-indulgent grandmother.

So in my own interactions with peers, at school and elsewhere, I made a conscious effort to erase from my persona any traces of difference. Of course, my anxieties didn’t always stem directly or necessarily from the fact that my parents were Iranian immigrants; but my parents, as they now tell me (with some regret, having read my previous post), made very little effort to assimilate into mainstream US culture.

They distrusted not only the food and media Americans consumed, but Americans themselves: “Sleepovers” at friends’ houses were out of the question, because who knows what kind of crazy people with guns one might find in the house of an American? I took my parents’ policies not only as a rejection of my friends’ culture, but also (implicitly) as a threat directed at me personally, as an American: namely, that I, too, should take care not to internalize the behaviors and values of my friends, lest I bring them home with me.

This isn’t to say, however, that my parents were totally oblivious to the tensions of growing up with my ethnicity in the United States. One day, while I was in middle school, my mom gave me a pair of tweezers and taught me how to pluck my eyebrows. On the whole, Iranians (and Middle Easterners in general) are known for being a hairy bunch—not only compared to whites, but virtually all other ethnic groups as well. Although I wasn’t fully aware of this at the time, the more a phenotype deviates from the Northern European ideal, the more white supremacist culture devalues those “deviant” characteristics, deriding them as unattractive and abnormal.

Iranians and other Middle Easterners are, by and large, the most European-looking of the non-European ethnicities, but the differences that do exist between Europeans and Middle Easterners are marked: These include darker skin tones, ranging from “tan” (meaning tanned white skin) to cappuccino to milk chocolate; shorter statures; and the early appearance and abundance of body hair.

On this day, my mom was concerned with my nascent “unibrow”, the hair between the eyebrows that supposedly renders the brows one long strip. This is a characteristic that, I now know, is ridiculed in the media and in everyday conversation. Mockery of unibrows is particularly prominent in racist/sexist discussions of how Middle Eastern women look, where unibrows are often brought up as evidence that Middle Eastern women are brown and ugly.

Whether my mother knew it or not (recent conversations suggest that she didn’t), she had formally introduced me to a harsh reality of being Not Quite White in mainstream US culture: While we as nonwhites measure our attractiveness and desirability against an impossible white supremacist standard of beauty, we as Middle Eastern nonwhites are perhaps uniquely situated to mask or obscure our physical difference—to “pass” as white.

I was about to finish the fifth grade when my father gave me my first electric razor (James Bond uses it, he told me). None of the boys in my class—white, black, east Asian—seemed to have hair anywhere but the tops of their heads: not on their faces, and certainly not on their legs. A year or so earlier, a young boy, redheaded, freckled, and white as a sheet, saw my legs on the playground and called me “wolfman”.

I was in sixth grade when I first tried to use James Bond’s razor to shave my legs: I wanted to be rid of this curse, this overgrown shrubbery that threatened to cast me, for the rest of my life, as a wolfman—more animal than human. Over the years, I’ve tried various methods of hair removal on various parts of my body. Some hurt a little, some hurt a lot. I still pluck my eyebrows every night, and more recently I’ve started plucking them along the bottom as well, to make them look thinner and less “bushy”.

There’s an operative assumption in this and all other attempts at downplaying my ethnic identity, to which I will return: that I can control other people’s perception of me, including their perception of my ethnicity.

Body hair can be removed, or at least managed. But not everything is so mutable: A hair is a hair, but (to quote Marlo Stanfield) “my name is my name.” My name has an interesting history that’s worth recounting. On my maternal grandmother’s insistence, my name was to be an ethnic Persian (that is, not Arabic) name. In the Shahnameh (“Book of Kings”) of Ferdowsi, the national epic of Iran and the Persian-speaking world, Kumars is the name of the first human, who had the foresight to also make himself the first king.

My parents, for their part, tested this spelling out on their white friends, to see how they would pronounce it. They reportedly all got it right the first time, which by my count makes them the only white people in history to have done so. In fact, American English speakers pronounce “Ku” not as “Kyoo”, but as “Koo”. This is due in no small part to the common Indian name Kumar, which many Americans recognize from—if nowhere else—the buddy comedy Harold and Kumar Go to White Castle.

The “mars” is even trickier: Its correct pronunciation is the complete opposite of the English pronunciation of Mars (the planet and Roman god of war): The “a” is the same as in “cat”, the “r” is rolled, and the “s” is pronounced like “sass”.

Coaching a well-meaning friend or acquaintance through the “authentic” (correct) Persian pronunciation of my name is just that—work. It’s an ordeal. Without exception, those who undertake the challenge either fail or lapse immediately upon succeeding.

For most of my life, I never corrected anyone’s pronunciation of my name—not for lack of patience, exactly, but for fear that it would call attention to my Iranianness (As a child, I used to wish my name were Jason, which I now know to be an almost comically white name). As a result, some of my oldest friends still call me “Koo-Mars”: Despite now knowing better, they can’t shake the habit.

It’s only in the last five years or so that I’ve begun to break my own habit of quietly acquiescing to whatever embarrassing butchery follows the teacher’s longest pause during roll call. I’ve told myself the fact that no one I meet can pronounce my name doesn’t bother me, that I know better than to take it personally—but the truth is that it bothers me to no end. I rarely remember the names of people I meet, because as soon as I introduce myself to someone, instead of listening for their name, I’m already bracing to repeat mine, to spell it, to explain its origin to my well-intentioned interlocutor.

To make matters simpler—for others, too, but mostly for myself—I’ve decided that my name has a correct mispronunciation in English: As long as it’s “Kyoo” and not “Koo”, I let the rest slide. This struggle over my name is as much an internal struggle as it is an external one—for two decades, I told myself I took no pride in my name, because I took no pride in the Iranian identity it stands for.

And just as I failed to be assertive about how my name is spoken, so too did I fail to claim anything like a positive Iranian identity to fill in the gap left by my alienation from white Americanness. But with maturity and reflection has come the realization that while I’m frustrated by the difficulty people have with my name, I’m not ashamed of it.

In stark contrast to East Asian immigrants in the United States, immigrants from the Middle East and South Asia generally don’t give their children Anglo-Saxon first names: My name is a testament to my parents’ refusal to assimilate into white Americanness, and today, more than ever, I’m proud and grateful that my name isn’t Jason Salehi.

So in the final entry in this series, I’d like to take up this question: Can “Iranian American” ever be more than a hyphen, an in-between place, a signifier of lack? In other words: If I’ve disavowed part of my identity, can it also be reclaimed?