For two years, I tracked down dozens of . . . Chinese in Upper Egypt [who were] selling lingerie. In a deeply conservative region, where Egyptian families rarely allow women to work or own businesses, the Chinese flourished because of their status as outsiders. They didn’t gossip, and they kept their opinions to themselves. In a New Yorker article entitled “Learning to Speak Lingerie,” I described the Chinese use of Arabic as another non-threatening characteristic. I wrote, “Unlike Mandarin, Arabic is inflected for gender, and Chinese dealers, who learn the language strictly by ear, often pick up speech patterns from female customers. I’ve come to think of it as the lingerie dialect, and there’s something disarming about these Chinese men speaking in the feminine voice.” . . .
When I wrote about the Chinese in the New Yorker, most readers seemed to appreciate the unusual perspective. But as I often find with topics that involve the Middle East, some people had trouble getting past the black-and-white quality of a byline. “This piece is so orientalist I don’t know what to do,” Aisha Gani, a reporter who worked at The Guardian, tweeted. Another colleague at the British paper, Iman Amrani, agreed: “I wouldn’t have minded an article on the subject written by an Egyptian woman—probably would have had better insight.” . . .
As an MOL (man of language), I also take issue with this kind of essentialism. Empathy and understanding are not inherited traits, and they are not strictly tied to gender and race. An individual who wrestles with a difficult language can learn to be more sympathetic to outsiders and open to different experiences of the world. This learning process—the embarrassments, the frustrations, the gradual sense of understanding and connection—is invariably transformative. In Upper Egypt, the Chinese experience of struggling to learn Arabic and local culture had made them much more thoughtful. In the same way, I was interested in their lives not because of some kind of voyeurism, but because I had also experienced Egypt and Arabic as an outsider. And both the Chinese and the Egyptians welcomed me because I spoke their languages. My identity as a white male was far less important than my ability to communicate.
And that easily lobbed word—“Orientalist”—hardly captures the complexity of our interactions. What exactly is the dynamic when a man from Missouri observes a Zhejiang native selling lingerie to an Upper Egyptian woman? . . . If all of us now stand beside the same river, speaking in ways we all understand, who’s looking east and who’s looking west? Which way is Oriental?
For all of our current interest in identity politics, there’s no corresponding sense of identity linguistics. You are what you speak—the words that run throughout your mind are at least as fundamental to your selfhood as is your ethnicity or your gender. And sometimes it’s healthy to consider human characteristics that are not inborn, rigid, and outwardly defined. After all, you can always learn another language and change who you are.
British colonial policy . . . went through two policy phases, or at least there were two strategies between which its policies actually oscillated, sometimes to its great advantage. At first, the new colonial apparatus exercised caution and occupied India by a mix of military power and subtle diplomacy, the high ground in the middle of the circle of circles. This, however, pushed them into contradictions. For, whatever their sense of the strangeness of the country and the thinness of colonial presence, the British colonial state represented the great conquering discourse of Enlightenment rationalism, entering India precisely at the moment of its greatest unchecked arrogance. As inheritors and representatives of this discourse, which carried everything before it, this colonial state could hardly adopt for long such a self-denying attitude. It had restructured everything in Europe—the productive system, the political regimes, the moral and cognitive orders—and would do the same in India, particularly as some empirically inclined theorists of that generation considered the colonies a massive laboratory of utilitarian or other theoretical experiments. Consequently, the colonial state could not settle simply for eminence at the cost of its marginality; it began to take initiatives to introduce the logic of modernity into Indian society. But this modernity did not enter a passive society. Sometimes, its initiatives were resisted by pre-existing structural forms. At times, there was a more direct form of collective resistance. Therefore the map of continuity and discontinuity that this state left behind at the time of independence was rather complex and has to be traced with care.
Most significantly, of course, initiatives for modernity came to assume an external character. The acceptance of modernity came to be connected, ineradicably, with subjection. This again points to two different problems, one theoretical, the other political. Theoretically, because modernity was externally introduced, it is explanatorily unhelpful to apply the logical format of the ‘transition process’ to this pattern of change. Such a logical format would be wrong on two counts. First, however subtly, it would imply that what was proposed to be built was something like European capitalism. (And, in any case, historians have forcefully argued that what it was to replace was not like feudalism, with or without modificatory adjectives.) But, more fundamentally, the logical structure of endogenous change does not apply here. Here transformation agendas attack as an external force. This externality is not something that can be casually mentioned and forgotten. It is inscribed on every move, every object, every proposal, every legislative act, each line of causality. It comes to be marked on the epoch itself. This repetitive emphasis on externality should not be seen as a nationalist initiative that is so well-rehearsed in Indian social science. . . .
Quite apart from the externality of the entire historical proposal of modernity, some of its contents were remarkable. . . . Economic reforms, or rather alterations . . . did not foreshadow the construction of a classical capitalist economy, with its necessary emphasis on extractive and transport sectors. What happened was the creation of a degenerate version of capitalism—what early dependency theorists called the ‘development of underdevelopment’.
Around the world, capital cities are disgorging bureaucrats. In the post-colonial fervour of the 20th century, coastal capitals picked by trade-focused empires were spurned for “regionally neutral” new ones. But decamping wholesale is costly and unpopular; governments these days prefer piecemeal dispersal. The trend reflects how the world has changed. In past eras, when information travelled at a snail’s pace, civil servants had to cluster together. But now desk-workers can ping emails and video-chat around the world. Travel for face-to-face meetings may be unavoidable, but transport links, too, have improved.
Proponents of moving civil servants around promise countless benefits. It disperses the risk that a terrorist attack or natural disaster will cripple an entire government. Wonks in the sticks will be inspired by new ideas that walled-off capitals cannot conjure up. Autonomous regulators perform best far from the pressure and lobbying of the big city. Some even hail a cure for ascendant cynicism and populism. The unloved bureaucrats of faraway capitals will become as popular as firefighters once they mix with regular folk.
Beyond these sunny visions, dispersing central-government functions usually has three specific aims: to improve the lives of both civil servants and those living in clogged capitals; to save money; to redress regional imbalances. The trouble is that these goals are not always realised.
The first aim—improving living conditions—has a long pedigree. After the second world war, Britain moved thousands of civil servants to “agreeable English country towns” as London was rebuilt. But swapping the capital for somewhere smaller is not always agreeable. Attrition rates can exceed 80%. . . . The second reason to pack bureaucrats off is to save money. Office space costs far more in capitals. Agencies that are moved elsewhere can often recruit better workers on lower salaries than in capitals, where well-paying multinationals mop up talent.
The third reason to shift is to rebalance regional inequality. Norway treats federal jobs as a resource every region deserves to enjoy, like profits from oil. Where government jobs go, private ones follow. Sometimes the aim is to fulfil the potential of a country’s second-tier cities. Unlike poor, remote places, bigger cities can make the most of relocated government agencies, linking them to local universities and businesses and supplying a better-educated workforce. The decision in 1946 to set up America’s Centers for Disease Control in Atlanta rather than Washington, D.C., has transformed the city into a hub for health-sector research and business.
The dilemma is obvious. Pick small, poor towns, and areas of high unemployment get new jobs, but it is hard to attract the most qualified workers; opt for larger cities with infrastructure and better-qualified residents, and the country’s most deprived areas see little benefit.
Others contend that decentralisation begets corruption by making government agencies less accountable. A study in America found that state-government corruption is worse when the state capital is isolated—journalists, who tend to live in the bigger cities, become less watchful of those in power.