Top 100 CAT 2025 VARC Questions PDF with Video Solutions

Nehal Sharma

Nov 17, 2025

Cracku’s Top 100 CAT 2025 VARC Questions PDF with Video Solutions is a complete, easy-to-use resource for improving your CAT VARC preparation. It covers all the major areas of the VARC section: Reading Comprehension, Para Jumbles, Para Summary, and Odd One Out, helping you build accuracy and a strong understanding of each question type.

Every question includes a simple, step-by-step video explanation that shows the correct method, reading approach, and logic needed for CAT. Whether you are aiming for a 99+ percentile or want to strengthen your basics in RC and VA, these 100 questions will help you practice across all difficulty levels for CAT 2025.

Why Practice with Cracku’s Top 100 VARC Questions for CAT 2025?

The CAT 2025 VARC section requires strong reading skills, sound logic, and regular practice. Solving high-quality questions with clear video solutions helps you:

  • Improve comprehension through structured reading

  • Learn the right approach to Para Jumbles and Para Summary

  • Build accuracy with expert tips and solving techniques

  • Gain confidence to handle tricky RC passages

With the CAT 2025 VARC PDF and Video Solutions, you can revise quickly, avoid common mistakes, and steadily improve your performance in one of the toughest sections of the exam.

List of Top 100 CAT VARC Questions

Instructions for the set:

Read the passage carefully and answer the following questions:

The brave new economy being rebuilt in the wake of the financial meltdown is being built on low-wage service work, as manufacturing’s decline has accelerated and construction ground to a halt. At the beginning of the Great Recession, economist Heather Boushey noted at Slate, manufacturing and construction made up fully half the jobs lost, along with financial services and other business fields, and writers declared the “Mancession” or “He-cession” or even, as Hanna Rosin’s popular book has it, The End of Men. But as others have pointed out, as the recession drags on, it’s women who’ve faced the largest losses, not only in direct attacks on public sector jobs that are dominated by women, but in increased competition from the men pushed out of their previous professions. Some 60 percent of the jobs lost in the public sector were held by women, according to the Institute for Women’s Policy Research. And women have regained only 12 percent of the jobs lost during the recession, while men have regained 63 percent of the jobs they lost.

Women may be overrepresented in the growing sectors of the economy, but those sectors pay poverty wages. The public sector job cuts that have been largely responsible for unemployment remaining at or near 8 percent have fallen disproportionately on women (and women of color are hit the hardest). Those good union jobs disappear, and are replaced with a minimum-wage gig at Walmart—and even in retail, women make only 90 percent of what men make.

“All work is gendered. And the economy that we have assigns different levels of value based off of that,” says Ai-Jen Poo, executive director of the National Domestic Workers Alliance. Poo came to labor organizing through feminism. As a volunteer in a domestic violence shelter for Asian immigrant women, she explains, she realized that it was women who had economic opportunities who were able to break the cycle of violence. She brings a sharp gender analysis to the struggle for respect and better treatment for the workers, mostly women, who “make all other work possible.”

“Society has devalued that work over time,” she notes of the cleaning, caring, cooking, and other work domestic workers perform, largely hidden from public view, “and we think that that has a lot to do with who’s done the work.”

This argument was at the root of the fight for access to employment outside of the “pink-collar” fields. To be trapped in women’s jobs was to be forever trapped in a certain vision of femininity. Breaking out of “women’s work” was a form of breaking through the “feminine mystique” that Betty Friedan decried. But that work still needs to be done, and, Poo notes, the conditions that have long defined domestic work and service work—instability, lack of training, lack of career pathways, low pay—are now increasingly the reality for all American workers, not just women. When we focus on equal access at the top, we miss out the real story, which historian Bethany Moreton points out, “is not ‘Oh wow, women get to be lawyers,’ but that men get to be casualized clerks.”

Question 1

Which of the following statements can be inferred from the passage?

Question 2

Women stand to lose the most during the recession and in its immediate aftermath for all of the following reasons, EXCEPT:

Question 3

Ai-Jen Poo would agree with which of the following statements?

Question 4

"Breaking out of women’s work was a form of breaking through the feminine mystique". Which of the following statements best captures the sense of this statement?

Instructions for the set:

Read the passage carefully and answer the following questions:

In the early darkness of a midsummer Monday in 1945, some of the world’s most brilliant scientists and engineers prepared to detonate the first nuclear explosive device. They called it “the Gadget.” At 5:29 a.m., a flash erupted so intensely bright that it briefly blinded even through the lenses. One of the scientists, J. Robert Oppenheimer, thought of the Hindu scriptures that had inspired him since youth: “If the radiance of a thousand suns were to burst at once into the sky, that would be like the splendour of the mighty one.” Oppenheimer felt the weight of the purpose to which this awesome power — roughly equivalent to 44 million pounds of TNT, plus lethal radioactivity — would be put, and another line from the Bhagavad-Gita sprang to mind. “I am become Death, the destroyer of worlds.”

Three weeks later, on Aug. 6, a device on the same now-proven principles fell...over Hiroshima, Japan. Three days after that, on Aug. 9, a third bomb, essentially identical to the Gadget, was dropped on the port city of Nagasaki.

Journalists announce and discard eras as casually as bingo numbers, but a true historical dividing line was blasted into the sands of New Mexico. Before the blinding flash, only gods destroyed worlds. Then the power passed into human hands. That dividing line is so stark that it obscures the long path that led to our nuclear fate. The Bomb is often presented as a discrete choice, an option unmoored from history, unrelated to the countless choices and decisions, accidents and discoveries, that produced it. A simple yes or no. However, the Bomb is better understood as the terrible yet logical — and probably inevitable — result of a chain of developments in science, technology and the nature of war going back centuries. These developments moved in one direction only, toward ever more fearsome weapons and ever more catastrophic wars. This momentum made the nuclear age unavoidable.

Henry Stimson, the elderly secretary of war, alluded to this inevitability in April 1945. His memo to Truman brought the new president into the tight circle of secrecy around the development of “the most terrible weapon ever known in human history,” as Stimson put it. The Allies were given no choice but to outrace the Germans and Japanese and Soviets to this weapon, for its scientific basis was known to physicists worldwide from the earliest days of the war. Unfortunately, the world’s “moral advancement” had not accelerated to keep pace with its technical proficiency, Stimson added ruefully, thus: “modern civilization might be completely destroyed.”

In the 75 years since, the famous Doomsday Clock of the Bulletin of the Atomic Scientists has never permitted much reason for hope...yet, the worst-case scenario remains in abeyance. What’s more, while the Bomb has not ended all wars, its menacing umbrella has spared the world from another industrial-scale conflagration among great powers. This is no small thing. During the last 31 years before the nuclear age, warfare consumed more than 100 million lives, the majority of them civilians. The only thing worse than Hiroshima and Nagasaki was the unspeakable brutality that preceded them.

In the New Mexico desert, perhaps — perhaps — the world finally found a weapon awful enough to end the escalation.

Question 5

Why, according to the passage, was our nuclear fate inexorable?

Question 6

According to the passage, all of the following are true except:

I. The scientists needed a green light from Truman to proceed with the Bomb's development, which is why he was brought into their tight circle.

II. The testing of the first nuclear explosive device was done three months before the bombing of Hiroshima.

III. Physicists around the world were aware of the science that goes behind the making of a nuclear weapon right from the initial days of the war.

Question 7

What does 'to end the escalation' in the last sentence of the passage refer to?

Question 8

All of the following can be inferred from the passage except:

I. According to the author, the bombing of Hiroshima and Nagasaki was necessary to stop smaller wars that mainly killed civilians.

II. Because of America's efforts towards making weapons ever more lethal, the Doomsday Clock has never given us a reason for hope.

III. The Allies had to outrace the Axis powers in building the nuclear weapon because they were dedicated to restoring world peace.

IV. The Hindu scriptures, especially Bhagavad-Gita, inspired Oppenheimer to become a scientist since his youth.

Question 9

Why did Oppenheimer recall the quote "I am become Death, the destroyer of worlds" after the detonation of the first nuclear explosive device?

Instructions for the set:

Read the passage carefully and answer the following questions:

It’s been two decades since the Human Genome Project first unveiled a rough draft of our genetic instruction book. The promise of that medical moon shot was that doctors would soon be able to look at an individual’s DNA and prescribe the right medicines for that person’s illness or even prevent certain diseases. That promise, known as precision medicine, has yet to be fulfilled in any widespread way. True, researchers are getting clues about some genetic variants linked to certain conditions and some that affect how drugs work in the body. But many of those advances have benefited just one group: people whose ancestral roots stem from Europe. In other words, white people.

Instead of a truly human genome that represents everyone, “what we have is essentially a European genome,” says Constance Hilliard, an evolutionary historian at the University of North Texas in Denton. “That data doesn’t work for anybody apart from people of European ancestry.” She’s talking about more than the Human Genome Project’s reference genome. That database is just one of many that researchers are using to develop precision medicine strategies. Often those genetic databases draw on data mainly from white participants. But race isn’t the issue. The problem is that collectively, those data add up to a catalog of genetic variants that don’t represent the full range of human genetic diversity.

One solution is to make customized reference genomes for populations whose members die from cancer or heart disease at higher rates than other groups, for example, or who face other worse health outcomes, Hilliard suggests. Hilliard’s hypothesis is that precision medicine, which tailors treatments based on a person’s genetic data, lifestyle, environment and physiology, is more likely to succeed when researchers consider the histories of groups that have worse health outcomes.

And the more specific the better. For instance, African Americans who descended from enslaved people have geographic and ecological origins as well as evolutionary and social histories distinct from those of recent African immigrants to the United States. Those histories have left stamps in the DNA that can make a difference in people’s health today. The same goes for Indigenous people from various parts of the world and Latino people from Mexico versus the Caribbean or Central or South America.

Results of a survey conducted by Science News revealed that one big drawback to Hilliard’s proposal may be social rather than scientific. Many respondents expressed concern that even well-intentioned scientists might do research that ultimately increases bias and discrimination toward certain groups. As one respondent put it, “The idea of diversity is being stretched into an arena where racial differences will be emphasized and commonalities minimized. The fear is that any differences that are found would be exploited by those who want to denigrate others. This is truly the entry to a racist philosophy.” Indeed, the Chinese government has come under fire for using DNA to identify members of the Uighur Muslim ethnic group, singling them out for surveillance and sending some to “reeducation camps.”

Hilliard says that the argument that minorities become more vulnerable when they open themselves to genetic research is valid. “Genomics, like nuclear fusion, can be weaponized and dangerous,” she says in response to respondents' concerns. “Minorities can choose to be left out of the genomic revolution or they can make full use of it,” by adding their genetic data to the mix.

Question 10

Which of the following statements can be inferred from the passage?

Question 11

Which of the following statements is Constance Hilliard least likely to agree with?

Question 12

The central point in the fifth paragraph is that

Question 13

"But many of those advances have benefited just one group: people whose ancestral roots stem from Europe. In other words, white people." Which of the following cannot be inferred from the statement?

Question 14

Hilliard likens genomics to nuclear fusion for which one of the following reasons?

Instructions for the set:

Read the passage carefully and answer the following questions:

Cryptocurrencies have long been heralded as the future of finance, but it wasn’t until 2020 that it finally caught on to an old idea: making money with money. In the crypto world, decentralized finance (or DeFi) encompasses a wide array of blockchain-based applications intended to enhance cryptocurrency holders’ returns without relying on intermediaries — to earn the kind of passive returns an investor might get from a savings account, a Treasury bill, or an Apple Inc. bond.

The idea seems to be catching fire: Deposits in DeFi applications grew from about $1 billion in June to just under $40 billion by late January 2021, suggesting that DeFi could be a major element of crypto from here on out. In the tradition of disruptive innovations — as Clayton Christensen envisioned them — DeFi can be the evolution of blockchain technology that might launch it into mainstream.

The premise of DeFi is simple: Fix the longstanding inefficiency in crypto finance of capital being kept idle at a nonzero opportunity cost. Now, most investors buy crypto with the hope that the value of the currency itself will rise, as Bitcoin has. In general, that strategy has worked just fine. The value of cryptocurrencies has appreciated so rapidly that there just wasn’t much incentive to worry about gains of a few percent here and there.

But the recent rise of stablecoins, which are designed to keep their value constant, has changed that calculation. The combined market cap of stablecoins such as Terra and USDC has more than quadrupled in 2020. Now, vast passive income opportunities are being awakened by DeFi.

The appeal of a lower-risk approach to crypto is obvious and has the potential to expand the pool of investors. For the first time, it’s possible to be compensated for owning cryptos (even in the absence of price appreciation), which brings real, tangible utilities to digital currencies and changes the narrative of an asset class whose sole purpose used to be about being sold at a higher price. Therefore, many of the DeFi protocols today might have the potential to become big and bold enough to rival their centralized counterparts, while staying true to their decentralized roots. Furthermore, with volatility out of the picture and the promise of more stable returns, institutional investors are now considering crypto as part of their investments in alternatives.

The search for passive returns on crypto assets, called “yield farming,” is already taking shape on a number of new lending platforms. Compound Labs has launched one of the biggest DeFi lending platforms, where users can now borrow and lend any cryptocurrency on a short-term basis at algorithmically determined rates. A prototypical yield farmer moves assets around pools on Compound, constantly chasing the pool offering the highest annual percentage yield (APY). Practically, it echoes a strategy in traditional finance — a foreign currency carry trade — where a trader seeks to borrow the currency charging a lower interest rate and lend the one offering a higher return.

Crypto yield farming, however, offers more incentives. For instance, by depositing stablecoins into a digital account, investors would be rewarded in at least two ways. First, they receive the APY on their deposits. Second, and more importantly, certain protocols offer an additional subsidy, in the form of a new token, on top of the yield that it charges the borrower and pays to the lender.

Question 15

Which of the following statements cannot be inferred from the passage?

I. DeFi is a form of finance that does not rely on financial intermediaries.

II. Capital invested in crypto finance would have zero opportunity cost with increased deposits in DeFi applications.

III. Deposits in DeFi applications would make cryptocurrency investments free from volatility.

IV. People follow traditional financial strategies on DeFi lending platforms.

Question 16

Which of the following would the author cite as a reason for activity in DeFi applications being nascent until recently?

Show Answer


Question 17

According to the passage, a 'cryptocurrency' carry trade would necessarily involve

I. Borrowing from the pool charging a lower algorithmically determined rate.

II. Converting the borrowed cryptocurrency into another cryptocurrency offering a higher algorithmically determined rate and lending it out.

III. Not collecting the return from cryptocurrency lent out until the exchange rate resets to the value at the time of borrowing.

IV. Profit or loss due to the difference between the borrowing and lending rates.

Show Answer


Question 18

The primary purpose of the passage is to

Show Answer


Question 19

Which of the following statements cannot be inferred from the fifth paragraph ("The appeal....in alternatives")?

I. With more institutional investors considering crypto as part of their investments in alternatives, speculative investment in crypto would come down.

II. Trading volume in some of the DeFi platforms could soon surpass the trading volume seen in centralized platforms.

III. DeFi has opened up lower-risk investment opportunities in crypto and could lead to the migration of a large number of investors from centralized platforms to decentralized platforms.

IV. With the increasing popularity of DeFi, price appreciation of cryptocurrencies will no longer be a major motivating factor.

Show Answer

Question 20

Four sentences are given below. These sentences, when rearranged in proper order, form a logical and meaningful paragraph. Rearrange the sentences and enter the correct order as the answer.

1. Moreover, as a physician, to write against the principle of love would be to criticize my own profession since medicine is the science by which we understand the “loves” of the body.

2. It is the most noble and powerful as well as the most ancient of all the gods forged in the pagan imagination.

3. At first it would appear a vain and useless enterprise to give instruction on how to cure love, since poets, philosophers, and theologians have acknowledged it to be the cause of all good.

4. It is a chart-in-brief of justice, of temperance, of strength and wisdom, the author of medicine, poetry, and music — of all the liberal arts.

Show Answer

Question 21

The passage given below is followed by four summaries. Choose the option that best captures the author’s position.

The magnitude of plastic packaging that is used and casually discarded — air pillows, Bubble Wrap, shrink wrap, envelopes, bags — portends gloomy consequences. These single-use items are primarily made from polyethylene, though vinyl is also used. In marine environments, this plastic waste can cause disease and death for coral, fish, seabirds and marine mammals. Plastic debris is often mistaken for food, and microplastics release chemical toxins as they degrade. Data suggests that plastics have infiltrated human food webs and placentas. These plastics have the potential to disrupt the endocrine system, which releases hormones into the bloodstream that help control growth and development during childhood, among many other important processes.

Show Answer

Question 22

Five sentences related to a topic are given below. Four of them can be put together to form a meaningful and coherent short paragraph. Identify the odd one out.

1. The culprit is atmospheric turbulence caused by the mixing of air of different temperatures.

2. Light bends, or refracts, when it travels through different mediums, which is why a straw in a glass of water looks like it leans at a different angle under the water than above it.

3. The same thing happens when light travels through air of different temperatures.

4. The layer of gas between Earth and the rest of the cosmos keeps us alive, but it also constantly changes the path of any photon of light that travels through it.

5. The more turbulent the atmosphere, the worse the seeing.

Show Answer

Question 23

There is a sentence that is missing in the paragraph below. Look at the paragraph and decide where (option 1, 2, 3, or 4) the following sentence would best fit.

Sentence: The debacle began on Joe Biden’s watch.

Paragraph: America’s demand that TikTok sever ties with its Chinese parent, ByteDance, started as a principled national-security policy ......(1)...... It has descended into a seedy free-for-all, with Mr Trump sounding more like an influencer on a TikTok shopping channel than a statesman. Allies of the president are jostling to buy the app in a deal that may not even solve the security problems it was designed to overcome .....(2)..... Last April the then-president signed a bill mandating the app to find a buyer. Otherwise it would be removed from American app stores ......(3)...... Following an objection from TikTok, the Supreme Court upheld the law on January 17th. The ban was due to begin two days later, on Mr Biden’s last day in office ......(4)...... But, perhaps not wanting to be remembered as the president who killed America’s favourite pastime, he opted not to take action, leaving the problem to his successor.

Show Answer

Instruction for set :

Read the passage carefully and answer the following questions:

Recognising our feelings of grief is important, not only because we cannot mourn our losses if we do not acknowledge them, but also because the literature and science of grief offer guidance for how to respond to this pandemic in ways that make psychological healing possible. In his 2008 book, The New Black: Mourning, Melancholia and Depression, the psychoanalyst Darian Leader argues that British society has lost a vital connection to grief, preferring to interpret the pain of unacknowledged or unresolved loss and separation medically, as depression, and opting for what he calls “mental hygiene” - the management of troublesome, superficial symptoms - over the deeper, harder work of mourning. We do not find it easy, in this culture of self-optimisation and life-hacks, to accept that grief is not something you can “get over”, that there is no cure for pain. The act of mourning is not to recover from loss, Leader argues, but rather to find a way to accommodate and live with it. And, if we put off or bypass the work of mourning, the pain of our losses will return to torment us, often in disruptive or unexpected ways.

The anthropologist Geoffrey Gorer argued that the mass deaths of the First World War so overwhelmed British communities that people began to abandon traditional mourning rituals, something that served to transform grief from a communal experience to a private emotion. The pandemic might be accelerating this process, as people are left to mourn alone in lockdown and to pay their final respects over Zoom. And yet, Darian Leader contends that we cannot properly mourn in isolation; mourning is a social task. “A loss, after all, always requires some kind of recognition, some sense that it has been witnessed and made real,” Leader writes. This is why we have such an elemental need to feel heard, why we make the effort to commemorate past conflicts, why post-conflict truth and reconciliation commissions are less about punishment than recognising the crimes. The demands for a public inquiry into the British government’s pandemic response speak to this need, and to another dimension of pandemic grief.

Leader argues that public displays of grief help facilitate individual mourning. In his view, it is through public ceremonies that people are able to access their own, personal grief. This is the function performed by traditions of hiring professional mourners to keen at funerals, and it helps explain why celebrity deaths sometimes unleash an outpouring of grief. The near-hysterical response to the death of Princess Diana in 1997 was not, as some newspapers contended, a mark of “mourning sickness” or “crocodile tears”. Rather, the public mood provided people with a way to access their grief over other, unrelated losses.

Those who study grief often point to the inevitability of pain. When people put off the business of mourning, the pain of loss and separation finds a way to reassert itself. Leader describes the phenomenon of “anniversary symptoms”, the findings that adult hospitalisation dates coincide remarkably with anniversaries of childhood losses, or that GP surgery records reveal that people often return to doctors in the same week or month as their previous visit. “Rather than access their memories, the body commemorates them,” Leader writes.

Question 24

Darian Leader argues that British society has lost a vital connection to grief because

Show Answer


Question 25

Which of the following statements is Leader least likely to agree with?

Show Answer


Question 26

Which of the following statements cannot be inferred from the passage?

Show Answer


Question 27

The author cites the example of post-conflict truth and reconciliation commissions to drive home the point that

Show Answer

Instruction for set :

Read the passage and answer the following questions:

Like other artists, the actor is a kind of shaman. If the audience is lucky, we go with this emotional magician to other worlds and other versions of ourselves. Our enchantment or immersion into another world is not just theoretical, but sensory and emotional. The actor’s imagination has gone into emotional territories of intense feeling before us. Now they guide us like a psychopomp into those emotional territories by recreating them in front of us. Aristotle called this imaginative power phantasia. We might mistakenly think that phantasia is just for artists and entertainers, a rare and special talent, but it’s actually a cognitive faculty that functions in all human beings. The actor might guide us, but it’s our own imagination that enables us to immerse fully into the story. If we activate our power of phantasia, we voluntarily summon up the real emotions we see on stage: fear, anxiety, rage, love and more. In waking life, we see this voluntary phantasia at work but, for many of us, the richest experience of phantasia comes in sleep, when the involuntary imagination awakes in the form of dreams. During sleep, your body is turned off by the temporary paralysis of sleep atonia, but your limbic brain is running hot. In waking life, the limbic system is responsible for many of the basic mammalian survival aspects of our existence: emotions, attention and focus, and is deeply involved in the fight-or-flight response to danger. The dreaming brain isn’t just faking a battle but actually fighting one in our neuroendocrine axis. That’s why we sometimes wake up sweating with our heart racing.

When preparing a role, an actor does not rely on direct observation of human behaviour alone. According to the extended mind theory, humans offload much of who they are into the environment. The philosophers David Chalmers and Andy Clark argue that our minds don’t reside exclusively in our brains or bodies, but extend out into the physical environment (in diaries, maps, calculators and now smartphones, etc). Consequently, you can learn a great deal about someone by spending time in their home - not deducing facts like Sherlock Holmes, but absorbing subtle understandings of character, taste, temperament and life history. When an actor prepares to play a historical figure, he might find deep insights in the extended mind - the written record, the physical environs, the clothing and so on. A small detail can turn the key and open up a real ‘visitation’ from the past. Stage actors ‘read the room’ much as our primate cousins read their social world of dominance. A lifetime of subconsciously reading rooms (reading people) gives artists a rich palette of insights, feelings and behaviours.

For Plato, the imagination produces only illusion, which distracts from reality, itself apprehended by reason. The artist is concerned with producing images, which are merely shadows, reflecting, like a mirror, the surface of things, while Truth lies beyond the sensory world. In the Republic, Plato places imagery and art low on the ladder of knowledge and metaphysics, although, ironically, he tells us this through the imaginary allegory of the cave. By contrast, Aristotle saw imagination as a necessary ingredient of knowledge. Memory is a repository of images and events, but imagination (phantasia) calls up, unites and combines those memories into tools for judgment and decision-making. Imagination constructs alternative scenarios from the raw data of empirical senses, and then our rational faculties can evaluate them and use them to make moral choices, or predict social behaviours, or even build better scientific theories.

Question 28

The author would not agree with which of the following?

I. The part of our brain responsible for our response to danger operates at full capacity even when we are asleep.

II. The brain cannot differentiate between real and imaginary fight-or-flight situations, which is why people sometimes wake up sweating with their heart racing.

III. A good actor enables the audience to experience the character's emotional range without the need to activate their own phantasia.

Show Answer


Question 29

What is the main point of the second paragraph of the passage?

Show Answer


Question 30

Which of the following is true about an actor as per the passage?

Show Answer


Question 31

Which of the following can be inferred from the passage?

Show Answer


Question 32

The irony in the last paragraph of the passage is:

Show Answer

Instruction for set :

Read the passage carefully and answer the following questions:

Déjà vu — French for “already seen” — is a mental sensation of intense familiarity coupled with the awareness that the familiarity is mistaken. It’s a recognition we know is wrong, a memory we know doesn’t exist. This conflict between what we know and what we remember is why déjà vu feels so eerie — almost paranormal or out-of-body. This awareness is very common. Déjà vu is almost impossible to study — people are rarely hooked up to electrodes or undergoing internal scans when they experience it — so most information about the sensation comes from self-reports, which suggest at least two-thirds of people will experience this fleeting mental trickery at some point in their lives. People who travel often or who watch a lot of movies may be more prone to déjà vu than others who don’t. The sensation does not seem to occur before age 8-9 (or perhaps children younger than that don’t have the ability to describe it), and experiences of déjà vu become less common as we age.

But as for why we experience déjà vu at all — that’s less clear. Multiple theories attempt to explain it, with each being a potentially legitimate source of the sensation. Like a physical itch, the mental itch of déjà vu likely has many causes, experts say.

Probably the strongest theory, with some experimental backing, is that the false familiarity isn’t a sign of faulty memory, so much as it’s a sign of a well-functioning brain that actively fact-checks itself. Human memory is notoriously faulty and malleable; this theory holds that déjà vu occurs as our brains’ frontal regions evaluate our memories and flag an error.

Another explanation for déjà vu, with some experimental findings to back it up, is that our stored memories still influence our present perception even if we can’t consciously recall them. A 2012 study that immersed participants in different virtual reality scenes saw most report déjà vu when viewing a scene that appeared similar to a previous one — even if they could not directly recall the earlier scene or its similarity. They just found the new scene inexplicably familiar.

Other explanations for déjà vu are more speculative. One suggests that déjà vu occurs when a familiar object appears incongruously. Seeing known objects or people out of context or unexpectedly is when familiarity strikes us, not seeing them within the usual, expected context. For instance, seeing your building’s security guard at the gate wouldn’t feel familiar — it just is; but seeing him at a restaurant might bring feelings of familiarity, even if you can’t place him. In the moment of out-of-context perception, our brains process the familiarity of known things first, even if we don’t consciously recognize them, and that initial familiarity can color our perception of the whole otherwise-unfamiliar experience. But ultimately, the mechanisms behind the creeping been-here-done-this-before feeling are as mysterious as the sensation itself. One thing scientists know for sure, though: déjà vu becomes more common when we are stressed and tired.

Question 33

According to the author, déjà vu is almost impossible to study because:

Show Answer


Question 34

Which of the following statements CANNOT be inferred from the passage?

Question 35

Which of the following is NOT one of the theories attempting to explain déjà vu?

Question 36

In the first paragraph, what is the "conflict" that the author refers to?

Question 37

Which of the following statements best expresses the overall argument of this passage?

Instruction for set :

Read the passage carefully and answer the following questions:

For traditional Darwinian natural selection to work, the entities in question must display some property or ability that can be inherited, and that results in their having more offspring than the competition. For instance, the first creatures with vision, however fuzzy, were presumably better at avoiding predators and finding mates than the sightless members of their population, and had more surviving progeny for that reason. In technical terms, then, selected entities must exist in populations showing heritable variation in fitness, greater fitness resulting in these entities’ differential reproduction.

Even if inherited properties are the result of undirected or ‘random’ mutation, repeating the selection process over generations will incrementally improve on them. This produces complex adaptations such as the vertebrate eye, with its highly sophisticated function. Light-sensitive areas acquired lenses for focusing and means for distinguishing colours step by advantageous step, ultimately producing modern eyes that are clearly for seeing. So even without an overall purpose, evolution, through selection, creates something that behaves as if it has a goal.

Back in 1979, when Lovelock’s first popular book, Gaia: A New Look at Life on Earth, came out, the wider field of evolutionary biology was becoming a very reductionist discipline. Richard Dawkins’s The Selfish Gene had been published three years earlier, and it promoted a hardcore gene-centrism insisting that we look at genes as the fundamental units of selection - that is, the thing upon which natural selection operates. His claim was that genes were the reproducing entities par excellence, because they are the only things that always replicate and produce enduring lineages. Replication here means making fairly exact one-to-one copies, as genes (and asexual organisms such as bacteria) do. Reproduction, though, is a more inclusive and forgiving term - it’s what we humans and other sexual species do, when we make offspring that resemble both parents, but each only imperfectly. Still, this sloppy process exhibits heritable variation in fitness, and so supports evolution by natural selection.

In recent decades, many theorists have come to understand that there can be reproducing or even replicating entities evolving by natural selection at several levels of the biological hierarchy - not just in the domains of replicating genes and bacteria, or even sexual creatures such as ourselves. They have come to embrace something called multilevel selection theory: the idea that life can be represented as a hierarchy of entities nested together in larger entities, like Russian dolls. As the philosopher of science Peter Godfrey-Smith puts it, ‘genes, cells, social groups and species can all, in principle, enter into change of this kind’.

But to qualify as a thing on which natural selection can operate - a unit of selection - ‘they must be connected by parent-offspring relations; they must have the capacity to reproduce,’ Godfrey-Smith continues. It’s the requirement for reproduction and leaving parent-offspring lineages (lines of descent) we need to focus on here, because they remain essential in traditional formulations. Without reproduction, fitness is undefined, and heritability seems to make no sense. And without lines of descent, at some level, how can we even conceive of natural selection?

Question 38

All of the following statements can be inferred from the passage, EXCEPT:

Question 39

Which of the following could be the reason why the author discusses the example of the vertebrate eye in the second paragraph?

Question 40

Which of the following, if true, would strongly counter Peter Godfrey-Smith’s observations on natural selection?

Question 41

Which of the following is definitely true according to multilevel selection theory?

Question 42

“So even without an overall purpose, evolution, through selection, creates something that behaves as if it has a goal.” Which of the following best captures the essence of this statement?

Question 43

Four sentences are given below. These sentences, when arranged in a proper order, form a logical and meaningful paragraph. Rearrange the sentences and enter the correct order as your answer.

1. It is not possible to do so with any degree of finality, but by an intention of consciousness upon this juxtaposition of ideas.
2. The world war represents not the triumph, but the birth of democracy.
3. How then is it possible to consider or discuss an architecture of democracy without the shadow of a shade?
4. The true ideal of democracy—the rule of a people by the demos, or group soul—is a thing unrealized.

Question 44

The passage given below is followed by four summaries. Choose the option that best captures the author’s position.

Sea levels are rising globally as Earth’s ice sheets melt and as warming sea water expands. But on a local scale, subsidence, or sinking land, can dramatically aggravate the problem. Cities like New Orleans and Jakarta are experiencing very rapid sea level rise relative to their coastlines—the land itself is sinking as the water is rising. An international team of researchers has demonstrated that this one-two punch is more than a local problem. Sinking land makes coastal residents around the world disproportionately vulnerable to rising seas: The typical coastal inhabitant is experiencing a sea level rise rate three to four times higher than the global average.

Question 45

Five sentences related to a topic are given below in a jumbled order. Four of them form a coherent and unified paragraph. Identify the odd sentence that does not go with the four. Key in the number of the option that you choose.

1. ‘Stat’ signaled something measurable, while ‘matic’ advertised free labour; but ‘tron’, above all, indicated control.
2. It was a totem of high modernism, the intellectual and cultural mode that decreed no process or phenomenon was too complex to be grasped, managed and optimized.
3. Like the heraldic shields of ancient knights, these morphemes were painted onto the names of scientific technologies to proclaim one’s history and achievements to friends and enemies alike.
4. The historian Robert Proctor at Stanford University calls the suffix ‘-tron’, along with ‘-matic’ and ‘-stat’, embodied symbols.
5. To gain the suffix was to acquire a proud and optimistic emblem of the electronic and atomic age.


Question 46

There is a sentence that is missing in the paragraph below. Look at the paragraph and decide where (option 1, 2, 3, or 4) the following sentence would best fit.

Sentence : Many have had to leave their homes behind, with more than 1.3 million people being displaced due to the drought.

Passage : Somalia has been dealing with an enormous humanitarian catastrophe, driven by the longest and most severe drought the country has experienced in at least 40 years. ___(1)___. Five consecutive rainy seasons have failed, causing more than 8 million people - almost half of the country’s population - to experience acute food insecurity. ___(2)___. More than 43,000 people are believed to have lost their lives, with half of the lives lost likely being children under five. The damage the drought has caused is far-reaching. ___(3)___. Farmers have lost all their agricultural income, while pastoralists have lost more than 3 million livestock, impoverishing entire communities, and leaving them on the brink of famine. ___(4)___. Some, like the pastoralists, may never be able to go back as their livelihoods have been irreversibly wiped out.


Instruction for set :

Read the passage carefully and answer the following questions:

The history of sport is full of suffering. In 1973, the boxer Muhammad Ali fought with a broken jaw for at least four rounds during his first historic bout with Ken Norton. In 1993, the American footballer Emmitt Smith played the entire second half of an NFL game with a first-degree separated shoulder, his arm hanging limply at his side as he ran for a heroic 168 yards. And in 1997, the basketball player Michael Jordan was delirious with fever when he scored 38 points in Game 5 of the NBA Finals; after the final buzzer, Scottie Pippen had to carry Jordan off the court because he no longer seemed able to support his own body weight.

Why such a drive to suffer and endure? A study by the medical researcher Jonas Tesarz and colleagues at the University of Heidelberg in 2012 found that athletes had significantly higher pain tolerance than normally active people. And yet both groups had similar pain thresholds, the point at which a sensation is recognisable as pain. Training can’t make athletes numb to pain, but it can condition them to tolerate it. And that kind of self-overcoming seems somehow integral to sport itself. And of course, if you’re suffering, the chances are that your opponent is, too. Indifference to pain confers a tactical advantage.

‘I remember the best race I ever had where the pain was almost enjoyable because you see other people hurt more than you,’ one Olympic athlete admitted during a study of pain tolerance. ‘If nothing is going wrong and there are no mechanical problems during the race then sometimes you can just turn the volume up a little higher and then a little higher and other people suffer and you almost enjoy it, even though you are in pain.’

Japanese trainers have gone so far as to enshrine this marriage of pain and athletic discipline in the concept of taibatsu, which translates roughly as ‘corporal punishment’. In his piece on Japanese baseball for The Japan Times last year, Robert Whiting traces the concept to one Suishu Tobita, head coach of the fabled Waseda University team in the 1920s. Tobita advocated ‘a baseball of savage pain and a baseball practice of savage treatment’. Players nicknamed his practice sessions ‘death training’: ‘If the players do not try so hard as to vomit blood in practice,’ he said, ‘then they cannot hope to win games. One must suffer to be good.’ This ethos has survived into the present day. The Japanese-born New York Yankees pitcher Hiroki Kuroda has admitted that there were times in elementary school when his buttocks were beaten with a baseball bat until he couldn’t sit down.

In one sense, then, it appears that sport is largely about ignoring pain. And yet pain returns to assert itself in a strange and striking way when we look at the broader category of competitive play. In a way, pain is one of the first games we learn. We live in an inverse relation to it, claiming as ideal any form of civilisation in which the possibility of experiencing pain is minimised. It is the first and most fundamental rule we learn to follow through free will, something that roots our lives in an inescapable game-like quality. We are always ruled by pain, and those capable of breaking its hold for a few moments become our heroes, role models, and victors.

Question 47

Why does the author cite the examples of several sportspersons in the first paragraph?

Question 48

Which of the following is NOT a valid inference based on the passage?

Question 49

The author mentions the Japanese-born New York Yankees pitcher Hiroki Kuroda to

Question 50

The central idea of the passage is that

Question 51

What is the primary purpose of the final paragraph?

Instruction for set :

Read the passage carefully and answer the following questions:

Mythology remains important in Western culture. Take, for instance, the role model of the hero, of contemporary revolutionaries, martyrs and dictators. These ideal figures exemplify models of human achievement. Similarly, notions of salvation, progress and ethics are so constitutive of our notions of reality that they’re often communicated through the format of mythology. There’s a surfeit of cultural products that fulfil the function of myth, whereby characters and stories give us the means to understand the world we live in. From superhero comic books to the obscure immanence of modern art, from visions of paradisiacal vacations to computer games and the self-mythologising of social media production, we seek a higher ground beyond the banal and the profane. We’ve even replaced the effervescent experience of sacred rites...in our engagement with art, drugs, cinema, rock music and all-night dance parties. Lastly, individuals have developed their own ways to create self-narratives that include mythical transitions in pilgrimages or personal quests to their ancestral lands. Likewise, some seek inner spaces wherein faith and meaning can be transformed into experience.

To prepare for our exploration of contemporary mythology, we can look back at civilisations and consider the function of the stories they told. The story of the flood, for example, recurs in early urban societies, marking a crisis in human-divine relations and man’s experience of gradual self-reliance and separation from nature. During the Axial Age (800-200 BCE), by contrast, faith developed in an environment of early trade economies, and we observe a concern with individual conscience, morality, compassion and a tendency to look within. According to Karen Armstrong’s A Short History of Myth (2005), these Axial myths of interiority indicate that people felt they no longer shared the same nature as the gods, and that the supreme reality had become impossibly difficult to access. These myths were a response to the loss of previous notions of social order, cosmology and human good; they represented ways to portray these social transformations in macrocosmic stories and reflected how people tried to make sense of their rapidly changing world.

What constitutes a mythology? It’s an organised canon of beliefs that explains the state of the world. It also delivers an origin story - such as the Hindu Laws of Manu or the Biblical creation story - that creates a setting for how we experience the world. In fact, for Eliade, all myths provided an explanation of the world by virtue of giving an account of where things came from. If all mythologies are origin stories in this sense, what are the origin stories suggested by psychology? Two original elements of human nature are explained in its lore: the story of personhood - that is, what it means to be an individual and have an identity - and, secondly, the story of our physical constitution in the brain.

Contemporary psychology is a form of mythology insofar as it is an attempt to succor our need to believe in stories that provide a sense of value and signification in the context of secular modernity. The ways in which psychology is used - for example in experiments or self-help literature or personality tests or brain scans - are means of providing rituals to enact the myths of personhood and materialism.

Question 52

Which of the following statements about mythology cannot be inferred from the passage?

Question 53

The author cites the examples of the story of the flood and the myth of interiority to drive home the point that

Show Answer

Question 54

Why does the author refer to contemporary psychology as a form of mythology?

Show Answer

Question 55

Which of the following statements about human behaviour cannot be inferred from the first paragraph?

Show Answer

Question 56

The author cites the examples of psychological experiments, self-help literature, brain scans and personality tests because

Show Answer

Instruction for set:

Read the passage carefully and answer the questions that follow:

Jesus said the truth will set us free. Francis Bacon said knowledge is power. Yet to recognise something as true is to be influenced by evidence or affected by the world - and in many ways knowledge limits our freedom. Climate change science calls us to alter our way of life and make sacrifices in order to avoid disaster. The urge to reject this science is strong: people try to find weak points in theories and look away from empirical evidence to maintain their freedom to eat meat or drive to work. Of course, critical scrutiny is crucial in science and an intrinsic part of scientific progress. The pursuit of knowledge goes hand in hand with doubt. The more intent a person is on obtaining knowledge, the less likely she is to have firm beliefs. That is why philosophy - the love of truth - often makes us more sceptical rather than knowledgeable. Socrates liked to say that he knew nothing except the fact that he knew nothing.

This doesn’t mean that all doubt is sound or that all scepticism is motivated by a love of knowledge. There is a form of scepticism which cannot be taken as a scientific or otherwise theoretical attitude at all. So although climate science denial presents itself as a kind of doubt, based on an alternative interpretation of evidence, much of it should be understood as a practical attitude. It is not interested in knowledge to begin with. Instead, it seeks freedom from knowledge.

That is how the 19th-century Danish philosopher Søren Kierkegaard understood Socrates’s self-proclaimed ignorance: it was a rebellion against knowledge and an assertion of himself against objective reality. Kierkegaard called this attitude “irony” and he distinguished it carefully from doubt. Doubt is something we suffer insofar as we are invested in the truth but find it hard to verify our beliefs. The doubter feels alienated from reality and wants to get back in touch with it. The ironist, on the other hand, triumphs in this alienation. He does not love the truth; he loves the freedom that comes from not believing.

This notion of irony as a practical attitude different from doubt can be applied to a lot of climate science scepticism, such as cherry-picking a few dozen failed scientific predictions in order to reject all ecological science. Of course, irony manifests itself in a great deal of political thought, which has a thorny relation to facts, nature and culture. We see it at work when advocates of free immigration assert that national borders are purely imaginary and then infer from this that it is illegitimate to deny anyone entry into a territory.

Consider too the fierce resistance of some feminists to research in evolutionary biology that finds an innate basis for psychological and behavioural differences between the sexes. Chanda Prescod-Weinstein, for example, casts doubt on all of science by way of discrediting such findings. The discipline of biology, she points out, has a history of “encoding and justifying bias” by purporting to prove that women and non-Europeans are intellectually inferior to white men. But on the issue of gender differences, Prescod-Weinstein is, like Kierkegaard’s Socrates, happy to trade knowledge for freedom.

Question 57

Which of the following statements is the author least likely to agree with?

Show Answer

Question 58

Who among the following would the author not tag as 'Kierkegaard's Socrates'?

Show Answer

Question 59

A doubter and an ironist differ in all of the following ways, EXCEPT

Show Answer

Question 60

The purpose of the first two paragraphs is

Show Answer

Question 61

Which of the following would cast the most doubt on the label of 'ironist' on Prescod-Weinstein?

Show Answer

Instruction for set:

Read the passage carefully and answer the following questions:

When investigating witchcraft, one needs to differentiate between real and imaginary magic in the early modern period. If we want to understand the connection between the imaginary magic of the witches and economic behaviour, we need to deal with the connection between the economy and the real magic practised by ‘common’ people. In pre-industrial Europe, magic was a part of everyday life, very much like religion. People didn’t just believe in the efficacy of magic, they actively tried to use magic themselves. Simple forms of divination and healing magic were common, as was magic related to agriculture. The peasant household used divination to find out if the time was right for certain agricultural activities. Charms were supposed to keep the livestock in good health. Urban artisans and merchants also used economic magic to increase their wealth.

Of all the forms of magic, magical treasure-hunting had the greatest economic significance. Treasure hunters drew on a vast magical arsenal. They had spell books of any description, divining rods available in any kind of wood, amulets to protect them against evil spirits, and lead tablets etched with magical signs. To the utter horror of the ecclesiastical authorities, they invoked angels and saints. Treasure hunters talked to ghosts. Some of them even tried to conjure up demons. However, common people simply didn’t see treasure magic as witchcraft, and most of the judges agreed.

Separate from these real forms of magic, there was the imaginary magic of the witches. Nobody was ever (or could ever be) guilty of witchcraft in the full sense of the word, which was defined by the late Middle Ages as a crime that consisted of five elements: a pact with the devil; sexual intercourse with demons; the magical flight (on a broomstick or a similar device); the witches’ dance (often referred to by the antisemitic term ‘witches’ sabbath’); and malevolent magic. Early modern Europe and Britain treated witchcraft as a capital crime.

At first glance, the relation between the economy and the imaginary magic of the witches seems to be entirely negative. Witches were often accused of attacking livestock. They magically made frost, storm and hail, and thereby caused crop failure. Indeed, their weather magic was said to endanger the economy of entire regions. Still, at least in the majority of the witch trials on the European continent, the witches didn’t profit from their magic. Weather magic especially looked like a strange form of auto-aggression because the hailstorms the witches supposedly conjured up damaged their own fields as well. As a rule, the pact with the devil as it appears in trial records was not a contract like that of Goethe’s Faust, which was mostly about the wishes of the magician. Rather, it stated simply that the witch submitted to the will of the demon. She did what a demon told her and became the instrument of the demon’s abyssal hatred of all creation. Witchcraft was mostly about destruction for destruction’s sake, not about the personal interests and wishes of the witches, let alone their economic advantage.

Question 62

Which one of the following best describes what the passage is trying to do?

Show Answer

Question 63

Which of the following statements about magical treasure hunting and/or treasure hunters cannot be inferred?

Show Answer

Question 64

"Nobody was ever (or could ever be) guilty of witchcraft in the full sense of the word." Which of the following is the most likely reason for the author making this claim?

Show Answer

Question 65

In pre-industrial Europe, magic was practised for all of the following purposes, EXCEPT:

Show Answer

Question 66

Which of the following could be the reason why the author uses the 'Goethe's Faust' example in the last paragraph?

Show Answer

Question 67

The four sentences (labelled 1, 2, 3 and 4) below, when properly sequenced, would yield a coherent paragraph. Decide on the proper order of the sentences and key in the sequence of the four numbers as your answer:

1. Various industrial sectors including retail, transit systems, enterprises, educational institutions, event organizing, finance, travel etc. have now started leveraging these beacon solutions to track and communicate with their customers.
2. A beacon fixed on to a shop wall enables the retailer to assess the proximity of the customer, and come up with a much targeted or personalized communication like offers, discounts and combos on products in each shelf.
3. Smartphones or other mobile devices can capture the beacon signals, and distance can be estimated by measuring received signal strength.
4. Beacons are tiny and inexpensive, micro-location-based technology devices that can send radio frequency signals and notify nearby Bluetooth devices of their presence and transmit information.
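
Sentence 3 above mentions estimating distance from received signal strength (RSSI). As a side note for the curious, the sketch below shows how such an estimate is commonly made. It is a minimal, hypothetical Python illustration using the standard log-distance path-loss model; the calibrated 1 m power and the path-loss exponent are assumed values for illustration, not anything specified in the question.

```python
# Hypothetical sketch: estimating distance to a Bluetooth beacon from RSSI,
# using the log-distance path-loss model. All constants are assumptions.

def estimate_distance(rssi_dbm: float,
                      tx_power_dbm: float = -59.0,      # assumed RSSI at 1 m
                      path_loss_exponent: float = 2.0   # 2.0 = free space
                      ) -> float:
    """Approximate distance (in metres) to a beacon from one RSSI reading."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# Example: a phone reads -75 dBm from a shelf beacon.
print(f"{estimate_distance(-75.0):.1f} m")  # ~6.3 m under these assumptions
```

In practice a single RSSI reading is noisy, so real deployments typically average or filter several readings before deciding how close a customer is.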


Question 68

The passage given below is followed by four alternate summaries. Choose the option that best captures the essence of the passage.

Aesthetic political representation urges us to realize that ‘the representative has autonomy with regard to the people represented’ but autonomy then is not an excuse to abandon one’s responsibility. Aesthetic autonomy requires cultivation of ‘disinterestedness’ on the part of actors which is not indifference. To have disinterestedness, that is, to have comportment towards the beautiful that is devoid of all ulterior references to use - requires a kind of aesthetic commitment; it is the liberation of ourselves for the release of what has proper worth only in itself.


Question 69

Five sentences related to a topic are given below in a jumbled order. Four of them form a coherent and unified paragraph. Identify the odd sentence that does not go with the four. Key in the number of the option that you choose.
1. Socrates told us that ‘the unexamined life is not worth living’ and that to ‘know thyself’ is the path to true wisdom
2. It suggests that you should adopt an ancient rhetorical method favored by the likes of Julius Caesar and known as ‘illeism’ - or speaking about yourself in the third person.
3. Research has shown that people who are prone to rumination also often suffer from impaired decision making under pressure and are at a substantially increased risk of depression.
4. Simple rumination - the process of churning your concerns around in your head - is not the way to achieve self-realization.
5. The idea is that this small change in perspective can clear your emotional fog, allowing you to see past your biases.


Question 70

There is a sentence that is missing in the paragraph below. Look at the paragraph and decide where (option 1, 2, 3, or 4) the following sentence would best fit.

Sentence: Comprehending a wide range of emotions, Renaissance music nevertheless portrayed all emotions in a balanced and moderate fashion.

Paragraph: A volume of translated Italian madrigals was published in London in 1588. This sudden public interest facilitated a surge of English madrigal writing as well as a spurt of other secular music writing and publication. ___(1)___. This music boom lasted for thirty years and was as much a golden age of music as British literature was with Shakespeare and Queen Elizabeth I. ___(2)___. The rebirth in both literature and music originated in Italy and migrated to England; the English madrigal became more humorous and lighter in England as compared to Italy. Renaissance music was mostly polyphonic in texture. ___(3)___. Extreme use of and contrasts in dynamics, rhythm, and tone colour do not occur. ___(4)___. The rhythms in Renaissance music tend to have a smooth, soft flow instead of a sharp, well-defined pulse of accents.


Instruction for set:

Read the passage carefully and answer the following questions:

Authenticity, which in its modern sense dates back to the Romantics of the late 18th century, has never had a single meaning. In much of our everyday usage, the term means something more or less analogous to the way that we speak of an object being authentic - as the genuine article, not a copy or a fake. We think of people as authentic when they’re being themselves, consistent with their own personality and without pretence or pretending.

But, as an ethical ideal...authenticity means more than self-consistency or a lack of pretentiousness. It also concerns features of the inner life that define us. While there is no one ‘essence’ of authenticity, the ideal has often been expressed as a commitment to being true to yourself, and ordering your soul and living your life so as to give faithful expression to your individuality, cherished projects and deepest convictions. Authenticity in this ethical sense also had a critical edge, standing against and challenging the utilitarian practices and conformist tendencies of the conventional social and economic order. Society erects barriers that the authentic person must break through. Finding your true self means self-reflection, engaging in candid self-appraisal and seeking ‘genuine self-knowledge’, in the words of the American philosopher Charles Guignon.

In his book The Society of Singularities (2017), the German social theorist Andreas Reckwitz argues that a larger ‘authenticity revolution’ has swept the world during the past 40 years. Authenticity has become an obligation. Reckwitz captures this conundrum with the paradoxical concept of ‘performative authenticity’. Authenticity, in this sense, is the way to be because to be ‘somebody’ is to develop your unique self, your differentness from others and your noninterchangeable life.

Performative authenticity is tied to economic success and social prestige, which means - and this is a further paradoxical feature - that your specialness and self-realisation have to be performed. In order for people to distinguish themselves, they must seek attention and visibility, and positively affect others with their self-representations, personal characteristics and quality of life. In doing so, they have to take great care that their performance isn’t perceived as staged.

Performative authenticity shares with older, inner conceptions of authenticity the notion that each of us has our own unique way of being in the world. But the concepts otherwise diverge. The inner ideal aims at a way of being that is unfeigned and without illusions. It resists the cultivation of an affirming audience, because being a ‘whole’ person, with a noninstrumental relation to self and others, is often at odds with the demands of society. In the performative mode, by contrast, this tension between self and society disappears. Self-elaboration still requires self-examination, but not necessarily of any inner or even aesthetic kind.

Performing your difference isn’t necessarily a zero-sum game. Markets and digital technologies have greatly expanded the infrastructure of possibilities. It is, however, a competition for scarce attention that requires continuous assessment and feedback, and offers little respite. Even if you pull off a good performance, there’s a need to be flexible, to be ready to reinvent your difference. There’s always the danger of becoming inconspicuous.

Question 71

Which of the following best describes the reason why the author terms the concept 'performative authenticity' paradoxical?

Show Answer

Question 72

According to Charles Guignon, finding one's true self involves all of the following, EXCEPT:

Show Answer

Question 73

Which of the following statements is the author LEAST likely to agree with?

Question 74

The inner and performative modes of authenticity differ in all of the following ways, EXCEPT:

Question 75

Which of the following statements about the performative mode of authenticity can be inferred from the passage?

I. The uniqueness associated with the performative mode may not be durable.

II. Performative authenticity focuses on differentiating traits rather than useful personal traits.

III. Performative authenticity is closely linked to social status.

IV. In order to achieve differentiation, people must strive to be conspicuous.

Instruction for set:

Read the passage carefully and answer the following questions:

Charles Darwin thought the mental capacities of animals and people differed only in degree, not kind—a natural conclusion to reach when armed with the radical new belief that the one evolved from the other. His last great book, “The Expression of the Emotions in Man and Animals”, examined joy, love and grief in birds, domestic animals and primates as well as in various human races. But Darwin’s attitude to animals—easily shared by people in everyday contact with dogs, horses, even mice—ran contrary to a long tradition in European thought which held that animals had no minds at all. This way of thinking stemmed from the argument of René Descartes, a great 17th-century philosopher, that people were creatures of reason, linked to the mind of God, while animals were merely machines made of flesh—living robots which, in the words of Nicolas Malebranche, one of his followers, “eat without pleasure, cry without pain, grow without knowing it: they desire nothing, fear nothing, know nothing.”

For much of the 20th century biology cleaved closer to Descartes than to Darwin. Students of animal behaviour did not rule out the possibility that animals had minds but thought the question almost irrelevant since it was impossible to answer. One could study an organism’s inputs (such as food or the environment) or outputs (its behaviour). But the organism itself remained a black box: unobservable things such as emotions or thoughts were beyond the scope of objective inquiry.

In the past 40 years, however, a wide range of work both in the field and the lab has pushed the consensus away from strict behaviourism and towards that Darwin-friendly view. Progress has not been easy or quick; as the behaviourists warned, both sorts of evidence can be misleading. Laboratory tests can be rigorous, but are inevitably based on animals which may not behave as they do in the wild. Field observations can be dismissed as anecdotal. Running them for years or decades and on a large scale goes some way to guarding against that problem, but such studies are rare.

Nevertheless, most scientists...say with confidence that some animals process information and express emotions in ways that are accompanied by conscious mental experience. They agree that animals...have complex mental capacities; that a few species have attributes once thought to be unique to people, such as the ability to give objects names and use tools; and that a handful of animals—primates, corvids (the crow family) and cetaceans (whales and dolphins)—have something close to what in humans is seen as culture, in that they develop distinctive ways of doing things which are passed down by imitation and example. Dolphins have been found to imitate the behaviour of other dolphins in their group. No animals have all the attributes of human minds; but almost all the attributes of human minds are found in some animal or other.

Brain mapping reveals that the neurological processes underlying what look like emotions in rats are similar to those behind what clearly are emotions in humans. As a group of neuroscientists seeking to sum the field up put it in 2012, “Humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures...also possess these neurological substrates.”

Question 76

Which of the following statements can be inferred from the passage?

I. There is now a consensus among most scientists that some animals exhibit most of the attributes characteristic of human minds.

II. Some animals are self-aware and are also conscious of their social milieu.

III. Some animal minds are capable of imitative behaviour.

IV. People who rarely came in contact with animals disregarded Darwin's views on animal minds.

Question 77

According to Darwin, humans and animals both

Question 78

Which of the following views of Descartes and/or his followers cannot be inferred from the passage?

Question 79

For much of the 20th century, students of animal behaviour opined that

Question 80

Which of the following is a reason why the behaviourists are concerned about the evidence supporting the Darwin-friendly view?

Instruction for set:

Read the passage carefully and answer the following questions:

Fantasy politics starts from the expectation that wishes should come true, that the best outcome imaginable is not just possible but overwhelmingly likely. The great appeal of fantasy politics is that it puts you in complete control. Using the power of your imagination, you get to control not only what you will do but also how everyone else will react. Everyone recognises your awesomeness and competes to serve your interests, whether motivated by admiration or fear.

Why is fantasy politics so popular these days? One reason is that it is so much easier than real politics. In real politics, we try to address multiple, intersecting, complicated collective action problems - like the high cost of housing or sexism in the labour market - while at the same time grappling with the deep diversity of beliefs, values, and interests within and between societies. Real politics is a difficult and time-consuming activity that usually requires dissatisfactory compromise with reality and what other people want. It is much easier to make-believe our way to our favoured outcomes.

Fantasy politics is also much more inherently satisfying than real politics. It gives us the opportunity to express our political values and loyalties and this is something that feels good in itself and has an immediate psychic payoff, regardless of whether anything we are doing is actually contributing to bringing about the outcome we claim to want. Raising the stakes in our imagination, for example by elevating a mundane legislative election to a decisive battle between good and evil, immediately makes us feel more vital and significant. Conspiracy theorising similarly raises the stakes, casting us in the role of a band of heroes, such as QAnon followers, fighting to bring to light and bring down depraved evil. All this contrasts with the meagre psychic rewards of participating in real politics, as merely one voice among millions of equals no more special than anyone else.

The psychic benefits of fantasy politics seem especially attractive to those who feel neglected and unheard by the political system, such as the white working class in towns left behind by the modern economy. For these losers, animated by grievance, fantasy politics offers their only way to feel politically significant. Moreover, like the victim’s dreams of revenge against their bully, these resentment-driven fantasies are not kind. In the mid-term, the failure of populist fantasies like Brexit (a classic example of fantasy politics) will no doubt reinforce their followers' cynicism and alienation.

It should also be mentioned that fantasy politics is everywhere these days because fantasy itself is so popular. The kookiness of America’s gun rights movement, for example, has a lot to do with its animating hero fantasy of the regular guy standing up against the bad guys or evil government. In these movie screenplays that they write themselves, the good guy never misses and the bad guys never manage to shoot straight; and when the police arrive they can immediately tell who the good guys are.

Finally, demand creates its own supply. Political entrepreneurs like Trump or Farage or Boris come out of the woodwork and start pitching more fantasy products for voters to buy; so long as large numbers of our fellow citizens are disinterested in outcomes and prefer wallowing in fantasy, populist politicians will make hay.

Question 81

The author ascribes the pervasiveness of fantasy politics today to all of the following factors EXCEPT:

Question 82

The author is likely to agree with all of the following statements, EXCEPT

Question 83

Which of the following is a valid inference from the passage?

Question 84

The author’s tone towards followers of fantasy politics can best be described as being:

Question 85

Why does the author cite QAnon followers in the passage?

Instruction for set:

The passage below is accompanied by a set of questions. Choose the best answer to each question.

The word ‘anarchy’ comes from the Greek ‘anarkhia’, meaning contrary to authority or without a ruler, and was used in a derogatory sense until 1840, when it was adopted by Pierre-Joseph Proudhon to describe his political and social ideology. Proudhon argued that organization without government was both possible and desirable. In the evolution of political ideas, anarchism can be seen as an ultimate projection of both liberalism and socialism, and the differing strands of anarchist thought can be related to their emphasis on one or the other of these.

Historically, anarchism arose not only as an explanation of the gulf between the rich and the poor in any community, and of the reason why the poor have been obliged to fight for their share of a common inheritance, but as a radical answer to the question ‘What went wrong?’ that followed the ultimate outcome of the French Revolution. It had ended not only with a reign of terror and the emergence of a newly rich ruling caste, but with a new adored emperor, Napoleon Bonaparte, strutting through his conquered territories.

The anarchists and their precursors were unique on the political Left in affirming that workers and peasants, grasping the chance that arose to bring an end to centuries of exploitation and tyranny, were inevitably betrayed by the new class of politicians, whose first priority was to re-establish a centralized state power. After every revolutionary uprising, usually won at a heavy cost for ordinary populations, the new rulers had no hesitation in applying violence and terror, a secret police, and a professional army to maintain their control.

For anarchists the state itself is the enemy, and they have applied the same interpretation to the outcome of every revolution of the 19th and 20th centuries. This is not merely because every state keeps a watchful and sometimes punitive eye on its dissidents, but because every state protects the privileges of the powerful.

The mainstream of anarchist propaganda for more than a century has been anarchist-communism, which argues that property in land, natural resources, and the means of production should be held in mutual control by local communities, federating for innumerable joint purposes with other communes. It differs from state socialism in opposing the concept of any central authority. Some anarchists prefer to distinguish between anarchist-communism and collectivist anarchism in order to stress the obviously desirable freedom of an individual or family to possess the resources needed for living, while not implying the right to own the resources needed by others. . . .

There are, unsurprisingly, several traditions of individualist anarchism, one of them deriving from the ‘conscious egoism’ of the German writer Max Stirner (1806-56), and another from a remarkable series of 19th-century American figures who argued that in protecting our own autonomy and associating with others for common advantages, we are promoting the good of all. These thinkers differed from free-market liberals in their absolute mistrust of American capitalism, and in their emphasis on mutualism.

Question 86

The author makes all of the following arguments in the passage, EXCEPT:


Question 87

The author believes that the new ruling class of politicians betrayed the principles of the French Revolution, but does not specify in what way. In the context of the passage, which statement below is the likeliest explanation of that betrayal?


Question 88

Which one of the following best expresses the similarity between American individualist anarchists and free-market liberals as well as the difference between the former and the latter?


Question 89

Of the following sets of concepts, identify the set that is conceptually closest to the concerns of the passage.


Question 90

According to the passage, what is the one idea that is common to all forms of anarchism?


Question 91

Four sentences are given below. These sentences, when rearranged in proper order, form a logical and meaningful paragraph. Rearrange the sentences and enter the correct order as the answer.

1. He was compelled to stick his nose above the surface in order to breathe or “blow,” and then down he would go again as quick as possible.
2. I was obliged to light the basement with gas, and that frightened the sea-monster to such an extent that he kept at the bottom of the tank.
3. I succeeded in placing it, “in good condition,” in a large tank, fifty feet long, and supplied with salt water, in the basement of the American Museum.
4. Several years ago, I purchased a living white whale.


Question 92

The passage given below is followed by four summaries. Choose the option that best captures the author’s position.

An organization’s core capabilities are those activities that, when performed at the highest level, enable the organization to bring its where-to-play and how-to-win choices to life. They are best understood as operating as a system of reinforcing activities—a concept first articulated by Harvard Business School’s Michael Porter. Porter noted that powerful and sustainable competitive advantage is unlikely to arise from any one capability (e.g., having the best sales force in the industry or the best technology in the industry), but rather from a set of capabilities that both fit with one another (i.e., that don’t conflict with one another) and actually reinforce one another (i.e., that make each other stronger than they would be alone).


Question 93

Five sentences are given below. Four of these, when rearranged properly, form a logical and meaningful paragraph. Identify the sentence which does not belong to this paragraph and then enter its number as the answer.
1. Of course, this was not so clear then.
2. In the discharge of all these duties and in all his relations with men, whether above him in office or under his command, he had shown himself trustworthy and efficient, a man of clear mind and decisive action—one who commanded men’s respect, obedience, and even love.
3. In electing George Washington commander-in-chief of the Continental army, the Continental Congress probably made the very wisest choice possible.
4. But they had learned enough about his wonderful power over men and his great skill as a leader in time of war to believe that he was the man to whom they might trust the great work of directing the army in this momentous crisis.
5. For even leaders like Samuel Adams and John Adams and Patrick Henry did not know Washington’s ability as we have come to know it now.


Question 94

There is a sentence that is missing in the paragraph below. Look at the paragraph and decide where (option 1, 2, 3, or 4) the following sentence would best fit.

Sentence : Even in the 21st century, despite the remarkable increase in global wealth and technological advancements, famines persist.

Paragraph : …….1…… Throughout history, famines have represented one of the harshest manifestations of human vulnerabilities and destitution …….2……. In the current context, marked by a deterioration of food insecurity, we find ourselves moving away from the “zero hunger goal” of the sustainable development goals (SDG) ……3..….. “Hunger” and “famine” represent distinct phases in the complex process of escalating extreme human vulnerabilities. Hunger signifies the insufficiency of micronutrients in the body, often becoming a chronic issue in certain societies whereas famine is a humanitarian emergency characterized by extreme mass starvation, leading to increased mortality among destitute families …...4…...


Instruction for set :

The passage below is accompanied by a set of questions. Choose the best answer to each question.

Starting in 1957, [Noam Chomsky] proclaimed a new doctrine: Language, that most human of all attributes, was innate. The grammatical faculty was built into the infant brain, and your average 3-year-old was not a mere apprentice in the great enterprise of absorbing English from his or her parents, but a “linguistic genius.” Since this message was couched in terms of Chomskyan theoretical linguistics, in discourse so opaque that it was nearly incomprehensible even to some scholars, many people did not hear it. Now, in a brilliant, witty and altogether satisfying book, Mr. Chomsky's colleague Steven Pinker . . . has brought Mr. Chomsky's findings to everyman. In “The Language Instinct” he has gathered persuasive data from such diverse fields as cognitive neuroscience, developmental psychology and speech therapy to make his points, and when he disagrees with Mr. Chomsky he tells you so. . . .

For Mr. Chomsky and Mr. Pinker, somewhere in the human brain there is a complex set of neural circuits that have been programmed with “super-rules” (making up what Mr. Chomsky calls “universal grammar”), and that these rules are unconscious and instinctive. A half-century ago, this would have been pooh-poohed as a “black box” theory, since one could not actually pinpoint this grammatical faculty in a specific part of the brain, or describe its functioning. But now things are different. Neurosurgeons [have now found that this] “black box” is situated in and around Broca’s area, on the left side of the forebrain. . . .

Unlike Mr. Chomsky, Mr. Pinker firmly places the wiring of the brain for language within the framework of Darwinian natural selection and evolution. He effectively disposes of all claims that intelligent nonhuman primates like chimps have any abilities to learn and use language. It is not that chimps lack the vocal apparatus to speak; it is just that their brains are unable to produce or use grammar. On the other hand, the “language instinct,” when it first appeared among our most distant hominid ancestors, must have given them a selective reproductive advantage over their competitors (including the ancestral chimps). . . .

So according to Mr. Pinker, the roots of language must be in the genes, but there cannot be a “grammar gene” any more than there can be a gene for the heart or any other complex body structure. This proposition will undoubtedly raise the hackles of some behavioral psychologists and anthropologists, for it apparently contradicts the liberal idea that human behavior may be changed for the better by improvements in culture and environment, and it might seem to invite the twin bugaboos of biological determinism and racism. Yet Mr. Pinker stresses one point that should allay such fears. Even though there are 4,000 to 6,000 languages today, they are all sufficiently alike to be considered one language by an extraterrestrial observer. In other words, most of the diversity of the world’s cultures, so beloved to anthropologists, is superficial and minor compared to the similarities. Racial differences are literally only “skin deep.” The fundamental unity of humanity is the theme of Mr. Chomsky's universal grammar, and of this exciting book.

Question 95

Which one of the following statements best summarises the author’s position about Pinker’s book?


Question 96

According to the passage, all of the following are true about the language instinct EXCEPT that:


Question 97

On the basis of the information in the passage, Pinker and Chomsky may disagree with each other on which one of the following points?


Question 98

From the passage, it can be inferred that all of the following are true about Pinker’s book, “The Language Instinct”, EXCEPT that Pinker:


Instruction for set :

The passage below is accompanied by a set of six questions. Choose the best answer to each question.

Understanding where you are in the world is a basic survival skill, which is why we, like most species, come hard-wired with specialised brain areas to create cognitive maps of our surroundings. Where humans are unique, though, with the possible exception of honeybees, is that we try to communicate this understanding of the world with others. We have a long history of doing this by drawing maps — the earliest versions yet discovered were scrawled on cave walls 14,000 years ago. Human cultures have been drawing them on stone tablets, papyrus, paper and now computer screens ever since.

Given such a long history of human map-making, it is perhaps surprising that it is only within the last few hundred years that north has been consistently considered to be at the top. In fact, for much of human history, north almost never appeared at the top, according to Jerry Brotton, a map historian... "North was rarely put at the top for the simple fact that north is where darkness comes from," he says. "West is also very unlikely to be put at the top because west is where the sun disappears."

Confusingly, early Chinese maps seem to buck this trend. But, Brotton says, even though they did have compasses at the time, that isn't the reason that they placed north at the top. Early Chinese compasses were actually oriented to point south, which was considered to be more desirable than deepest darkest north. But in Chinese maps, the Emperor, who lived in the north of the country, was always put at the top of the map, with everyone else, his loyal subjects, looking up towards him. "In Chinese culture the Emperor looks south because it's where the winds come from, it's a good direction. North is not very good but you are in a position of subjection to the emperor, so you look up to him," says Brotton.

Given that each culture has a very different idea of who, or what, they should look up to, it's perhaps not surprising that there is very little consistency in which way early maps pointed. In ancient Egyptian times the top of the world was east, the position of sunrise. Early Islamic maps favoured south at the top because most of the early Muslim cultures were north of Mecca, so they imagined looking up (south) towards it. Christian maps from the same era (called Mappa Mundi) put east at the top, towards the Garden of Eden and with Jerusalem in the centre.

So when did everyone get together and decide that north was the top? It's tempting to put it down to European explorers like Christopher Columbus and Ferdinand Magellan, who were navigating by the North Star. But Brotton argues that these early explorers didn't think of the world like that at all. "When Columbus describes the world it is in accordance with east being at the top," he says. "Columbus says he is going towards paradise, so his mentality is from a medieval mappa mundi." We've got to remember, adds Brotton, that at the time, "no one knows what they are doing and where they are going."

Question 99

Which one of the following best describes what the passage is trying to do?


Question 100

Early maps did NOT put north at the top for all the following reasons EXCEPT


Question 101

According to the passage, early Chinese maps placed north at the top because


Question 102

It can be inferred from the passage that European explorers like Columbus and Magellan


Question 103

Which one of the following about the northern orientation of modern maps is asserted in the passage?


Question 104

The role of natural phenomena in influencing map-making conventions is seen most clearly in


Instruction for set :

The passage below is accompanied by a set of questions. Choose the best answer to each question.

Nature has all along yielded her flesh to humans. First, we took nature’s materials as food, fibers, and shelter. Then we learned to extract raw materials from her biosphere to create our own new synthetic materials. Now Bios is yielding us her mind—we are taking her logic.

Clockwork logic—the logic of the machines—will only build simple contraptions. Truly complex systems such as a cell, a meadow, an economy, or a brain (natural or artificial) require a rigorous nontechnological logic. We now see that no logic except bio-logic can assemble a thinking device, or even a workable system of any magnitude.

It is an astounding discovery that one can extract the logic of Bios out of biology and have something useful. Although many philosophers in the past have suspected one could abstract the laws of life and apply them elsewhere, it wasn’t until the complexity of computers and human-made systems became as complicated as living things, that it was possible to prove this. It’s eerie how much of life can be transferred. So far, some of the traits of the living that have successfully been transported to mechanical systems are: self-replication, self-governance, limited self-repair, mild evolution, and partial learning.

We have reason to believe yet more can be synthesized and made into something new. Yet at the same time that the logic of Bios is being imported into machines, the logic of Technos is being imported into life. The root of bioengineering is the desire to control the organic long enough to improve it. Domesticated plants and animals are examples of technos-logic applied to life. The wild aromatic root of the Queen Anne’s lace weed has been fine-tuned over generations by selective herb gatherers until it has evolved into a sweet carrot of the garden; the udders of wild bovines have been selectively enlarged in an “unnatural” way to satisfy humans rather than calves. Milk cows and carrots, therefore, are human inventions as much as steam engines and gunpowder are. But milk cows and carrots are more indicative of the kind of inventions humans will make in the future: products that are grown rather than manufactured.

Genetic engineering is precisely what cattle breeders do when they select better strains of Holsteins, only bioengineers employ more precise and powerful control. While carrot and milk cow breeders had to rely on diffuse organic evolution, modern genetic engineers can use directed artificial evolution—purposeful design—which greatly accelerates improvements.

The overlap of the mechanical and the lifelike increases year by year. Part of this bionic convergence is a matter of words. The meanings of “mechanical” and “life” are both stretching until all complicated things can be perceived as machines, and all self-sustaining machines can be perceived as alive. Yet beyond semantics, two concrete trends are happening: (1) Human-made things are behaving more lifelike, and (2) Life is becoming more engineered. The apparent veil between the organic and the manufactured has crumpled to reveal that the two really are, and have always been, of one being.

Question 105

Which one of the following sets of words/phrases best serves as keywords to the passage?


Question 106

The author claims that, “Part of this bionic convergence is a matter of words”. Which one of the following statements best expresses the point being made by the author?


Question 107

The author claims that, “The apparent veil between the organic and the manufactured has crumpled to reveal that the two really are, and have always been, of one being.” Which one of the following statements best expresses the point being made by the author here?


Question 108

None of the following statements is implied by the arguments of the passage, EXCEPT:


Instruction for set :

The passage below is accompanied by a set of questions. Choose the best answer to each question.

We cannot travel outside our neighbourhood without passports. We must wear the same plain clothes. We must exchange our houses every ten years. We cannot avoid labour. We all go to bed at the same time . . . We have religious freedom, but we cannot deny that the soul dies with the body, since ‘but for the fear of punishment, they would have nothing but contempt for the laws and customs of society'. . . . In More’s time, for much of the population, given the plenty and security on offer, such restraints would not have seemed overly unreasonable. For modern readers, however, Utopia appears to rely upon relentless transparency, the repression of variety, and the curtailment of privacy. Utopia provides security: but at what price? In both its external and internal relations, indeed, it seems perilously dystopian.

Such a conclusion might be fortified by examining selectively the tradition which follows More on these points. This often portrays societies where . . . 'it would be almost impossible for man to be depraved, or wicked'. . . . This is achieved both through institutions and mores, which underpin the common life. . . . The passions are regulated and inequalities of wealth and distinction are minimized. Needs, vanity, and emulation are restrained, often by prizing equality and holding riches in contempt. The desire for public power is curbed. Marriage and sexual intercourse are often controlled: in Tommaso Campanella’s The City of the Sun (1623), the first great literary utopia after More’s, relations are forbidden to men before the age of twenty-one and women before nineteen. Communal child-rearing is normal; for Campanella, this commences at age two. Greater simplicity of life, ‘living according to nature’, is often a result: the desire for simplicity and purity are closely related. People become more alike in appearance, opinion, and outlook than they often have been. Unity, order, and homogeneity thus prevail at the cost of individuality and diversity. This model, as J. C. Davis demonstrates, dominated early modern utopianism. . . . And utopian homogeneity remains a familiar theme well into the twentieth century.

Given these considerations, it is not unreasonable to take as our starting point here the hypothesis that utopia and dystopia evidently share more in common than is often supposed. Indeed, they might be twins, the progeny of the same parents. Insofar as this proves to be the case, my linkage of both here will be uncomfortably close for some readers. Yet we should not mistake this argument for the assertion that all utopias are, or tend to produce, dystopias. Those who defend this proposition will find that their association here is not nearly close enough. For we have only to acknowledge the existence of thousands of successful intentional communities in which a cooperative ethos predominates and where harmony without coercion is the rule to set aside such an assertion. Here the individual’s submersion in the group is consensual (though this concept is not unproblematic). It results not in enslavement but voluntary submission to group norms. Harmony is achieved without . . . harming others.

Question 109

Following from the passage, which one of the following may be seen as a characteristic of a utopian society?


Question 110

All of the following arguments are made in the passage EXCEPT that:


Question 111

Which sequence of words below best captures the narrative of the passage?


Question 112

All of the following statements can be inferred from the passage EXCEPT that:


Question 113

Choose the most logical order of sentences from among the given choices to construct a coherent paragraph.

1. One big reason is that the certainty of on-then-off is a lot easier for them to navigate than a thoughtful approach to transitions.

2. Bad drivers do this often, everywhere I’ve ever been in the world.

3. If you’re going to have to stop soon, perhaps you should start coasting now.

4. Instead of gracefully and safely slowing for a light they know will be red by the time they get there, or even a stop sign, they hit the gas and then slam the brakes.


Question 114

The passage given below is followed by four alternate summaries. Choose the option that best captures the essence of the passage.

The unlikely alliance of the incumbent industrialist and the distressed unemployed worker is especially powerful amid the debris of corporate bankruptcies and layoffs. In an economic downturn, the capitalist is more likely to focus on costs of the competition emanating from free markets than on the opportunities they create. And the unemployed worker will find many others in a similar condition and with anxieties similar to his, which will make it easier for them to organize together. Using the cover and the political organization provided by the distressed, the capitalist captures the political agenda.


Question 115

Five sentences are given below. Four of these, when appropriately rearranged, form a logical and meaningful paragraph. Identify the sentence which does not belong to the paragraph and enter its number as the answer.

1. It is well, however, to remember that its use has been excessive and unnecessary, and its price can be cut by wholesale voluntary abstinence.
2. So far as is known, taking meat even in large excess is not harmful, but it represents luxury and waste.
3. With the increased distribution of wealth, the demand for meat grows.
4. Indulgence in meat is due to the desire for strong flavour.
5. Its consumption by all classes had vastly increased in all prosperous countries prior to the war.


Question 116

There is a sentence that is missing in the paragraph below. Look at the paragraph and decide where (option 1, 2, 3, or 4) the following sentence would best fit.

Sentence : This contributes to the overall effort to reduce fiscal deficit.

Paragraph : Administrative reforms in expenditure on centrally sponsored schemes and projects have delivered big savings on interest payments ……1…… Finmin is justified in seeking wider coverage of the single nodal agency (SNA) system for transferring funds to states ……2…… SNA is transparent in that funds are debited into specific accounts only at the stage they’re needed ……3…… This improves scheme and project monitoring. More importantly, SNA corrects against money idling in accounts and, thus, reduces the interest bill. Finally, it generates a wealth of data that can inform public policy ……4……
