Instructions

The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.

Imagine a world in which artificial intelligence is entrusted with the highest moral responsibilities: sentencing criminals, allocating medical resources, and even mediating conflicts between nations. This might seem like the pinnacle of human progress: an entity unburdened by emotion, prejudice or inconsistency, making ethical decisions with impeccable precision. . . .

Yet beneath this vision of an idealised moral arbiter lies a fundamental question: can a machine understand morality as humans do, or is it confined to a simulacrum of ethical reasoning? AI might replicate human decisions without improving on them, carrying forward the same biases, blind spots and cultural distortions from human moral judgment. In trying to emulate us, it might only reproduce our limitations, not transcend them. But there is a deeper concern. Moral judgment draws on intuition, historical awareness and context - qualities that resist formalisation. Ethics may be so embedded in lived experience that any attempt to encode it into formal structures risks flattening its most essential features. If so, AI would not merely reflect human shortcomings; it would strip morality of the very depth that makes ethical reflection possible in the first place.

Still, many have tried to formalise ethics by treating certain moral claims not as conclusions but as starting points. A classic example comes from utilitarianism, which often takes as a foundational axiom the principle that one should act to maximise overall wellbeing. From this, more specific principles can be derived: for example, that it is right to benefit the greatest number, or that actions should be judged by their consequences for total happiness. As computational resources increase, AI becomes increasingly well-suited to the task of starting from fixed ethical assumptions and reasoning through their implications in complex situations.

But what, exactly, does it mean to formalise something like ethics? The question is easier to grasp by looking at fields in which formal systems have long played a central role. Physics, for instance, has relied on formalisation for centuries. There is no single physical theory that explains everything. Instead, we have many physical theories, each designed to describe specific aspects of the Universe: from the behaviour of quarks and electrons to the motion of galaxies. These theories often diverge. Aristotelian physics, for instance, explained falling objects in terms of natural motion toward Earth's centre; Newtonian mechanics replaced this with a universal force of gravity. These explanations are not just different; they are incompatible. Yet both share a common structure: they begin with basic postulates - assumptions about motion, force or mass - and derive increasingly complex consequences. . . .

Ethical theories have a similar structure. Like physical theories, they attempt to describe a domain - in this case, the moral landscape. They aim to answer questions about which actions are right or wrong, and why. These theories also diverge and, even when they recommend similar actions, such as giving to charity, they justify them in different ways. Ethical theories also often begin with a small set of foundational principles or claims, from which they reason about more complex moral problems.

Question 22

Which one of the options below best summarises the passage?

The passage makes three main points. First, it explains why an impersonal AI moral judge seems appealing. It then raises a central concern: moral judgment relies on intuition, historical awareness and context, so reducing ethics to rules may strip away that depth. Finally, it notes that ethics has long been given formal structure, citing utilitarianism, and uses a physics analogy to show that formal systems can coexist without merging into one final theory. The main idea is neither to reject nor to endorse such systems, but to highlight the tension and variety among them. Option C captures this best. It explains why an impersonal AI judge is appealing, notes the risk that codification can “erode case-sensitive judgement”, and still recognises that principle-based reasoning can be applied widely. By invoking the physics analogy to explain structured variety, it matches the passage’s idea that ethical systems, like physical theories, can be formal yet differ from one another.

Option A goes further than the passage by claiming that it “rejects formal methods in principle” and that AI should never be used in courts, medicine or diplomacy “under any conditions.” The passage does neither; it discusses formal ethical systems and why AI might be suited to them. Option B misreads the passage by claiming that codified systems preserve case details and that the physics analogy points toward one dominant framework. In fact, the passage warns about losing nuance and uses the physics example to show that different theories can coexist, not that they will merge. Option D is also inaccurate because it treats AI’s replication of human moral judgment as clear progress, whereas the passage keeps questioning whether replication without depth is really an improvement; the option is too one-sided.
