Set-4 Reading Comprehension For SBI PO and SBI Clerk 2019 | Must Go Through These Questions

Dear Aspirants,

We are providing the most important Reading Comprehension for SBI PO 2019, SBI Clerk 2019 and all other competitive bank and insurance exams. These questions are highly likely to appear in SBI PO 2019 and SBI Clerk 2019.

Directions (1-10): Read the passage carefully and answer the questions that follow:

Autonomous weapons – killer robots that can attack without a human operator – are dangerous tools. There is no doubt about this fact. As tech entrepreneurs such as Elon Musk, Mustafa Suleyman and other signatories to a recent open letter to the United Nations have put it, autonomous weapons ‘can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons [that can be] hacked to behave in undesirable ways’.

But this does not mean that the UN should implement a preventive ban on the further development of these weapons, as the signatories of the open letter seem to urge.

For one thing, it sometimes takes dangerous tools to achieve worthy ends. Think of the Rwandan genocide, where the world simply stood by and did nothing. Had autonomous weapons been available in 1994, maybe we would not have looked away. It seems plausible that if the costs of humanitarian interventions were purely monetary, then it would be easier to gain widespread support for such interventions.

For another thing, it is naive to assume that we can enjoy the benefits of the recent advances in artificial intelligence (AI) without being exposed to at least some downsides as well. Suppose the UN were to implement a preventive ban on the further development of all autonomous weapons technology. Further suppose – quite optimistically, already – that all armies around the world were to respect the ban, and abort their autonomous-weapons research programmes. Even with both of these assumptions in place, we would still have to worry about autonomous weapons. A self-driving car can be easily re-programmed into an autonomous weapons system: instead of instructing it to swerve when it sees a pedestrian, just teach it to run over the pedestrian.

To put the point more generally, AI technology is tremendously useful, and it already permeates our lives in ways we don’t always notice, and aren’t always able to comprehend fully. Given its pervasive presence, it is shortsighted to think that the technology’s abuse can be prevented if only the further development of autonomous weapons is halted. In fact, it might well take the sophisticated and discriminate autonomous-weapons systems that armies around the world are currently in the process of developing if we are to effectively counter the much cruder autonomous weapons that are quite easily constructed through the reprogramming of seemingly benign AI technology such as the self-driving car.

Furthermore, the notion of a simple ban at the international level, among state actors, tacitly betrays a view of autonomous weapons that is overly simplistic. It is a conception that fails to acknowledge the long causal backstory of institutional arrangements and individual actors who, through thousands of little acts of commission and omission, have brought about, and continue to bring about, the rise of such technologies. As long as the debate about autonomous weapons is framed primarily in terms of UN-level policies, the average citizen, soldier or programmer must be forgiven for assuming that he or she is absolved of all moral responsibility for the wrongful harm that autonomous weapons risk causing. But this assumption is false, and it might prove disastrous.

All individuals who in some way or other deal with autonomous-weapons technology have to exercise due diligence, and each and every one of us needs to examine carefully how his or her actions and inactions are contributing to the potential dangers of this technology. This is by no means to say that state and intergovernmental agencies do not have an important role to play as well. Rather, it is to emphasise that if the potential dangers of autonomous weapons are to be mitigated, then an ethic of personal responsibility must be promoted, and it must reach all the way down to the level of the individual decision-maker. For a start, it is of the utmost importance that we begin telling a richer and more complex story about the rise of autonomous weapons – a story that includes the causal contributions of decision-makers at all levels.

Finally, it is sometimes insinuated that autonomous weapons are dangerous not because they are dangerous tools but because they could become autonomous agents with ends and interests of their own. This worry is either misguided, or else it is a worry that a preventive ban on the further development of autonomous weapons could do nothing to alleviate. If superintelligence is a threat to humanity, we urgently need to find ways to deal effectively with this threat, and we need to do so quite independently of whether autonomous-weapons technology is developed further.

1. Which of the following points cannot be inferred from the passage?

2. Why does the author give the example of a self-driving car?

3. Which of the following options can be inferred from the passage that starts with ‘Furthermore’?

4. Which of the following sentences captures the essence of the penultimate paragraph?

5. What is meant by the sentence mentioned in bold?

6. What does the author try to convey through the passage?

7. The tone of the passage can be said to be

8. Which of the following words is the closest in meaning to the word ‘insinuated’ as used in the passage?

9. Which of the following words is the closest in meaning to the word ‘pervasive’ as used in the passage?

10. Which of the following words is the farthest in meaning to the word ‘benign’ as used in the passage?

Check your Answers below:


  • 1. Question

    Which of the following points cannot be inferred from the passage?

    Ans:4

    The first paragraph of the passage tells us how tech entrepreneurs have signed an open letter urging the UN to impose a ban on the development of autonomous weapons. Therefore, option A can be inferred.

    In the second paragraph, the author states that the world simply stood by and did nothing during the Rwandan genocide. Therefore, we can infer option B.

    The author claims that the genocide could have been tackled had autonomous weapons been available at that time. Therefore, we can infer option C as well.

    Option E can be inferred from the first paragraph. The signatories urged the UN to impose a ban on autonomous weapons. Therefore, option E can be inferred.

    Option D states that monetary concerns prevented nations from interfering in the Rwandan genocide. But the line ‘It seems plausible that if the costs of humanitarian interventions were purely monetary, then it would be easier to gain widespread support for such interventions’ implies that the obstacles were not purely monetary. Therefore, option D cannot be inferred and hence, option D is the right answer.

  • 2. Question

    Why does the author give the example of a self-driving car?

    Ans:5
    The author uses ‘self-driving cars’ as an illustration to show that even if autonomous weapons are banned, it takes only minor tweaks to convert other autonomous products into weapons. Therefore, the author wants to convey that banning autonomous weapons will not yield the desired results. He wants to emphasize that other autonomous objects are no safer than autonomous weapons and hence, there is no basis for singling out autonomous weapons for a ban. Hence, option E is the right answer.
  • 3. Question

    Which of the following options can be inferred from the passage that starts with ‘Furthermore’?

    Ans:2
    The author wants to convey that an international ban is unlikely to deter individuals from developing such technologies. The line ‘It is a conception that fails to acknowledge the long causal backstory of institutional arrangements and individual actors who, through thousands of little acts of commission and omission, have brought about, and continue to bring about, the rise of such technologies’ conveys that an international ban fails to acknowledge the role played by individuals. Therefore, option B is the right answer.
  • 4. Question

    Which of the following sentences captures the essence of the penultimate paragraph?

    Ans:4
    The paragraph states that though the role played by international bodies cannot be discounted, individuals cannot be absolved of their responsibilities either. Both are equally responsible, and individuals must exercise caution while making choices. Therefore, option D is the right answer.
  • 5. Question

    What is meant by the sentence mentioned in bold?

    Ans:2
    Through the sentence mentioned in bold, the author intends to convey that we might require the assistance of autonomous weapons to keep in check the threat of other autonomous machines being turned into weapons. Autonomous devices are widespread, and to tackle their misuse, we might need to resort to the very autonomous weapons we are opposing. Therefore, option B is the right answer.
  • 6. Question

    What does the author try to convey through the passage?

    Ans:4
    Throughout the passage, the author argues that autonomous weapons are not the only AI technologies that can be hacked. He argues that autonomous weapons are no more vulnerable than other autonomous technologies such as self-driving cars. Therefore, autonomous weapons are not at any particular disadvantage, and banning them would be a short-sighted approach. Only option D captures these points. Therefore, option D is the right answer.
  • 7. Question

    The tone of the passage can be said to be

    Ans:4
    Throughout the passage, the author presents logical arguments in support of autonomous weapons. His intention is not to raise alarm about the rise of autonomous technologies. Therefore, we can eliminate option A. The passage does not contain any humor, so we can eliminate option E as well. The author does not look down upon any entity or use a derogatory tone. Therefore, we can eliminate option B too.
    The author explains why banning autonomous weapons would be a wrong decision and presents arguments to support this view. Therefore, the tone of the passage can be said to be explanatory and hence, option D is the right answer.
  • 8. Question

    Which of the following words is the closest in meaning to the word ‘insinuated’ as used in the passage?

    Ans:3
    ‘Insinuated’ means suggested indirectly, without being stated explicitly. ‘Implied’ is the term closest in meaning among the given options. Therefore, option C is the right answer.
  • 9. Question

    Which of the following words is the closest in meaning to the word ‘pervasive’ as used in the passage?

    Ans:4
    ‘Pervasive’ means ‘widespread’. Therefore, option D is the right answer.
  • 10. Question

    Which of the following words is the farthest in meaning to the word ‘benign’ as used in the passage?

    Ans:3
    ‘Benign’ is a term used to describe harmless things.
    Innocuous means harmless.
    Immaculate means clean.
    Malignant means harmful.
    Therefore, option C is the right answer.
