LSAT and Law School Admissions Forum

Get expert LSAT preparation and law school admissions advice from PowerScore Test Preparation.

 Brook Miscoski
PowerScore Staff
  • Posts: 418
  • Joined: Sep 13, 2018
#60297
Freddy,

Please see the prior comment about the difference between "reliable" and "accurate," as well as another reason to eliminate (E).

Regarding (D), the administrator does not say that the air traffic control tapes are inaccurate; he simply says that the review was partial. He's not saying the tapes reviewed were wrong about specific flights, just that they might not be representative of the other flights.

Regarding whether (B) is extreme, it helps to go back to the stimulus to see what data the administrator believes is superior. He thinks that flight reports created by the pilots are superior to a sampling of the recorded events. But if a pilot made a mistake, why should we expect the pilot to realize that? These are minor course deviations, not airline accidents. The pilot's perception and memory are not necessarily reliable (trustworthy), since there's every reason to believe the pilot just didn't notice. (B) is not extreme; it's a great explanation of why sampling the recordings of the events could be more reliable than counting on the pilots to notice extremely minor things.
 Brook Miscoski
PowerScore Staff
  • Posts: 418
  • Joined: Sep 13, 2018
#60298
Origami,

For "reliable" versus "accurate," please see the preceding comments. The administrator thinks that using the larger data set is more reliable--more likely to produce a correct answer. It's an odds argument, not a commitment to which number is actually correct.

A flaw question stem gives you a "must be true" task in the sense that the answer must describe something that occurred in the stimulus. However, you are describing an error that the stimulus made, so it is not as direct as repeating the stimulus. The administrator's premise is that self-reporting by the pilots who sometimes stray off course is a reliable source of information. However, the difference between the self-reporting and the sampled recordings implies that pilots might not realize they had gone off course. Answer choice (B), which points out that pilots who cannot be relied on to stay on course might not be a reliable source of information about whether they strayed off course, is not introducing new information.
 chernyshevsky
  • Posts: 1
  • Joined: Oct 20, 2019
#71415
Brook Miscoski wrote: Regarding whether (B) is extreme, it helps to go back to the stimulus to see what data the administrator believes is superior. He thinks that flight reports created by the pilots are superior to a sampling of the recorded events. But if a pilot made a mistake, why should we expect the pilot to realize that?
There's nothing in the stimulus connecting flights that stray off course during landing with pilot mistakes. A strong crosswind could easily be the reason behind all of the landing incidents.
 Rachael Wilkenfeld
PowerScore Staff
  • Posts: 1358
  • Joined: Dec 15, 2011
#71442
Hi chernyshevsky,

Is it common sense to assume that ALL of the strays were due to crosswinds? Would a strong crosswind be something that a pilot would not be expected to account for? It would seem that out of 2 million flights, at least some would involve pilot error. With that in mind, answer choice (B) is the one that describes a flaw. If there is the possibility of pilot error, we can't trust the pilots' self-reports.

Imagine a company wants to know how much its employees walk during the day. It decides to hold a contest among its 100 employees to see who walks the most, with the winner getting a prize. That data would be used to come up with an average amount walked by the group over the week. There aren't enough pedometers to go around, so there are two options for getting data: 1) 10 random people are given the pedometers, and the average is estimated from those 10; or 2) everyone self-reports their miles. Which is more reliable? Do we have to know that anyone is lying, or is it enough to know there's likely to be motivation to lie?

That's what we have here. Answer choice (B) provides a motivation for the pilots to lie in their self-reports. We see this frequently as an error on the LSAT in terms of survey design. We can't always trust self-reports.

Hope that helps!
Rachael
 Katherinthesky
  • Posts: 36
  • Joined: Feb 07, 2020
#87742
Hello!
The "it" in (E) refers to the air traffic control tapes, correct?

Thanks in advance.
 Adam Tyson
PowerScore Staff
  • Posts: 5153
  • Joined: Apr 14, 2011
#87769
The "it" in answer E refers to "the higher number" referenced in that answer. The higher number in the stimulus is 1 in 20,000, which is the number based on the partial review of air traffic control tapes. So yes, you are correct!
 ihenson
  • Posts: 8
  • Joined: Jul 02, 2023
#102512
Hi! I've read through all the responses, and I still don't really understand how we would have known this is the answer based on common knowledge. I didn't realize that the flight reports were self-reported? The language "flight reports required of pilots for all commercial flights" doesn't necessarily mean this is self-reported data. For example, I was required to provide an immunization report for work, but it wasn't self-reported. Couldn't this have also been a report generated by data the plane or plane towers gather during flight and at the time of landing?

I'm also struggling with the "unreliable" versus "accurate" distinction in answer D. I was under the impression that while things can be reliable and inaccurate, you couldn't have something that was unreliable but accurate. Would someone be able to provide an example of something that is unreliable but accurate?
 Jeff Wren
PowerScore Staff
  • Posts: 389
  • Joined: Oct 19, 2022
#102596
Hi ihenson,

In this argument, we have two very different statistics, the 1 in 20,000 figure based on a partial review of air traffic control tapes and the 1 in 2 million figure based on a thorough study of flight reports required of pilots. The airport administrator making the argument believes that the 1 in 2 million figure is the better, "more reliable" figure, and therefore the figure that we should use in making policies when designing runways.

The problem is that we don't know what the "real" figure actually is. Is it closer to the 1 in 20,000 or closer to the 1 in 2 million?

In your question, you mention that you didn't realize that the flight reports were self-reported by the pilots and raise the possibility that the reports were generated by data the planes gathered. I understand how you wouldn't necessarily know that the reports are made by the pilots (self-reported) even with the wording "required of the pilots" in the argument for the reasons that you state, and in the "real world" they probably do use such data to avoid the very flaw that this argument has.

The major problem with your idea is that if that were the case, there wouldn't really be a flaw in the argument at all (at least in terms of favoring the 1 in 2 million figure). In other words, if the 1 in 2 million figure were based on data generated by the plane or the control tower (and there were no other reason to think that the figure was incorrect, such as a software malfunction, etc.), then that figure would presumably be the more accurate statistic.

Since we know from the question stem that the argument is flawed, you should be thinking about how that 1 in 2 million figure may be wrong. Answer choice (B) gives a reason. Since self-reporting errors (and people lying on surveys in general) are common survey flaws tested on the LSAT, answer choice (B) should alert you to what may be going on here and explain the huge discrepancy between these two statistics.

"Accurate" just reflects the real results of the survey. If I survey 3 people and exactly 2 of them have brown eyes, then it is "accurate" to report that 2 of the 3 people surveyed had brown eyes. Because the sample size is so small however, it would not be "reliable" to use these results to draw any broad conclusion about the proportion of people in our society who have brown eyes.

In this question, the 1 in 20,000 figure may be "accurate" in the sense that, of the data reviewed, 1 in 20,000 is the correct figure. It may not be reliable, however, if the data were not from a large enough sample size, just as in the above example.
