Assessability: drafting a footnote to Onora O’Neill

Transparency is important: Onora O’Neill tells us it is a way to build trust by demonstrating trustworthiness, not just by making things accessible. For analysts, the expectation that information is usable and interpretable, enabling people to understand answers to their questions, is natural. But her concern that information be assessable is more involved: analysis is quite hard to evaluate, and typically forms only part of the basis for the information being propagated.

However, analysis is not separable from the rest of the information; indeed, if it were, separation might be the best solution. So something more is needed to specify exactly what might be expected for analytical information to be assessable, in terms of its contribution to a claim. Jenny Saul has gone into detail about what it is to claim something, and what is in fact claimed, so that is taken for granted here. But understanding what rests on some analysis, as opposed to what is more political or rhetorical in nature, is a narrower question.

Analytical Claims

Any claim relying on some analysis rests on that analysis being procedurally correct, but an analytical claim has other features too: broadly, it is an inference about an abstraction. So we might hope to assess the correctness, the inference and the abstraction employed by the analysis underpinning the claim: first, that the analysis itself is valid; secondly, that the assumptions are suitable for the claim being made; and thirdly, that the abstraction involved corresponds to the account given in the claim. More precisely, we want to assess whether the claim is valid, contextualised and derivative, respectively, and so is based in cited evidence.

A claim may be a contrast or other estimate, and so the model may be implicit, making the assumptions obscure and therefore not assessable, even for experts. Similarly, the abstractions involved may be labelled but not defined, or commonly understood in a way that is inconsistent with the definition suitable for analytical work. Validity is contingent not just on correct calculations, but also on making corrections for bias and other discrepancies that are common in real data. For claims to be assessed, information on each of these will be required, presented in a way that makes such resolution possible for the audience.

Derivatives

When Frederick Mosteller said it was easy to lie with statistics, but easier to lie without them, he will have had several things in mind. If we step beyond the need to have some empirical source for the claim at all, there is a scale, as well as terms to be defined, to go with estimation. Absolute scale aside, most statistics are used politically as some comparison or difference, trend or effect. This is typically reducible to an elementary logical form, and that form can be compared to cited sources of evidence, to establish whether the logic of the claim is derived there. Several sources of evidence can be combined to support the synthesis of one claim; and while a political claim can be based on ideas other than evidence, it ought to be possible to assess the extent to which it is so based.
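The reduction described above can be made concrete in a small sketch. This is purely illustrative, not a method proposed in the post: the claim structure, function name and figures are all hypothetical, standing in for whatever comparison a real claim makes against a real cited source.

```python
# A minimal sketch (all names and figures hypothetical): reduce a claim
# about a trend to its elementary logical form -- a directed comparison
# between two points -- and check that form against the cited evidence.

def supported(claim, evidence):
    """Does the cited series support the claimed direction of change?"""
    before = evidence[claim["from"]]
    after = evidence[claim["to"]]
    if claim["direction"] == "fell":
        return after < before
    if claim["direction"] == "rose":
        return after > before
    return False  # a direction we cannot reduce to a comparison

# Claim: "unemployment fell between 2020 and 2021" (toy figures only).
claim = {"measure": "unemployment", "from": 2020, "to": 2021, "direction": "fell"}
evidence = {2020: 4.5, 2021: 4.1}  # the cited series, in percentage points

print(supported(claim, evidence))  # True: the claim's logic is derived from the evidence
```

The point of the exercise is not the code but the discipline it imposes: to run the check at all, the claim must first be stated precisely enough to have an elementary logical form.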

Contexts

A symbolic representation relies on precise specification of the symbols used and the domain of the sets they range across. Political statements may elide these terms, but most language is not so concerned with precision, especially as the world around us is not readily so reduced. Rather than insisting on the exactitude of the entire symbolic representation, we can assess whether the various terms meet the evidentiary standards of the claim. Thus a claim can offer the requisite context of what is included, within scope or not, and the level of confidence in this as it varies. Additionally, a claim based on statistics relies on aspects of abstraction which have limits, so we ought to assess the extent of that support. These are formally presented as model assumptions, and their imposition on context is typically around exchangeability, whereas a claim may be attending to specific classes of cases.

Validity

Assumptions restrict the context a claim applies to, but they also circumscribe what inference can be made. Yet it is common that a claim does not correspond to the evidence provided in support, and not just because it is taken out of context or extemporised. Fallacious claims typically have errors in their logic or statistical reasoning, such as claiming a causal relation without warrant. Assessing whether a claim is valid therefore requires appreciating its logic and the type of evidence sufficient to make that sort of claim. These sorts of errors have been extensively discussed in science and philosophy, as they are established concerns, whereas social policy evidence is a more recent complexity. So various types of bias and associated fallacies are documented, and primers on clear thinking have been produced for the general public as far back as the 1930s. The main issue is to assess the logic of the claim at all, rather than accept specious rhetoric along the lines of “it stands to reason” in place of any argument.

Assessment

In general terms, the requirements for assessment are about transparency: that there is information making it clear what the claim is and how it is evidenced. In political usage this will extend to clarity over exactly what is intended by the terms used, and over the coverage and exclusions in the population for both claim and evidence. Digging into these is typical of good journalism, but statistical challenge also has a more critical form, which often comes across as nitpicking when it is a more careful testing of validity and domain. While statisticians can ask about plausible alternative explanations or unmeasured influences, mathematicians often offer precise counterexamples to test generality. And philosophers might take this a step further, to say that what is important is to exchange reasons, responding to well-formed questions and challenges effectively.

Transparency

The main problem assessability encounters is information dumping as a response to demands for transparency: putting out so much material without structure is indigestible, and it also obfuscates. If published documents assume the reader knows what terms mean, or the supporting documents are not published alongside them, then claims cannot be assessed. Details of methodology are taken as read when they are in fact unwritten, and unfamiliar to outsiders, even as they can be erroneous. And it is well known that other perspectives are very important in matters of wide-ranging public policy.

This becomes more significant as we move to transparency of a particular kind as a remedy for the automation of decisions affecting consumers and users of public services. Algorithms can suffer from flawed and biased allocations and mismatched contexts, and be applied to situations beyond their design. But we cannot know that without the relevant information being provided in support. The development of standards for algorithmic transparency is promising, but it carries a number of risks, including the fundamental point that we need to be clear about the aim of transparency in the first place.

Post Script (01/09/22)

Kevin McConway has pointed out that exactly which of Onora O’Neill’s works this was a footnote to was unclear; the Ethics of Communication seems the best match. He also suggested that claims about the future would benefit from particular consideration, especially as they feature in the preceding post on Responsible Modelling, so that will benefit from a follow-up post examining how to assess a claim about the future, in the present.
