9 Infrastructure Flashcards

(130 cards)

1
Q

How can attempts to understand “the social” be positioned?

A

On a spectrum from agency-centred (focus on individuals/interaction) to structure-centred (focus on social structures shaping action).

2
Q

What does “agency-centred” mean in social theory?

A

Explanations emphasise individual meaning-making and social action, and how order is produced through everyday interaction.

3
Q

What does “structure-centred” mean in social theory?

A

Explanations emphasise social facts/norms, relations of production, or deeper universal structures that shape social life.

4
Q

What does it mean that social theory often treats “the social” as sui generis?

A

“The social” is seen as a phenomenon in its own right, either emerging from interaction or being shaped by structure.

5
Q

In public discourse, what is “the social” commonly taken to mean?

A

The domain of society related to people in interaction, often contrasted with areas like economics, politics, or nature.

6
Q

Which approaches challenge separating “people” from “society/nature”?

A

Actor-Network Theory (ANT), feminist critique, and non-Western approaches argue against separating the “people” part from the rest of society or nature.

7
Q

What key claim does ANT make about “objects” and “the social”?

A

Objects play a role in social life; ANT rejects an “artificial divide” between social and technical dimensions.

8
Q

How do material artefacts relate to “how we live together”?

A

Material artefacts are built into how we live together; they help constitute social relations and practices.

9
Q

How do “things of nature” relate to social life in these approaches?

A

Natural elements are integrated into “how we live together,” not treated as outside the social.

10
Q

What does analysis focus on in these approaches (ANT/feminist/non-Western)?

A

The “how” of living together, explicitly including the role of objects/artefacts and nature.

11
Q

Give examples of theorists/concepts often placed toward the agency end of the spectrum.

A

Weber (meaning-making/social action), Garfinkel/Blumer (interaction and shared meaning-making), Goffman (scripts).

12
Q

Give examples of theorists/concepts often placed toward the structure end of the spectrum.

A

Durkheim (norms/social facts), Marx (means & relations of production), Lévi-Strauss (universal mental structures).

13
Q

Material artefact

A

A physical object created, modified, or used by people that embodies cultural, historical, or functional significance.

14
Q

Connect artefacts to each other and you get

A

Infrastructure.

15
Q

“The notion of infrastructure commonly refers to the networked technical support structures that facilitate the provision of services and the movement of goods, people, and ideas through space.” (Niewoehner 2022)

A

Infrastructure as an object in the world

16
Q

‘Infrastructure’ focuses our attention on “the constant interweaving of technical objects, social organization, knowledge practices, and moral orders.” (ibid.)

A

Infrastructure as an analytic to understand the world

17
Q

What is social determinism in the technology–society debate?

A

The view that society determines what technology is and does; technology is treated as a comparatively passive object shaped by human power.

18
Q

What is technological determinism?

A

The view that technology determines what society can be; technology is treated as powerful, shaping a comparatively passive society.

19
Q

What key question does Winner pose about artifacts and politics?

A

Whether links between technical systems and power/authority come from “intractable properties in the things themselves” or are imposed by institutions (e.g., a ruling class or governing body).

20
Q

Winner’s weak version: what does it claim?

A

Technologies can be designed to support certain social orders; many technologies have interpretive flexibility (different social uses, meanings, or design choices).

21
Q

Winner’s strong version: what does it claim?

A

Some technologies are strongly linked to specific social orders, with few meaningful alternatives—initial choices have major consequences.

22
Q

What does Winner mean by “inherently political technologies”?

A

Technologies that require or strongly fit particular social relations; this can range from “compatibility” to strict requirement, and the relations can be internal or external to the technology’s workings.

23
Q

Why is nuclear power often used as an example in Winner’s strong view?

A

It’s presented as a case where the technology’s operation is tied to highly specific power/authority arrangements (an “inherently political” artifact).

24
Q

According to the course slide, what is Bourdieu’s key point about material culture?

A

Everyday life inscribes itself into material culture, and material culture then stabilises everyday life/social order, creating path dependencies.

25
Q

What are path dependencies in this context?

A

Once material arrangements (infrastructure, artifacts, routines) are established, they lock in certain ways of acting and organizing society, making alternatives harder.

26
Q

What is Latour’s core methodological claim about material culture and social order?

A

The connection should not be assumed a priori; it must be investigated empirically by observing actor-networks (e.g., via ethnography of infrastructure).

27
Q

How does Joerges describe the “moral issue” raised by Winner’s argument?

A

Choosing to “live in” specific material artifacts is ipso facto a political decision; e.g., you can’t simply choose nuclear plants and still preserve certain democratic forms.

28
Q

How does Latour (via Joerges) differ from Winner regarding the power of things?

A

For Latour, power doesn’t lie in artifacts themselves but in how they are networked/associated with other actants; the effects are contingent on configurations.

29
Q

“The Ethnography of Infrastructure”

A

Susan Leigh Star.

30
Q

Infrastructure is sunk into other structures, social arrangements, and technologies.

A

Embeddedness.

31
Q

Infrastructure does not have to be reinvented each time or assembled for each task.

A

Transparency.

32
Q

Infrastructure has a temporal or spatial reach beyond a single event or one-site practice.

A

Reach.

33
Q

The taken-for-grantedness of infrastructure is a sine qua non of membership in a community of practice. Strangers and outsiders encounter infrastructure as a target object to be learned about.

A

Learned as part of membership.

34
Q

Infrastructure both shapes and is shaped by the conventions of a community of practice.

A

Links with conventions of practice.

35
Q

Infrastructure plugs into other infrastructures and tools in a standardized fashion.

A

Embodiment of standards.

36
Q

Infrastructure does not grow de novo; it wrestles with the inertia of the installed base and inherits strengths and limitations from that base.

A

Built on an installed base.

37
Q

The normally invisible quality of working infrastructure becomes visible when it breaks down.

A

Becomes visible upon breakdown.

38
Q

Infrastructure is big, layered, and complex, and hence never changed from above. Changes take time, negotiation, and adjustments.

A

Is fixed in modular increments, not all at once or globally.

39
Q

Seeing Like a State

A

James C. Scott.

40
Q

What does Scott mean by “seeing like a state”?

A

A way of governing in which states (and powerful private actors) try to manage society through large-scale interventions that require simplified, administrative “views” of complex realities.

41
Q

What kinds of actors “see like a state” (in the slide)?

A

States and powerful private actors.

42
Q

Give examples of large-scale interventions mentioned on the slide.

A

Health insurance, spatial planning, taxation, education, and deep tech.

43
Q

What is the ideal stated aim of these interventions?

A

To “improve the human condition,” i.e., to make people’s lives better.

44
Q

What must large-scale interventions commonly do first?

A

Make populations, territory, or exchange systems (the economy) “legible.”

45
Q

Define legibility (Scott).

A

The process of simplifying complex social, economic, and ecological practices into forms a state can easily understand and manage.

46
Q

What are classic examples of legibility tools (from the slide)?

A

Uniform surnames, cadastral maps, and standardized agricultural plots.

47
Q

Why can “schemes to improve the human condition” fail, according to the slide?

A

Because top-down schemes can assume society is fully modellable/abstractable, ignore local practices and environmental knowledge, and rely on uniform standardized categories that erase variation.

48
Q

Apply “legibility” to health insurance: what is made legible, and how?

A

People’s health and care become legible via categories (diagnosis codes, eligibility classes, standardized forms), enabling administration at scale, but at the risk of missing local and individual complexity.

49
Q

Practical diagnostic question: how do you spot a “seeing like a state” move in a policy/tech project?

A

Look for steps where messy realities are turned into uniform categories, maps, registries, IDs, or standards so a central actor can act on them at scale.

50
Q

Susan Leigh Star and James C. Scott

A

Both study “background ordering”: both are interested in how social order is produced through arrangements that often fade into the background. For Star, this is infrastructure: embedded, far-reaching arrangements that are transparent to competent users and tightly linked to practice. For Scott, it is the state’s administrative “view”: large-scale interventions require making society legible so it can be governed.

51
Q

Main object of analysis: infrastructure vs. state simplification

A

Star (infrastructure ethnography) looks at infrastructure as embedded, transparent, extending across time and space, and learned as part of membership in a community of practice. Scott (seeing like a state) looks at how states (and powerful private actors) intervene at scale and therefore need legibility, i.e., the simplification of complex practices into manageable forms.

52
Q

Viewpoint: inside practice vs. governing from above

A

Star emphasizes the insider perspective: members don’t notice infrastructure because it is “taken for granted”; outsiders encounter it as something to learn. Scott emphasizes the administrator/planner perspective: governance works by turning complexity into categories like names, maps, and standardized plots.

53
Q

How do Scott and Star fit together?

A

You can treat legibility projects (Scott) as a kind of infrastructuring: states build standards, registries, maps, and categories that become infrastructure. Once established, these systems can become transparent and “learned as part of membership” for those who operate within them (Star).

54
Q

Application example (Scott and Star)

A

Scott lens: cadastral maps make territory legible so the state can tax, plan, and administer land. Star lens: for surveyors, planners, and property administrators, the registry becomes infrastructure: embedded in work routines, transparent to insiders, but confusing to outsiders who must learn its categories and procedures.

55
Q

When do top-down schemes to “improve the human condition” tend to fail?

A

When they assume society can be perfectly modelled/abstracted, ignoring real complexity.

56
Q

What is a second common reason these schemes fail?

A

They ignore local practices and environmental knowledge, so the solution doesn’t fit the context.

57
Q

What is a third common reason these schemes fail?

A

They rely on uniform, standardized categories that erase variation and local specificity.

58
Q

What ideology often drives overconfidence in formal planning?

A

High-modernist ideology: strong faith in formal planning and comprehensive design from above.

59
Q

How does “lack of feedback from the local level” contribute to failure?

A

Without feedback mechanisms, planners can’t correct mismatches between the plan and on-the-ground realities during implementation.

60
Q

What kind of knowledge is the state often unable to capture?

A

Informal or tacit knowledge: practical know-how and context-sensitive understanding that isn’t easily formalized.

61
Q

You design an AI system for welfare eligibility using fixed categories. What Scott-type risk appears?

A

Standardized categories erase variation, so real-life cases won’t fit the model, producing unfair or ineffective outcomes.

62
Q

A city introduces a uniform “best practice” street plan for all neighborhoods. What failure mechanism is likely?

A

Ignoring local practices and environment (how people actually move, local conditions), so the plan clashes with lived reality.

63
Q

A ministry rolls out an education reform with strict metrics and no teacher input. What are the two key risks?

A

Overconfidence in formal planning (high-modernist ideology) plus lack of local feedback, making the reform brittle and hard to adapt.

64
Q

A national health-insurance reform is based only on what can be coded in forms. What is likely to be missed?

A

Tacit/informal knowledge (complex care realities, local workarounds), which the state struggles to capture in formal categories.

65
Q

How could you redesign a top-down scheme to reduce Scott-style failure?

A

Build in local feedback loops, allow contextual adaptation, and avoid forcing everything into one uniform category system.

66
Q

What’s a warning sign that a policy assumes society can be “perfectly modelled”?

A

When it treats messy social life as fully representable in a single formal model and expects uniform compliance without exceptions.

67
Q

What does it mean to treat the ML pipeline as “infrastructure”?

A

Stop seeing it as just a technical workflow (data → model → deployment) and instead analyze it as a socio-technical system, including its dependencies, standards, installed base, breakdowns, and incremental change.

68
Q

What is embeddedness in an ML pipeline-as-infrastructure view?

A

The ML pipeline depends on and is integrated with other infrastructures and conventions (e.g., data systems, organizational routines).

69
Q

What does “learned as part of membership” mean for an ML pipeline?

A

Being competent in the organization requires learning what you need to operate within the pipeline, and how the pipeline turns the world into a particular type of “problem.”

70
Q

What does transparency mean in infrastructure terms (applied to ML pipelines)?

A

For competent users the pipeline becomes “taken for granted”: it doesn’t need to be reinvented each time and fades into the background.

71
Q

What are standards in the ML pipeline-as-infrastructure framing?

A

The pipeline relies on standards (categories, formats, evaluation conventions, processes) that allow it to connect with other tools and infrastructures.

72
Q

What is the installed base and why does it matter for ML pipelines?

A

Infrastructure is built on an existing base (legacy systems, data definitions, metrics). It doesn’t start from scratch and inherits inertia, strengths, and limitations.

73
Q

What does “infrastructure is invisible until breakdown” mean for ML pipelines?

A

The pipeline’s role becomes visible when it fails (e.g., errors, drift, missing data, harms), revealing dependencies and maintenance work.

74
Q

What are modular increments in changing ML infrastructure?

A

Infrastructure is typically introduced and fixed gradually (piecemeal modules), not globally all at once, because change requires negotiation and alignment with the installed base.

75
Q

You build a new ML model but deployment is blocked by data access rules and tooling. Which infrastructure property explains this?

A

Embeddedness / installed base: the pipeline depends on other infrastructures (data governance, legacy systems) that constrain what you can do.

76
Q

A new team member struggles because they don’t know what “ground truth” labels mean in your dataset. What does this illustrate?

A

“Learned as part of membership”: insiders take label meanings and conventions for granted; outsiders must learn the infrastructure.

77
Q

A harm incident occurs and suddenly everyone examines the labeling process, monitoring dashboards, and handoffs. What happened?

A

A breakdown made the infrastructure visible; politics and consequences that were buried in technical encodings become traceable.

78
Q

You can’t change a problematic feature definition quickly because many downstream systems rely on it. What concept is this?

A

Installed base inertia / path dependency: existing dependencies make change costly and slow.

79
Q

Your pipeline requires a fixed schema for “customer type,” forcing diverse realities into a few boxes. What infrastructure issue is this?

A

Reliance on standards/categories that enable coordination but can erase variation and embed assumptions.

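A minimal sketch of card 79’s issue in code (Python; the schema, mapping, and category names are hypothetical illustrations, not from the course): a fixed “customer type” schema forces messy realities into a few boxes, and whatever doesn’t fit is silently mapped to a default, which is exactly where variation gets erased.

# Hypothetical fixed schema: the standard that downstream systems depend on.
CUSTOMER_TYPES = {"private", "business", "public"}

def classify(raw_description: str) -> str:
    """Map a messy real-world description onto the fixed schema.
    Both the mapping and the fallback are contestable, political choices."""
    mapping = {"freelancer": "business", "ngo": "public"}
    return mapping.get(raw_description.lower(), "private")  # default erases variation

print(classify("NGO"))          # -> "public"
print(classify("cooperative"))  # -> "private": the difference disappears
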
80
Q

How does the ML pipeline-as-infrastructure idea connect to Susan Leigh Star’s “ethnography of infrastructure”?

A

It uses Star’s properties (embeddedness, transparency, learned as membership, conventions) to study ML pipelines as background systems shaping practice.

81
Q

How does it connect to Scott’s idea of legibility?

A

Digital infrastructures (including ML pipelines) shape who and what becomes legible by turning the world into standardized data and metrics that institutions can act on.

82
Q

How does it connect to “artefacts have politics” / “infrastructuring the world”?

A

Algorithms are components of digital infrastructures that help establish social/moral order; infrastructuring enables some forms of life and identity to thrive while restricting others, so you should consider the consequences.

83
Q

In supervised learning (SL) for image recognition, early categories (e.g., “bad person,” “failure”) reveal how social judgements get encoded as technical labels: label-choice bias, annotation bias, representational bias.

A

Taxonomies as political artifacts.

84
Q

Taxonomies as political artifacts

A

Classification systems (taxonomies: labels, categories, ontologies) are not neutral. By deciding what categories exist and where the boundaries are, they build particular social judgments and power relations into “objective-looking” systems, especially when those categories get embedded in infrastructures (databases, bureaucracies, ML systems). What counts as a thing? What differences matter? Who gets grouped together or separated? What is treated as “normal” vs “deviant/problematic”? Those choices advantage some perspectives and make others harder to express, so the taxonomy becomes a political artifact (an artifact that settles issues of power/authority by design).

85
Q

Why are taxonomies clearly political?

A

In supervised learning, label sets can directly encode social judgments. The slide notes early image-recognition categories such as “bad person” or “failure,” showing how moral and social evaluations can become technical labels, producing label, annotation, and representation biases. In large databases and ontologies, “technical” schema choices can systematically privilege certain groups’ knowledge. Bowker notes controversies like ethnic classification in the US census, and how dominant groups defining an ontology can reinforce their power; he also gives the example that indigenous plant knowledge may be pushed into useless “free text fields,” making it effectively invisible to the system.

86
Q

How do taxonomies as political artifacts link to Seeing Like a State (Scott)?

A

Taxonomies make people, places, and actions legible: they simplify complexity so institutions can manage it. But simplification can erase variation.

87
Q

How do taxonomies as political artifacts link to infrastructure (Star/Bowker)?

A

Once a taxonomy is built into infrastructure, it becomes taken for granted and hard to change, so the politics “disappear” into everyday routine.

88
Q

Why are “customer segments” a good example of taxonomies as political artifacts?

A

What makes customer segments “political” (in the course sense):
1) They create legibility for large-scale intervention (Scott). Segmentation simplifies messy, diverse lives into a small set of categories so an organization can manage people at scale (pricing, eligibility, marketing, service levels). That is exactly the logic of legibility: turning complexity into categories that are easy to administer.
2) Once embedded, they become infrastructure (Star). Inside the company, segments become taken for granted: they sit in CRMs, dashboards, campaign tools, call-center scripts, and KPI systems. For insiders this becomes “just how the system works”; for outsiders it is opaque and hard to contest (typical infrastructure dynamics).
3) The taxonomy distributes resources and risks. Segments often determine discounts vs. higher prices, faster support vs. waiting, product access and credit limits, and which customers are targeted for retention vs. ignored. So the classification has real consequences: this is what it means for an artifact to have politics (it can support particular social orders or power relations).

89
Q

Amazon Mechanical Turk (and similar) annotators perform the underpinning labour under tight time constraints, shaping data quality and bias.

A

Crowd workers as invisible infrastructure.

90
Q

Ethnographically explore who defines what a face, object, or emotion “is,” and whose perspective becomes the benchmark.

A

The politics of “ground truth.”

91
Q

What does “ethnographically explore” mean?

A

If you ethnographically explore an ML pipeline, you would: sit with the people who label data or review outputs; observe how categories and “ground truth” are decided; trace which tools and standards shape decisions; and pay attention to moments when the system breaks or people work around it. That is “ethnographically exploring”: learning from what people actually do in a specific setting. From Latour/ANT, it means to “follow the actors themselves” and watch how associations are made in practice, instead of assuming “the social” explains everything in advance. From Susan Leigh Star’s ethnography of infrastructure, it means looking at infrastructure as something that is embedded, transparent to competent users, and learned as part of membership, and noticing it especially when it becomes visible (e.g., for outsiders or during breakdowns).

92
Q

The relationship between material culture and social order should not be assumed a priori; it must be investigated empirically by observing actor-networks (e.g., ethnography of infrastructure). Ethnographically explore = study something by ethnography:
- observe practices in context (often participant observation)
- talk to participants (interviews)
- follow actors / actor-networks (Latour/ANT)
- trace routines, tools, categories, infrastructure
- notice breakdowns and outsiders’ perspectives (Star)

A

Bruno Latour.

93
Q

Supervised learning (SL)

A

The model learns from labelled data, where humans provide the “ground truth” labels (often via experts or crowd work).

94
Q

Self-supervised learning

A

The model learns from unlabelled data by creating its own learning signal via pretext tasks (e.g., data augmentations or reconstruction/prediction tasks), rather than using human labels during pretraining.

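A minimal code sketch of the SL/SSL contrast (Python with numpy; the toy arrays and the rotation pretext task are illustrative assumptions, not course material): in supervised learning humans supply the labels, while in self-supervised learning a pretext task generates its own labels from unlabelled data.

import numpy as np

rng = np.random.default_rng(0)
images = rng.random((8, 32, 32))  # toy stand-in for unlabelled images

# Supervised learning: humans provide the "ground truth" labels.
human_labels = np.array([0, 1, 0, 1, 1, 0, 1, 0])  # e.g., crowd-worker annotations

# Self-supervised learning: a pretext task creates its own learning signal.
def rotation_pretext(batch):
    """Rotate each image by a random multiple of 90 degrees; the rotation
    index becomes the training label, with no human annotation involved."""
    ks = rng.integers(0, 4, size=len(batch))
    rotated = np.stack([np.rot90(img, k) for img, k in zip(batch, ks)])
    return rotated, ks

pretext_inputs, pretext_labels = rotation_pretext(images)
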
95
Q

Politics of self-supervised learning

A

Issues with the data itself, the pretext task, augmentation, and downstream supervision.

96
Q

Downstream supervision

A

The supervised (labelled) training step that happens after a model has been pre-trained, e.g., after self-supervised learning, when you fine-tune the model for a specific “downstream” task (classification, detection, etc.). In other words: even if pretraining is “self-supervised,” you often still use human-provided labels later to adapt the model to an application. A key point in the materials is that this downstream fine-tuning can introduce bias, because humans and institutions decide what labels exist and how things are categorized.

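A minimal code sketch of downstream supervision (Python with numpy and scikit-learn; the embeddings and labels are synthetic stand-ins): the encoder was pretrained without labels, but human-chosen labels re-enter at fine-tuning time, and with them the label and taxonomy choices the card warns about.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend these embeddings come out of a self-supervised pretrained encoder.
embeddings = rng.normal(size=(100, 16))

# Downstream labels are a human/institutional decision: someone decided
# which categories exist and where their boundaries lie (label bias).
downstream_labels = rng.integers(0, 2, size=100)

# The "downstream supervision" step: an ordinary supervised fit on top.
head = LogisticRegression().fit(embeddings, downstream_labels)
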
97
Q

Human-induced bias in the labels themselves, e.g., during supervision or downstream fine-tuning.

A

Label bias.

98
Q

Bias in the annotation process.

A

Annotation bias.

99
Q

Class hierarchy skewed towards, e.g., Western categories.

A

Taxonomy bias.

100
Q

The model learns to be “indifferent” towards significant differences.

A

Augmentation bias.

101
Q

Often large, non-curated, unlabelled data sets.

A

Distribution bias.

102
Q

Where is the data coming from?

A

Data infrastructures.

103
Q

Curated and labelled public/private data sets; curated and unlabelled public/private data sets.

A

Revised ImageNet.

104
Q

Non-curated, labelled public/private data sets.

A

Early ImageNet.

105
Q

Non-curated, unlabelled public/private data sets.

A

Web-scraping.

106
Q

How are we (pre-)training our model/algorithm?

A

Model infrastructure.

107
Q

Medical experts in imaging.

A

Expert supervised learning.

108
Q

Crowd work.

A

Lay supervised learning: the practice of using large numbers of non-expert, paid online workers (on “crowdwork” platforms such as Amazon Mechanical Turk) to produce the labels/annotations that SL needs as its “ground truth.” Labour-politics issue: supervised learning can rely on low-paid crowdwork, which can affect model outcomes via annotation/label biases and quality issues (workers under time pressure, different interpretations shaping the dataset).

109
Q

Supervised learning (ImageNet): human labels. Self-supervised learning: data augmentations / reconstruction tasks.

A

Source of “ground truth.”

110
Q

Supervised learning (ImageNet): label bias, annotation bias, taxonomy bias. Self-supervised learning: augmentation bias, distribution bias.

A

Dominant bias.

111
Q

Supervised learning (ImageNet): WordNet categories, Western ontologies. Self-supervised learning: web-scale cultural imbalances, platform biases.

A

Cultural imprint.

112
Q

Supervised learning (ImageNet): low-paid crowdwork. Self-supervised learning: data scraping, removal of human labour → invisibilised.

A

Labour politics.

113
Q

Supervised learning (ImageNet): some curation but skewed. Self-supervised learning: often low curation, heavily skewed.

A

Representational diversity.

114
Q

Supervised learning (ImageNet): good on similar tasks. Self-supervised learning: often stronger on diverse or novel tasks.

A

Generalization.

115
Q

Supervised learning (ImageNet): encoded stereotypes, harmful categories. Self-supervised learning: ambiguous semantics, background.

A

Potential harms.

116
Q

Strategies to mitigate biases: Balanced Sampling

A

Ensure equal representation of genders, ages, and ethnicities.

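A minimal code sketch of balanced sampling (Python with numpy; the group labels and per-group target are hypothetical): each group is resampled to the same size before training so that no group dominates the learning signal.

import numpy as np

rng = np.random.default_rng(0)
groups = np.array(["a"] * 70 + ["b"] * 25 + ["c"] * 5)  # skewed raw data

target = 30  # per-group sample size (an assumption, not a rule)
balanced_idx = np.concatenate([
    rng.choice(np.where(groups == g)[0], size=target, replace=True)
    for g in np.unique(groups)
])
# balanced_idx now selects a training set with equal representation per group;
# note that small groups are oversampled, which has its own trade-offs.
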
117
Q

Strategies to mitigate biases: Synthetic Data Generation

A

Use GANs or 3D simulation for gestures and facial expressions.

118
Q

Strategies to mitigate biases: Fairness-Aware Training

A

Apply techniques like re-weighting, adversarial debiasing, or domain adaptation.

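A minimal code sketch of one technique named above, re-weighting (Python with numpy and scikit-learn; the data and the roughly 10% minority split are synthetic assumptions): examples from the under-represented group receive proportionally larger weights during training.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = rng.integers(0, 2, size=200)
group = rng.choice(["majority"] * 9 + ["minority"], size=200)  # ~10% minority

# Inverse-frequency weights: rare-group examples count for more.
counts = {g: int((group == g).sum()) for g in np.unique(group)}
weights = np.array([len(group) / counts[g] for g in group])

clf = LogisticRegression().fit(X, y, sample_weight=weights)
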
119
Q

Strategies to mitigate biases: Cross-Cultural Annotation

A

Include annotators from diverse cultural backgrounds to reduce subjective labelling bias.

120
Q

Strategies to mitigate biases: Transfer Learning Across Domains

A

Fine-tune models on smaller, diverse datasets after pre-training on large biased ones.

121
Q

Strategies to mitigate biases: Data Augmentation

A

Vary lighting, angles, and backgrounds. Simulate diversity in skin tones and clothing.

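A minimal code sketch of data augmentation (Python with numpy; the toy image, brightness range, and flip are illustrative stand-ins for the lighting/angle/background variation named above). It also hints at augmentation bias from card 100: every augmentation is a claim that the varied property “doesn’t matter,” so choosing the wrong invariances teaches the model to ignore significant differences.

import numpy as np

rng = np.random.default_rng(0)
image = rng.random((32, 32))  # toy grayscale image

def augment(img):
    out = img * rng.uniform(0.6, 1.4)  # brightness / lighting shift
    if rng.random() < 0.5:
        out = np.fliplr(out)           # horizontal flip as a crude angle proxy
    return np.clip(out, 0.0, 1.0)

augmented_batch = np.stack([augment(image) for _ in range(4)])
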
122
Q

“Artefacts have politics” (Langdon Winner)

A

Technical things are not neutral: the design and arrangement of artefacts (and larger technical systems) can build in or require particular power relations, so choosing and using certain technologies is also, in effect, a political choice. Winner distinguishes two ways this happens:
1) Design/arrangement settles a political issue. The “invention, design, or arrangement” of a device/system can become a way of settling community affairs. Course example: the story of Robert Moses’ low bridges to Long Island, allegedly built so buses couldn’t pass, thereby excluding poorer (and Black) people who relied on public transport; an “ongoing social injustice” embedded in infrastructure.
2) Inherently political technologies. Some technologies are “strongly compatible with particular kinds of political relationships.” Example in the text discussing Winner: the claim that you can’t “decide to have nuclear plants and at the same time preserve democratic social forms,” because the technology tends to require centralized control.

123
Q

Algorithms are part of digital infrastructures that contribute significantly to…

A

- who and what becomes “legible” (or not), to whom, how, and with what consequences;
- how “we” understand ourselves as individuals, groups, and societies;
- social and moral order.

124
Q

The development, maintenance, repair, etc. of infrastructure is a continuous contribution to making worlds, i.e., to allowing certain forms of life, identity, politics … to flourish while making it difficult for others.

A

Infrastructuring.

125
Q

What is “infrastructure” (in the course sense), beyond “tubes and wires”?

A

Infrastructure is understood as networked technical support structures that enable services and the movement of goods, people, and ideas, and analytically as the interweaving of technical objects, social organization, knowledge practices, and moral orders (Niewoehner 2022).

126
Q

What does “infrastructuring” mean?

A

“Infrastructuring” is the ongoing work of developing, maintaining, repairing, and extending infrastructure. It is not only technical work; it is socio-technical and shapes social possibilities. The slide frames infrastructuring as something that can enable some societal forms and identities to thrive while restricting others.

127
Q

Explain: “When you engage in infrastructuring, make sure that as many possible consequences of your choices as possible are on the table.”

A

It means you actively surface and discuss the downstream effects of design choices instead of treating them as neutral “technical details.” Because infrastructure often sits in the background and is taken for granted, its consequences can be hard to trace and its politics can be “buried in technical encodings,” so responsible infrastructuring requires making effects visible and debatable early.

128
Q

Give an example of consequences you should “put on the table” when building an algorithm as part of digital infrastructure.

A

If you build a public-service eligibility algorithm, you should put on the table how it affects legibility (“who and what becomes legible”), because that shapes societal self-understanding and social/moral order (e.g., what categories of people the state can “see” and act upon).

129
Q

What does it mean to “take responsibility for the worlds that your social and technical skills help to build”?

A

It means recognizing that building infrastructure helps produce particular social worlds: what is possible, normal, and valued; who is included or excluded; and what moral order is stabilized. So responsibility involves accountability for those socio-political effects, not just whether the system “works.”

130
Q

Why does the material say politics can be “buried” in infrastructure?

A

Because infrastructure is frequently invisible and taken for granted; maintenance work is often undervalued and invisible; and outcomes therefore become difficult to trace, making it easier for political choices to disappear into “technical” decisions and standards.