ML Data Engineering & Feature Pipelines Flashcards

(48 cards)

1
Q

What is the main role of data engineering in machine learning systems?

A

To provide reliable, well-documented pipelines that deliver feature and label data in the right shape, quality, and cadence for training and serving models.

2
Q

What is a feature in ML?

A

An input variable used by a model to make predictions, typically derived from raw data through transformations and aggregations.

3
Q

What is a label in supervised learning?

A

The target variable that the model is trained to predict, representing known outcomes for historical examples.

4
Q

Why is feature engineering often more impactful than model tweaking?

A

Better features can reveal relevant structure in data and improve signal-to-noise, while model changes may offer smaller gains on poor features.

5
Q

What is the difference between a feature and a raw field from a source system?

A

Features are cleaned, standardized, and engineered for modeling, while raw fields are direct outputs from source systems that may include noise and idiosyncrasies.

6
Q

What is a feature pipeline?

A

A data pipeline that computes, stores, and serves features for training and inference, often from raw logs or transactional data.

7
Q

Why is it useful to separate feature pipelines from model code?

A

It decouples data preparation from modeling logic, enabling feature reuse across models and clearer ownership boundaries between data work and algorithm work.

8
Q

What is an offline feature store conceptually?

A

A storage layer that holds historical feature values for training and batch scoring, usually on a warehouse or data lake.

9
Q

What is an online feature store conceptually?

A

A low-latency store that serves the latest feature values for real-time inference, often based on key-value or cache technology.

10
Q

Why is consistency between offline and online features critical?

A

If features differ between training and serving, models will see a different input distribution in production, causing performance degradation (train–serve skew).

11
Q

What is train–serve skew?

A

A mismatch between the data and features used during training and those seen at inference time in production.

12
Q

What are common causes of train–serve skew?

A

Different feature code paths offline vs online, time-dependent features not computed the same way, or using future information in training that is unavailable at inference.

13
Q

How can you reduce train–serve skew?

A

Reuse the same feature definitions in offline and online pipelines, avoid using future data, and thoroughly test end-to-end feature flows.

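As a supplement to this card: one common way to reduce skew is a single feature function shared by the batch and request-time paths. This is a minimal sketch; the names (`compute_features`, `build_training_rows`, `serve_request`) and fields are illustrative, not a specific library's API.

```python
import math

def compute_features(txn_amount: float, account_age_days: int) -> dict:
    """Single source of truth for feature logic, used offline and online."""
    return {
        "log_amount": math.log1p(max(txn_amount, 0.0)),
        "is_new_account": int(account_age_days < 30),
    }

# Offline path: applied over historical records to build training data.
def build_training_rows(history: list[dict]) -> list[dict]:
    return [compute_features(r["amount"], r["account_age_days"]) for r in history]

# Online path: the same function is called at request time, so there is
# no separate reimplementation that could drift out of sync.
def serve_request(amount: float, account_age_days: int) -> dict:
    return compute_features(amount, account_age_days)
```

Because both paths call the identical function, a change to feature logic propagates to training and serving together.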
14
Q

What is a snapshot training dataset?

A

A static extract of features and labels at a given time, used to train a model on a specific snapshot of history.

15
Q

What is a point-in-time correct training dataset?

A

A dataset that ensures features for each training example are computed only from information available up to the event time, avoiding leakage from the future.

16
Q

What is data leakage in ML datasets?

A

Using information in training that would not be available at prediction time, leading to overly optimistic metrics and poor performance in production.

17
Q

What are examples of leakage from feature engineering?

A

Using post-outcome flags as features, aggregating over future periods, or joining labels back into features inadvertently.

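As a supplement to this card: a toy illustration of the second example, aggregating over a future period. The same "prior event count" feature is computed two ways; the leaky version counts all of a user's events, including ones after the label event, while the correct version restricts the window to strictly earlier events. Field names are illustrative.

```python
from datetime import datetime

events = [
    {"user": "u1", "ts": datetime(2024, 1, 1)},
    {"user": "u1", "ts": datetime(2024, 3, 1)},  # occurs AFTER the label event
]
label_event_ts = datetime(2024, 2, 1)

def event_count_leaky(user: str) -> int:
    # Aggregates over all time, so it sees the future relative to the label.
    return sum(1 for e in events if e["user"] == user)

def event_count_correct(user: str, as_of: datetime) -> int:
    # Only counts events strictly before the label event time.
    return sum(1 for e in events if e["user"] == user and e["ts"] < as_of)
```

Here the leaky feature returns 2 while the point-in-time version returns 1, because the March event would not have existed when the February label event occurred.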
18
Q

Why is point-in-time correctness challenging?

A

It requires storing historical states of features and carefully designing joins and windows so future updates do not contaminate past examples.

19
Q

What is a common pattern for building point-in-time features?

A

Store time-stamped events and compute features using window functions or aggregations restricted to times before the label event.

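As a supplement to this card: a point-in-time join can be sketched with pandas' `merge_asof`, which attaches to each label event the most recent feature value computed strictly before the event time. Table and column names here are illustrative.

```python
import pandas as pd

# Time-stamped feature values per entity (must be sorted by "ts").
features = pd.DataFrame({
    "user": ["u1", "u1"],
    "ts": pd.to_datetime(["2024-01-01", "2024-02-01"]),
    "spend_30d": [10.0, 50.0],
}).sort_values("ts")

# Label events with their own timestamps (also sorted by "ts").
labels = pd.DataFrame({
    "user": ["u1"],
    "ts": pd.to_datetime(["2024-01-15"]),
    "label": [1],
}).sort_values("ts")

# direction="backward" + allow_exact_matches=False means each label row
# only sees feature rows with ts strictly earlier than the label ts.
train = pd.merge_asof(
    labels, features, on="ts", by="user",
    direction="backward", allow_exact_matches=False,
)
# The 2024-01-15 example gets spend_30d=10.0, not the future 50.0 value.
```

The key property is that the join condition is time-aware: the later feature row never contaminates the earlier training example.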
20
Q

What is target leakage via look-ahead windows?

A

When feature windows accidentally extend beyond the label event time, effectively including information about the outcome in inputs.

21
Q

What is a label generation pipeline?

A

A pipeline that derives target values from raw events or transactional data, applying definitions and time windows consistently.

22
Q

Why should label definitions be documented and versioned?

A

Changing label logic over time can alter what the model is learning; versioning makes experiments reproducible and auditable.

23
Q

What is a positive-unlabeled (PU) situation in labels?

A

A setting where positive cases are labeled but negatives are mixed with unlabeled and potentially positive cases, common in fraud or anomaly detection.

24
Q

Why is sampling important for ML training datasets?

A

Full historical data may be too large or imbalanced; sampling can improve training efficiency and class balance while preserving key patterns.

25

Q

What is stratified sampling?

A

Sampling that preserves the proportion of classes or important subgroups, improving representativeness of the training set.
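As a supplement to this card: a minimal stdlib-only sketch of stratified sampling, drawing the same fraction from each class so class proportions are preserved. The function name and fields are illustrative.

```python
from collections import Counter
import random

def stratified_sample(rows, label_key, frac, seed=0):
    """Sample a fixed fraction from each class independently."""
    rng = random.Random(seed)
    by_class = {}
    for r in rows:
        by_class.setdefault(r[label_key], []).append(r)
    sample = []
    for cls, members in by_class.items():
        k = max(1, round(len(members) * frac))
        sample.extend(rng.sample(members, k))
    return sample

# 90:10 imbalanced data; a 20% stratified sample keeps the 9:1 ratio.
data = [{"y": 0} for _ in range(90)] + [{"y": 1} for _ in range(10)]
s = stratified_sample(data, "y", frac=0.2)
counts = Counter(r["y"] for r in s)  # 18 negatives, 2 positives
```

Libraries such as scikit-learn offer the same idea via a `stratify` argument to their splitting utilities.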
26

Q

Why must sampling decisions be recorded?

A

They affect metrics and reproducibility; future comparisons and retraining must know how data was sampled.
27

Q

What is feature drift in ML systems?

A

A change over time in the distribution of input features compared to the training data.

28

Q

What is concept drift in ML systems?

A

A change over time in the relationship between features and labels, such that the original model becomes less appropriate.

29

Q

How can feature drift be detected?

A

By monitoring feature distributions over time and comparing them to historical baselines, using statistical tests or divergence measures.
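As a supplement to this card: one widely used divergence measure is the Population Stability Index (PSI), computed over shared histogram bins of a feature. A common rule of thumb (an industry convention, not a formal threshold) is PSI below 0.1 for stable, 0.1 to 0.25 for moderate shift, and above 0.25 for significant drift.

```python
import math

def psi(baseline_counts, current_counts, eps=1e-6):
    """Population Stability Index between two binned distributions."""
    b_total = sum(baseline_counts)
    c_total = sum(current_counts)
    score = 0.0
    for b, c in zip(baseline_counts, current_counts):
        p = max(b / b_total, eps)  # baseline bin proportion
        q = max(c / c_total, eps)  # current bin proportion
        score += (q - p) * math.log(q / p)
    return score

# Identical distributions give 0; a reversed distribution scores high.
stable = psi([50, 30, 20], [50, 30, 20])
drifted = psi([50, 30, 20], [20, 30, 50])
```

Two-sample tests such as Kolmogorov-Smirnov serve a similar purpose for continuous features.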
30

Q

How can label distribution drift be detected?

A

By monitoring label proportions and outcome rates over time, when labels are available, and comparing to training distributions.

31

Q

What is a retraining pipeline?

A

An orchestrated process that periodically or conditionally rebuilds a model using recent data, evaluates it, and, if acceptable, promotes it.

32

Q

Why must retraining pipelines be automated?

A

Manual retraining is error-prone and slow; automation ensures models stay current and reduces operational overhead.

33

Q

What is a model registry conceptually?

A

A system that stores model artifacts, metadata, and versions, supporting promotion from staging to production.

34

Q

What metadata should be tracked per model version?

A

Training data snapshot, feature and label definitions, hyperparameters, metrics, code version, and deployment history.

35

Q

Why is reproducibility important in ML data pipelines?

A

To debug issues, audit model decisions, compare versions, and comply with regulations or internal governance.

36

Q

What is a feature importance analysis used for in data engineering?

A

To understand which features are most predictive, guiding feature pipeline investment and data quality prioritization.

37

Q

Why is feature availability and latency as important as predictive power?

A

A highly predictive feature that is slow or unavailable at inference time may not be usable in production settings.

38

Q

What is feature freshness in an ML context?

A

How up-to-date feature values are relative to when predictions are made, critical for time-sensitive models.
39

Q

How can batch feature pipelines support low-latency serving?

A

By precomputing features on a schedule and storing them in fast key-value stores or caches for quick lookup at inference time.
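As a supplement to this card: a minimal sketch of the precompute-then-lookup pattern. A plain dict stands in for a real key-value store such as Redis or DynamoDB; the function and field names are illustrative.

```python
# In-memory stand-in for an online key-value feature store.
feature_store: dict[str, dict] = {}

def batch_refresh(daily_rows: list[dict]) -> None:
    """Scheduled batch job: recompute and overwrite features per entity key."""
    for row in daily_rows:
        feature_store[row["user_id"]] = {
            "txn_count_7d": row["txn_count_7d"],
            "avg_amount_7d": row["avg_amount_7d"],
        }

def get_features(user_id: str) -> dict:
    """Serving path: constant-time lookup, with defaults for unseen entities."""
    return feature_store.get(user_id, {"txn_count_7d": 0, "avg_amount_7d": 0.0})
```

The serving path does no computation at request time; freshness is bounded by the batch schedule, which is exactly the tradeoff card 41 describes.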
40

Q

What is online feature computation?

A

Computing features on the fly at request time from recent events or streaming data, balancing freshness with latency constraints.

41

Q

What is the tradeoff between precomputed and on-demand features?

A

Precomputed features reduce inference latency but may be less fresh; on-demand features are more current but can increase latency and complexity.

42

Q

What is a feature view or feature definition object?

A

A declarative specification of how to compute a feature set from underlying tables or streams, used by feature stores.
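As a supplement to this card: a simplified, hypothetical structure that mirrors the idea behind feature-store "feature view" objects, holding source metadata, the entity key, and the transformation separately from any model code. It is not a specific library's API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class FeatureView:
    name: str
    entity_key: str                    # join key, e.g. "user_id"
    source_table: str                  # where the raw data lives
    transform: Callable[[dict], dict]  # raw row -> feature dict

# Example definition: a pipeline or feature store could read this object
# to know what to compute, from where, and keyed by which entity.
user_spend = FeatureView(
    name="user_spend_features",
    entity_key="user_id",
    source_table="warehouse.transactions",
    transform=lambda row: {"sqrt_spend": max(row["spend"], 0) ** 0.5},
)
```

Because the definition is data rather than code scattered across models, the same view can back both offline backfills and online serving.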
43

Q

Why is decoupling feature definitions from model code beneficial?

A

It enables reuse across models, standardizes feature logic, and reduces the risk of inconsistent implementations.

44

Q

What is A/B testing in the context of ML deployment?

A

Running two or more model versions in parallel on different subsets of traffic to compare performance and business impact.

45

Q

Why do A/B tests depend on solid data engineering?

A

Because reliable tracking of exposures, predictions, and outcomes is required to attribute effects correctly and avoid bias.

46

Q

What is logging in an ML pipeline context?

A

Recording inputs, predictions, model version, and possibly outcomes for each inference to support monitoring, debugging, and retraining.
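As a supplement to this card: a minimal sketch of one structured log record per inference. Field names are illustrative; note that the derived features are logged rather than raw sensitive inputs, anticipating the concern in the next card.

```python
import json
from datetime import datetime, timezone

def log_prediction(model_version: str, features: dict, prediction: float) -> str:
    """Build one structured record per inference for monitoring and retraining."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,     # derived features, not raw sensitive inputs
        "prediction": prediction,
    }
    return json.dumps(record)      # in practice, written to a log stream
```

Joining these records later with observed outcomes by entity and time yields labeled examples for the next retraining cycle.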
47

Q

Why must logging avoid leaking sensitive data unnecessarily?

A

Logs often have broad access and long retention; logging too much sensitive data increases risk and compliance burden.

48

Q

What is a good one-sentence mental model for ML data engineering?

A

Design feature and label pipelines that are point-in-time correct, reusable across training and serving, and continuously monitored for drift and quality.