Data Cleaning + Preparation Flashcards

(30 cards)

1
Q

What is data quality and list key quality dimensions.

A

Data quality measures fitness for purpose. Key dimensions: accuracy (correct values), completeness (no missing data), consistency (uniform format), timeliness (current), and validity (conforms to format rules).
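A minimal pandas sketch of measuring two of these dimensions — completeness and validity — on a small hypothetical customer table (column names and rules are illustrative):

```python
import pandas as pd

# Hypothetical sample with deliberate quality problems.
df = pd.DataFrame({
    "email": ["a@example.com", None, "not-an-email"],
    "age": [34, 29, -5],
})

# Completeness: share of non-missing values per column.
completeness = df.notna().mean()

# Validity: values conforming to simple format/range rules.
valid_email = df["email"].str.contains("@", na=False)
valid_age = df["age"].between(0, 120)
```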

2
Q

How would you approach data cleaning?

A

Data cleaning involves: identifying missing values (handle via imputation or removal), detecting outliers, standardizing formats, removing duplicates, correcting errors, validating against business rules, and documenting all changes.
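The steps above can be sketched as a small pandas pipeline (hypothetical data; imputation vs. removal and the validation rules are judgment calls, not fixed):

```python
import pandas as pd

# Hypothetical raw extract exhibiting the issues listed above.
raw = pd.DataFrame({
    "name": [" Alice ", "Bob", "Bob", None],
    "amount": ["10", "20", "20", "x40"],
})

cleaned = (
    raw.assign(name=raw["name"].str.strip())   # standardize formats
       .drop_duplicates()                       # remove duplicates
       .assign(amount=lambda d: pd.to_numeric(
           d["amount"], errors="coerce"))       # flag bad values as NaN
       .dropna(subset=["name"])                 # handle missing values
)
```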

3
Q

What is data validation and how does it differ from verification?

A

Validation checks if data meets required quality standards before use. Verification confirms data accurately represents source information. Validation asks ‘is it acceptable?’ while verification asks ‘is it correct?’

4
Q

What are the main causes of missing data?

A

Causes include: data entry errors, equipment failure, non-response in surveys, system outages, and intentional removal. Understanding cause determines best handling strategy and whether analysis remains valid.

5
Q

Describe methods for handling missing data.

A

Methods include: deletion (remove rows/columns), mean/median imputation, forward/backward fill for time series, or advanced techniques like k-NN or model-based imputation. Choose based on percentage missing and analysis type.
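The three simplest methods side by side, as a pandas sketch (toy series; k-NN and model-based imputation need extra machinery and are omitted):

```python
import pandas as pd
import numpy as np

s = pd.Series([1.0, np.nan, 3.0, np.nan, 5.0])

dropped = s.dropna()              # deletion
mean_filled = s.fillna(s.mean())  # mean imputation (mean is 3.0 here)
ffilled = s.ffill()               # forward fill, for ordered/time-series data
```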

6
Q

Explain how you would identify duplicate records in a table.

A

Use GROUP BY with HAVING COUNT(*) > 1 to find duplicated keys, or a window function such as ROW_NUMBER() OVER (PARTITION BY key_columns ORDER BY id) to flag every occurrence after the first — order by a tiebreaker like an id or load timestamp, not by the partition columns themselves, or the numbering among duplicates is arbitrary. Then investigate root causes and decide whether to keep or remove.
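The same two techniques in pandas, assuming a hypothetical `customer_id` key:

```python
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "amount": [10, 10, 20, 30, 31, 32],
})

# GROUP BY ... HAVING COUNT(*) > 1 equivalent: keys appearing more than once.
counts = df.groupby("customer_id").size()
dup_keys = counts[counts > 1].index.tolist()

# ROW_NUMBER()-style flag: mark every occurrence after the first.
df["is_dup"] = df.duplicated(subset=["customer_id"], keep="first")
```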

7
Q

Explain how you would reconcile data from multiple sources.

A

Reconciliation involves: identifying key fields for matching, loading data from both sources, joining on keys, comparing values, investigating discrepancies, identifying duplicates or missing records, and documenting differences.
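A pandas sketch of the join-and-compare step, using `merge` with `indicator=True` to surface records present in only one source (toy data, hypothetical `id`/`amount` fields):

```python
import pandas as pd

src_a = pd.DataFrame({"id": [1, 2, 3], "amount": [100, 200, 300]})
src_b = pd.DataFrame({"id": [2, 3, 4], "amount": [200, 350, 400]})

recon = src_a.merge(src_b, on="id", how="outer",
                    suffixes=("_a", "_b"), indicator=True)

# Records present in only one source:
only_one_side = recon[recon["_merge"] != "both"]

# Matched records whose values disagree:
mismatched = recon[(recon["_merge"] == "both") &
                   (recon["amount_a"] != recon["amount_b"])]
```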

8
Q

What is data standardization and why is it important?

A

Standardization ensures consistent format, units, and representation across data. Examples: standardizing date formats (YYYY-MM-DD), converting all text to lowercase, using consistent units (e.g., converting pounds to kilograms). Prevents analysis errors.

9
Q

How would you handle inconsistent data types?

A

Identify type mismatches, convert columns to the correct types using CAST or CONVERT, validate that the conversions succeeded, quarantine or correct values that fail to convert, and document all type changes for reproducibility.
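In pandas terms (a sketch with a hypothetical `price` column), `errors="coerce"` plays the role of a safe CAST: unconvertible values become NaN instead of raising, so failures can be inspected and handled deliberately:

```python
import pandas as pd

df = pd.DataFrame({"price": ["19.99", "5", "n/a"]})

# Coerce instead of raising, then isolate the rows that failed.
df["price_num"] = pd.to_numeric(df["price"], errors="coerce")
failed = df[df["price_num"].isna()]
```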

10
Q

Explain the concept of data imputation.

A

Imputation fills missing values with estimated values. Methods: mean/median/mode, forward/backward fill, regression-based, or k-NN. Choose based on missing data mechanism, percentage missing, and impact on analysis.
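One step beyond global mean/median: impute within groups, so estimates reflect each segment's own distribution. A pandas sketch with a hypothetical `segment`/`income` table:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "segment": ["a", "a", "a", "b", "b"],
    "income": [50.0, np.nan, 70.0, 90.0, np.nan],
})

# Fill each missing income with the median of its own segment.
df["income"] = df.groupby("segment")["income"].transform(
    lambda s: s.fillna(s.median())
)
```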

11
Q

What is feature engineering and its importance in data preparation?

A

Feature engineering creates new variables from existing data to improve model performance. Examples: creating age groups from birth date, deriving ratios, encoding categorical variables. Often provides largest performance gains.
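The age-group example as a pandas sketch (dates and bin edges are hypothetical; age is approximated in whole years):

```python
import pandas as pd

df = pd.DataFrame({
    "birth_date": pd.to_datetime(["1990-06-01", "1975-01-15", "2005-03-20"]),
})

# Derive age from birth date, then bucket it into a new categorical feature.
ref = pd.Timestamp("2024-01-01")
df["age"] = (ref - df["birth_date"]).dt.days // 365
df["age_group"] = pd.cut(df["age"], bins=[0, 18, 40, 65, 120],
                         labels=["minor", "young_adult", "middle_aged", "senior"])
```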

12
Q

How would you handle categorical variables in analysis?

A

Options include: one-hot encoding (create binary columns), label encoding (assign integers), target encoding (encode by target mean), ordinal encoding (for ordered categories). Choice depends on modeling technique used.
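One-hot and ordinal encoding in pandas (toy data; the ordinal mapping is an assumed ordering you must define yourself):

```python
import pandas as pd

df = pd.DataFrame({"color": ["red", "blue", "red"]})

# One-hot: one binary column per category.
onehot = pd.get_dummies(df["color"], prefix="color")

# Ordinal: explicit integer mapping for ordered categories.
size_order = {"small": 0, "medium": 1, "large": 2}
sizes = pd.Series(["small", "large", "medium"]).map(size_order)
```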

13
Q

Explain the concept of data profiling.

A

Data profiling examines data to understand its characteristics. Includes: checking distributions, identifying missing values, detecting outliers, validating formats, and understanding relationships. Foundation for effective cleaning.
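A first-pass profile in three pandas calls (hypothetical columns):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({"x": [1, 2, 2, np.nan], "y": ["a", "b", "b", "b"]})

missing = df.isna().sum()           # missing values per column
stats = df.describe(include="all")  # summary statistics / distributions
cardinality = df.nunique()          # distinct values per column
```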

14
Q

What is data lineage and why does it matter?

A

Data lineage traces data origin through transformations to final use. Important for: debugging data issues, ensuring reproducibility, compliance, understanding dependencies, and impact analysis of changes.

15
Q

How would you detect and handle data anomalies?

A

Detect using: statistical methods (z-scores, IQR), business rules, visual inspection. Handle by: investigating cause, validating accuracy, deciding whether to remove/keep/adjust, and documenting decisions.
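The IQR rule as a pandas sketch (toy series; the 1.5×IQR multiplier is the conventional default, not a law):

```python
import pandas as pd

s = pd.Series([10, 12, 11, 13, 12, 11, 95])

# Flag values beyond 1.5 * IQR from the quartiles.
q1, q3 = s.quantile(0.25), s.quantile(0.75)
iqr = q3 - q1
outliers = s[(s < q1 - 1.5 * iqr) | (s > q3 + 1.5 * iqr)]
```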

16
Q

Explain the concept of data governance.

A

Data governance establishes policies, procedures, and controls for data management. Includes: defining data ownership, ensuring quality, managing access, establishing standards, and enforcing compliance.

17
Q

What is the difference between data cleansing and data wrangling?

A

Cleansing focuses on error correction and standardization. Wrangling covers all data preparation: cleaning, transforming, and reshaping data to make it suitable for analysis. Wrangling is the broader concept.

18
Q

How would you handle outliers in your dataset?

A

Options: investigate cause (data error vs. legitimate extreme), visualize to understand impact, use robust statistics, remove if erroneous, keep if valid and meaningful, or analyze separately.

19
Q

Explain the concept of data transformation.

A

Transformation changes how data is represented: scaling (normalization), encoding, creating new variables, aggregating, reshaping. Important for meeting algorithm assumptions, improving interpretability, and making relationships between variables easier to detect.

20
Q

What is binning and when would you use it?

A

Binning groups continuous values into intervals/categories. Use for: reducing noise, creating categorical features, improving interpretability, improving model performance. Methods: equal-width, equal-frequency, custom.
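All three binning methods in pandas (toy ages; labels and edges are illustrative):

```python
import pandas as pd

ages = pd.Series([5, 17, 25, 40, 80])

equal_width = pd.cut(ages, bins=4)   # equal-width intervals
equal_freq = pd.qcut(ages, q=4)      # equal-frequency (quartile) intervals
custom = pd.cut(ages, bins=[0, 18, 65, 120],
                labels=["child", "adult", "senior"])
```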

21
Q

How would you handle time zone issues in temporal data?

A

Standardize all data to single time zone (typically UTC), document original time zones, be aware of daylight saving transitions, handle timestamp consistency across systems, and validate time calculations.
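A pandas sketch of the standardize-to-UTC step, assuming the timestamps were recorded as New York wall-clock time; note the DST transition is handled automatically (the two 09:00 readings map to different UTC hours):

```python
import pandas as pd

ts = pd.Series(pd.to_datetime(["2024-01-15 09:00", "2024-07-15 09:00"]))

localized = ts.dt.tz_localize("America/New_York")  # attach the original zone
utc = localized.dt.tz_convert("UTC")               # standardize to UTC
```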

22
Q

Explain the concept of data normalization vs standardization.

A

Normalization scales values to 0-1 range (or similar). Standardization (z-score) scales to mean 0, std dev 1. Both remove units of measurement. Choose based on algorithm requirements and interpretability needs.
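Both rescalings written out directly (toy series; `ddof=0` gives the population standard deviation so the result has exactly unit spread):

```python
import pandas as pd

s = pd.Series([10.0, 20.0, 30.0])

normalized = (s - s.min()) / (s.max() - s.min())  # min-max, into [0, 1]
standardized = (s - s.mean()) / s.std(ddof=0)     # z-score: mean 0, std 1
```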

23
Q

What is the importance of documenting data cleaning steps?

A

Documentation enables: reproducibility, understanding decisions made, identifying errors, enabling collaboration, facilitating audits, and supporting knowledge transfer. Essential for data integrity and credibility.

24
Q

How would you validate that your data cleaning was successful?

A

Validate by: checking completeness (no missing values), verifying distributions make sense, spot-checking cleaned data against original, running business logic validations, comparing metrics before/after cleaning.
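Business-logic validations can be made executable so a regression fails loudly; a sketch with hypothetical columns and rules:

```python
import pandas as pd

cleaned = pd.DataFrame({
    "age": [25, 40, 31],
    "email": ["a@example.com", "b@example.com", "c@example.com"],
})

# Post-cleaning checks: raise if any rule is violated.
assert cleaned["age"].notna().all(), "missing ages remain"
assert cleaned["age"].between(0, 120).all(), "age outside business range"
assert cleaned["email"].str.contains("@").all(), "malformed email"
```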

25
Q

Explain the concept of data enrichment.

A

Enrichment adds new data from external sources to enhance existing data. Examples: adding demographic data, incorporating economic indicators, appending geographic information. Improves analysis potential.
26
Q

What is the difference between structured and unstructured data?

A

Structured data has defined format/schema (databases, CSVs). Unstructured data has no predefined structure (text, images, video). Unstructured requires parsing, extraction, and transformation before analysis.
27
Q

How would you approach cleaning large datasets?

A

Sample data first for analysis, develop scripts for reusability, use efficient tools/databases, implement data validation checks, document all transformations, test on a subset before the full run, and handle errors gracefully.
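One concrete efficiency tactic is chunked processing, so the full dataset never sits in memory at once. A pandas sketch (an in-memory CSV stands in for a large file on disk):

```python
import io
import pandas as pd

# Simulate a large CSV; in practice this would be a file path.
csv = io.StringIO("value\n1\n2\n3\n4\n5\n")

total = 0
for chunk in pd.read_csv(csv, chunksize=2):  # stream in small chunks
    chunk = chunk[chunk["value"] > 1]        # per-chunk validation/filtering
    total += chunk["value"].sum()
```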
28
Q

Explain the concept of master data management (MDM).

A

MDM creates a single authoritative source for critical data (customers, products, vendors). Ensures consistency across systems, reduces redundancy, improves data quality, and enables better integration and analytics.
29
Q

What is exploratory data analysis (EDA) in the context of preparation?

A

EDA involves investigating data structure, distributions, and relationships before formal analysis. Includes: visualizations, summary statistics, correlation checks, anomaly detection. Informs cleaning decisions and reveals issues.
30
Q

How would you handle class imbalance in classification datasets?

A

Methods: undersampling the majority class, oversampling the minority class, synthetic generation (SMOTE), adjusted class weights, different evaluation metrics (AUC vs accuracy), ensemble methods designed for imbalance.